build: modify dependencies and introduce an external library

This commit is contained in:
2026-03-27 19:31:40 +08:00
parent 230a76fe27
commit f7c072dd0b
12 changed files with 1915 additions and 216 deletions

README.md

@@ -6,60 +6,6 @@
## About this repository
This repository is the core of the "潜进" software group project. It contains the core functional modules and a basic user interface built on the Textual framework (heurams.interface).
Besides studying through the user interface, you can also import the `heurams` library in Python and use its state machines, algorithm iterators, and data models to build memory-assistance features.
The source code of this repository is released under AGPLv3 (see the LICENSE file).
## Changelog
### 0.0.x
- Minimal prototype with a simple scheduler
### 0.1.x
- Command-line scheduler
### 0.2.x
- Rich terminal user interface built with Textual
- Project feasibility validation
- Original SM-2 algorithm; prototype evaluated via user self-assessment
### 0.3.x Frontal
- Simple multi-file project
- Data structures for memory content and algorithms
- Automatic review assessment based on an improved SM-2 algorithm
- Focus on classical Chinese poetry memorization and comprehension features
- TUI improvements
- Simple TTS integration
### 0.4.x Fledge
- Development goal shifted to general-purpose use
- Decoupled design via module management
- Added documentation and type annotations
- Implicit dependency injection via the context design pattern, following IoC, with registry-based algorithm and feature implementations
- Support for additional scheduling algorithm modules (SM-2, an SM-18M reference-theory variant, FSRS) and puzzle modules
- Standardized log-based debugging replacing Textual Devtools
- Updated data persistence protocol specification
- Dynamic data schema (macro-driven dynamic content generation) and file-based policy control
- Better user data handling
- Modular extension integration
- New algorithm data format for better performance
- Provider-service abstraction architecture with switchable service providers
- Overall compatibility improvements
### 0.5.x Fulcrum
- Repository objects bridge the file system and runtime objects, improving decoupling and performance
- "Lict" objects with synchronized list-dict APIs as the internal storage for Repo data
- Particle objects are pure runtime objects; data syncs to the Repo automatically via references, reducing overhead
- Audio-based review "radio" feature
- Improved data storage structure with selective persistence
- Enhanced configurability
- Reactor-family state machines reimplemented with the Transitions library for better maintainability
- Whole-text review mode alongside the queue-based review mode
- State machine snapshots (pickle-based) so interrupted review sessions can be resumed
- "Whole-article reference" feature: extract passages from a long text for memorization and view them highlighted in the original
### What's next?
- Cloud sync / document source services
- A cool modern front end built with Flutter
- ...
## Features
@@ -81,13 +27,6 @@
- Touch screen, mouse, and keyboard input modes
- Clean and intuitive review flow
### Architecture features
- Modular design: pluggable algorithms, puzzles, and service providers
- Context management: implicit dependency injection via ContextVar
- Data persistence: TOML for configuration and content, JSON for algorithm state
- Service abstraction: audio playback, TTS, and LLM support multiple back ends via the provider architecture
- Complete logging system: rotating logs for easier debugging
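The ContextVar-based implicit dependency injection mentioned above can be sketched roughly as follows. This is a minimal standalone illustration, not the actual heurams API; the names `current_config`, `use_config`, and `tts_rate` are hypothetical:

```python
from contextvars import ContextVar
from contextlib import contextmanager

# Hypothetical context variable holding the active configuration.
current_config: ContextVar[dict] = ContextVar("current_config")

@contextmanager
def use_config(cfg: dict):
    token = current_config.set(cfg)   # push the dependency into the context
    try:
        yield
    finally:
        current_config.reset(token)   # restore the previous value on exit

def tts_rate() -> int:
    # Consumers read the dependency implicitly, without parameter threading.
    return current_config.get()["tts_rate"]

with use_config({"tts_rate": 180}):
    assert tts_rate() == 180
```

The context manager pattern keeps the injected value scoped: once the `with` block exits, the previous context value is restored, which also plays well with async tasks since each task sees its own copy of the context.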
## Installation
### Install from source
@@ -107,13 +46,17 @@
pip install -e .
```
### Install from a package manager
Not available yet :(
## Launching the app
```bash
# Run in any directory (preferably an empty one or the package root; it will be used for data storage)
python -m heurams.interface
```
The configuration file lives at `./data/config/config.toml` (relative to the working directory).
If it does not exist, the built-in default configuration is used.
## Project structure
@@ -178,38 +121,25 @@ graph TB
Algorithms --> Files
```
### Directory layout (pending update for 0.5.0)
```
src/heurams/
├── __init__.py # Package entry point
├── context.py # Global context, paths, and configuration context managers
├── services/ # Core services
│ ├── config.py # Configuration management
│ ├── logger.py # Logging system
│ ├── timer.py # Time service
│ ├── audio_service.py # Audio playback abstraction
│ ├── tts_service.py # Text-to-speech abstraction
│ └── sync_service.py # WebDAV sync service
├── kernel/ # Core business logic
│ ├── algorithms/ # Spaced-repetition algorithms (FSRS, SM2)
│ ├── particles/ # Data models (Atom, Electron, Nucleon, Orbital)
│ ├── puzzles/ # Puzzle types (MCQ, cloze, recognition)
│ └── reactor/ # Scheduling and processing logic
├── providers/ # External service providers
│ ├── audio/ # Audio playback implementations
│ ├── tts/ # Text-to-speech implementations
│ └── llm/ # LLM integration
├── interface/ # Textual TUI
│ ├── widgets/ # UI widgets
│ ├── screens/ # Application screens
│ └── __main__.py # Application entry point
└── default/ # Default configuration and data templates
```
## Contributing
Contributions are welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for contribution guidelines.
## License
### The project itself
This project is open-sourced under the AGPL-3.0 license. See the [LICENSE](LICENSE) file in the repository root.
### Third-party code
The project embeds the following third-party code (possibly modified) under `src/heurams/vendor/`:
#### py-fsrs
- Upstream version: 6.3.1
- Location: `src/heurams/vendor/pyfsrs/`
- Original project: [py-fsrs](https://github.com/open-spaced-repetition/py-fsrs)
- Copyright: Copyright (c) 2022 Open Spaced Repetition
- Original license: MIT License, see `src/heurams/vendor/pyfsrs/LICENSE`
This project benefits from their generous and excellent work


@@ -13,7 +13,6 @@ dependencies = [
"openai==1.0.0", "openai==1.0.0",
"playsound==1.2.2", "playsound==1.2.2",
"psutil>=7.2.1", "psutil>=7.2.1",
"pygobject>=3.54.5",
"tabulate>=0.9.0", "tabulate>=0.9.0",
"textual==7.0.0", "textual==7.0.0",
"toml==0.10.2", "toml==0.10.2",
@@ -27,8 +26,3 @@ tui = "heurams.interface.__main__:main"
[build-system]
requires = ["uv_build>=0.9.22,<0.10.0"]
build-backend = "uv_build"
[dependency-groups]
dev = [
"flet>=0.80.1",
]

src/heurams/vendor/pyfsrs/LICENSE vendored Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2022 Open Spaced Repetition
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

src/heurams/vendor/pyfsrs/__init__.py vendored Normal file

@@ -0,0 +1,29 @@
"""
py-fsrs
-------
Py-FSRS is the official Python implementation of the FSRS scheduler algorithm, which can be used to develop spaced repetition systems.
"""
from fsrs.scheduler import Scheduler
from fsrs.state import State
from fsrs.card import Card
from fsrs.rating import Rating
from fsrs.review_log import ReviewLog
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from fsrs.optimizer import Optimizer
# lazy load the Optimizer module due to heavy dependencies
def __getattr__(name: str) -> type:
if name == "Optimizer":
global Optimizer
from fsrs.optimizer import Optimizer
return Optimizer
raise AttributeError
__all__ = ["Scheduler", "Card", "Rating", "ReviewLog", "State", "Optimizer"]
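The lazy-loading trick above relies on PEP 562 module-level `__getattr__`: the heavy `Optimizer` import is deferred until the attribute is first accessed. A minimal standalone sketch of the mechanism, using a synthetic module and a cheap stand-in object instead of the real import:

```python
import types

# Hypothetical module emulating the pattern in pyfsrs/__init__.py.
mod = types.ModuleType("demo")
mod.eager = "cheap"  # ordinary attribute, no lazy path involved

def _lazy_getattr(name: str):
    if name == "Optimizer":
        value = object()               # stand-in for the heavy import
        setattr(mod, "Optimizer", value)  # cache so __getattr__ runs once
        return value
    raise AttributeError(name)

# Attribute lookup on a module falls back to __getattr__ in its namespace.
mod.__getattr__ = _lazy_getattr

first = mod.Optimizer          # triggers _lazy_getattr on first access
assert mod.Optimizer is first  # subsequent lookups hit the cached value
```

Caching the result via `setattr` matches the `global Optimizer` assignment in the vendored code: after the first access, the attribute lives in the module namespace and `__getattr__` is never consulted again.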

src/heurams/vendor/pyfsrs/card.py vendored Normal file

@@ -0,0 +1,167 @@
"""
fsrs.card
---------
This module defines the Card and State classes.
Classes:
Card: Represents a flashcard in the FSRS system.
State: Enum representing the learning state of a Card object.
"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone
import time
import json
from typing import TypedDict
from typing_extensions import Self
from fsrs.state import State
class CardDict(TypedDict):
"""
JSON-serializable dictionary representation of a Card object.
"""
card_id: int
state: int
step: int | None
stability: float | None
difficulty: float | None
due: str
last_review: str | None
@dataclass(init=False)
class Card:
"""
Represents a flashcard in the FSRS system.
Attributes:
card_id: The id of the card. Defaults to the epoch milliseconds of when the card was created.
state: The card's current learning state.
step: The card's current learning or relearning step or None if the card is in the Review state.
stability: Core mathematical parameter used for future scheduling.
difficulty: Core mathematical parameter used for future scheduling.
due: The date and time when the card is due next.
last_review: The date and time of the card's last review.
"""
card_id: int
state: State
step: int | None
stability: float | None
difficulty: float | None
due: datetime
last_review: datetime | None
def __init__(
self,
card_id: int | None = None,
state: State = State.Learning,
step: int | None = None,
stability: float | None = None,
difficulty: float | None = None,
due: datetime | None = None,
last_review: datetime | None = None,
) -> None:
if card_id is None:
# epoch milliseconds of when the card was created
card_id = int(datetime.now(timezone.utc).timestamp() * 1000)
# wait 1ms to prevent potential card_id collision on next Card creation
time.sleep(0.001)
self.card_id = card_id
self.state = state
if self.state == State.Learning and step is None:
step = 0
self.step = step
self.stability = stability
self.difficulty = difficulty
if due is None:
due = datetime.now(timezone.utc)
self.due = due
self.last_review = last_review
def to_dict(self) -> CardDict:
"""
Returns a dictionary representation of the Card object.
Returns:
CardDict: A dictionary representation of the Card object.
"""
return {
"card_id": self.card_id,
"state": self.state.value,
"step": self.step,
"stability": self.stability,
"difficulty": self.difficulty,
"due": self.due.isoformat(),
"last_review": self.last_review.isoformat() if self.last_review else None,
}
@classmethod
def from_dict(cls, source_dict: CardDict) -> Self:
"""
Creates a Card object from an existing dictionary.
Args:
source_dict: A dictionary representing an existing Card object.
Returns:
Self: A Card object created from the provided dictionary.
"""
return cls(
card_id=int(source_dict["card_id"]),
state=State(int(source_dict["state"])),
step=source_dict["step"],
stability=(
float(source_dict["stability"]) if source_dict["stability"] else None
),
difficulty=(
float(source_dict["difficulty"]) if source_dict["difficulty"] else None
),
due=datetime.fromisoformat(source_dict["due"]),
last_review=(
datetime.fromisoformat(source_dict["last_review"])
if source_dict["last_review"]
else None
),
)
def to_json(self, indent: int | str | None = None) -> str:
"""
Returns a JSON-serialized string of the Card object.
Args:
indent: Equivalent argument to the indent in json.dumps()
Returns:
str: A JSON-serialized string of the Card object.
"""
return json.dumps(self.to_dict(), indent=indent)
@classmethod
def from_json(cls, source_json: str) -> Self:
"""
Creates a Card object from a JSON-serialized string.
Args:
source_json: A JSON-serialized string of an existing Card object.
Returns:
Self: A Card object created from the JSON string.
"""
source_dict: CardDict = json.loads(source_json)
return cls.from_dict(source_dict=source_dict)
__all__ = ["Card"]
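The `to_dict`/`from_dict`/`to_json` round trip above follows a common serialization pattern: datetimes travel as ISO-8601 strings, enums as their integer values. A minimal standalone sketch of the same pattern, using a hypothetical `Note` class rather than the real `Card`:

```python
from __future__ import annotations
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Note:
    """Hypothetical stand-in illustrating Card's serialization style."""
    note_id: int
    due: datetime

    def to_dict(self) -> dict:
        # datetimes are stored as ISO-8601 strings, which JSON can carry
        return {"note_id": self.note_id, "due": self.due.isoformat()}

    @classmethod
    def from_dict(cls, d: dict) -> "Note":
        return cls(note_id=int(d["note_id"]),
                   due=datetime.fromisoformat(d["due"]))

n = Note(1, datetime(2025, 1, 1, tzinfo=timezone.utc))
restored = Note.from_dict(json.loads(json.dumps(n.to_dict())))
assert restored == n  # timezone-aware datetime survives the round trip
```

Keeping `from_dict` tolerant of string-encoded numbers (the `int(...)`/`float(...)` casts in the vendored code) makes the format robust against serializers that stringify values.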

src/heurams/vendor/pyfsrs/optimizer.py vendored Normal file

@@ -0,0 +1,674 @@
"""
fsrs.optimizer
---------
This module defines the optional Optimizer class.
"""
from fsrs.card import Card
from fsrs.review_log import ReviewLog, Rating
from fsrs.scheduler import (
Scheduler,
DEFAULT_PARAMETERS,
LOWER_BOUNDS_PARAMETERS,
UPPER_BOUNDS_PARAMETERS,
)
import math
from datetime import datetime, timezone
from copy import deepcopy
from random import Random
from statistics import mean
try:
import torch
from torch.nn import BCELoss
from torch import optim
import pandas as pd
from tqdm import tqdm
# weight clipping
LOWER_BOUNDS_PARAMETERS_TENSORS = torch.tensor(
LOWER_BOUNDS_PARAMETERS,
dtype=torch.float64,
)
UPPER_BOUNDS_PARAMETERS_TENSORS = torch.tensor(
UPPER_BOUNDS_PARAMETERS,
dtype=torch.float64,
)
# hyper parameters
num_epochs = 5
mini_batch_size = 512
learning_rate = 4e-2
max_seq_len = (
64 # up to the first 64 reviews of each card are used for optimization
)
class Optimizer:
"""
The FSRS optimizer.
Enables the optimization of FSRS scheduler parameters from existing review logs for more accurate interval calculations.
Attributes:
review_logs: A collection of previous ReviewLog objects from a user.
_revlogs_train: The collection of review logs, sorted and formatted for optimization.
"""
review_logs: tuple[ReviewLog, ...]
_revlogs_train: dict
def __init__(
self, review_logs: tuple[ReviewLog, ...] | list[ReviewLog]
) -> None:
"""
Initializes the Optimizer with a set of ReviewLogs. Also formats a copy of the review logs for optimization.
Note that the ReviewLogs provided by the user don't need to be in order.
"""
def _format_revlogs() -> dict:
"""
Sorts and converts the tuple of ReviewLog objects to a dictionary format for optimizing
"""
revlogs_train = {}
for review_log in self.review_logs:
# pull data out of current ReviewLog object
card_id = review_log.card_id
rating = review_log.rating
review_datetime = review_log.review_datetime
review_duration = review_log.review_duration
# if the card was rated Again, it was not recalled
recall = 0 if rating == Rating.Again else 1
# as a ML problem, [x, y] = [ [review_datetime, rating, review_duration], recall ]
datum = [[review_datetime, rating, review_duration], recall]
if card_id not in revlogs_train:
revlogs_train[card_id] = []
revlogs_train[card_id].append((datum))
revlogs_train[card_id] = sorted(
revlogs_train[card_id], key=lambda x: x[0][0]
) # keep reviews sorted
# sort the dictionary in order of when each card history starts
revlogs_train = dict(sorted(revlogs_train.items()))
return revlogs_train
self.review_logs = deepcopy(tuple(review_logs))
# format the ReviewLog data for optimization
self._revlogs_train = _format_revlogs()
def _compute_batch_loss(self, *, parameters: list[float]) -> float:
"""
Computes the current total loss for the entire batch of review logs.
"""
card_ids = list(self._revlogs_train.keys())
params = torch.tensor(parameters, dtype=torch.float64)
loss_fn = BCELoss()
scheduler = Scheduler(parameters=params)
step_losses = []
for card_id in card_ids:
card_review_history = self._revlogs_train[card_id][:max_seq_len]
for i in range(len(card_review_history)):
review = card_review_history[i]
x_date = review[0][0]
y_retrievability = review[1]
u_rating = review[0][1]
if i == 0:
card = Card(card_id=card_id, due=x_date)
y_pred_retrievability = scheduler.get_card_retrievability(
card=card, current_datetime=x_date
)
y_retrievability = torch.tensor(
y_retrievability, dtype=torch.float64
)
if card.last_review and (x_date - card.last_review).days > 0:
step_loss = loss_fn(y_pred_retrievability, y_retrievability)
step_losses.append(step_loss)
card, _ = scheduler.review_card(
card=card,
rating=u_rating,
review_datetime=x_date,
review_duration=None,
)
batch_loss = torch.sum(torch.stack(step_losses))
batch_loss = batch_loss.item() / len(step_losses)
return batch_loss
def compute_optimal_parameters(self, verbose: bool = False) -> list[float]:
"""
Computes a set of optimized parameters for the FSRS scheduler and returns it as a list of floats.
High level explanation of optimization:
---------------------------------------
FSRS is a many-to-many sequence model where the "State" at each step is a Card object at a given point in time,
the input is the time of the review and the output is the predicted retrievability of the card at the time of review.
Each card's review history can be thought of as a sequence, each review as a step and each collection of card review histories
as a batch.
The loss is computed by comparing the predicted retrievability of the Card at each step with whether the Card was actually
successfully recalled or not (0/1).
Finally, the card objects at each step in their sequences are updated using the current parameters of the Scheduler
as well as the rating given to that card by the user. The parameters of the Scheduler is what is being optimized.
"""
def _num_reviews() -> int:
"""
Computes how many Review-state reviews there are in the dataset.
Only the loss from Review-state reviews count for optimization and their number must
be computed in advance to properly initialize the Cosine Annealing learning rate scheduler.
"""
scheduler = Scheduler()
num_reviews = 0
# iterate through the card review histories
card_ids = list(self._revlogs_train.keys())
for card_id in card_ids:
card_review_history = self._revlogs_train[card_id][:max_seq_len]
# iterate through the current Card's review history
for i in range(len(card_review_history)):
review = card_review_history[i]
review_datetime = review[0][0]
rating = review[0][1]
# if this is the first review, create the Card object
if i == 0:
card = Card(card_id=card_id, due=review_datetime)
# only non-same-day reviews count
if (
card.last_review
and (review_datetime - card.last_review).days > 0
):
num_reviews += 1
card, _ = scheduler.review_card(
card=card,
rating=rating,
review_datetime=review_datetime,
review_duration=None,
)
return num_reviews
def _update_parameters(
*,
step_losses: list,
adam_optimizer: torch.optim.Adam,
params: torch.Tensor,
lr_scheduler: torch.optim.lr_scheduler.CosineAnnealingLR,
) -> None:
"""
Computes and updates the current FSRS parameters based on the step losses. Also updates the learning rate scheduler.
"""
# Backpropagate through the loss
mini_batch_loss = torch.sum(torch.stack(step_losses))
adam_optimizer.zero_grad() # clear previous gradients
mini_batch_loss.backward() # compute gradients
adam_optimizer.step() # Update parameters
# clamp the weights in place without modifying the computational graph
with torch.no_grad():
params.clamp_(
min=LOWER_BOUNDS_PARAMETERS_TENSORS,
max=UPPER_BOUNDS_PARAMETERS_TENSORS,
)
# update the learning rate
lr_scheduler.step()
# set local random seed for reproducibility
rng = Random(42)
card_ids = list(self._revlogs_train.keys())
num_reviews = _num_reviews()
if num_reviews < mini_batch_size:
return list(DEFAULT_PARAMETERS)
# Define FSRS Scheduler parameters as torch tensors with gradients
params = torch.tensor(
DEFAULT_PARAMETERS, requires_grad=True, dtype=torch.float64
)
loss_fn = BCELoss()
adam_optimizer = optim.Adam([params], lr=learning_rate)
lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(
optimizer=adam_optimizer,
T_max=math.ceil(num_reviews / mini_batch_size) * num_epochs,
)
best_params = None
best_loss = math.inf
# iterate through the epochs
for _ in tqdm(
range(num_epochs),
desc="Optimizing",
unit="epoch",
disable=(not verbose),
):
# randomly shuffle the order of which Card's review histories get computed first
# at the beginning of each new epoch
rng.shuffle(card_ids)
# initialize new scheduler with updated parameters each epoch
scheduler = Scheduler(parameters=params)
# stores the computed loss of each individual review
step_losses = []
# iterate through the card review histories (sequences)
for card_id in card_ids:
card_review_history = self._revlogs_train[card_id][:max_seq_len]
# iterate through the current Card's review history (steps)
for i in range(len(card_review_history)):
review = card_review_history[i]
# input
x_date = review[0][0]
# target
y_retrievability = review[1]
# update
u_rating = review[0][1]
# if this is the first review, create the Card object
if i == 0:
card = Card(card_id=card_id, due=x_date)
# predicted target
y_pred_retrievability = scheduler.get_card_retrievability(
card=card, current_datetime=x_date
)
y_retrievability = torch.tensor(
y_retrievability, dtype=torch.float64
)
# only compute step-loss on non-same-day reviews
if card.last_review and (x_date - card.last_review).days > 0:
step_loss = loss_fn(y_pred_retrievability, y_retrievability)
step_losses.append(step_loss)
# update the card's state
card, _ = scheduler.review_card(
card=card,
rating=u_rating,
review_datetime=x_date,
review_duration=None,
)
# take a gradient step after each mini-batch
if len(step_losses) == mini_batch_size:
_update_parameters(
step_losses=step_losses,
adam_optimizer=adam_optimizer,
params=params,
lr_scheduler=lr_scheduler,
)
# update the scheduler with the new parameters
scheduler = Scheduler(parameters=params)
# clear the step losses for next batch
step_losses = []
# remove gradient history from tensor card parameters for next batch
card.stability = card.stability.detach()
card.difficulty = card.difficulty.detach()
# update params on remaining review logs
if len(step_losses) > 0:
_update_parameters(
step_losses=step_losses,
adam_optimizer=adam_optimizer,
params=params,
lr_scheduler=lr_scheduler,
)
# compute the current batch loss after each epoch
detached_params = [
x.detach().item() for x in list(params.detach())
] # convert to floats
with torch.no_grad():
epoch_batch_loss = self._compute_batch_loss(
parameters=detached_params
)
# if the batch loss is better with the current parameters, update the current best parameters
if epoch_batch_loss < best_loss:
best_loss = epoch_batch_loss
best_params = detached_params
return best_params
def _compute_probs_and_costs(self) -> dict[str, float]:
review_log_df = pd.DataFrame(
vars(review_log) for review_log in self.review_logs
)
review_log_df = review_log_df.sort_values(
by=["card_id", "review_datetime"], ascending=[True, True]
).reset_index(drop=True)
# dictionary to return
probs_and_costs_dict = {}
# compute the probabilities and costs of the first rating
first_reviews_df = review_log_df.loc[
~review_log_df["card_id"].duplicated(keep="first")
].reset_index(drop=True)
first_again_reviews_df = first_reviews_df.loc[
first_reviews_df["rating"] == Rating.Again
]
first_hard_reviews_df = first_reviews_df.loc[
first_reviews_df["rating"] == Rating.Hard
]
first_good_reviews_df = first_reviews_df.loc[
first_reviews_df["rating"] == Rating.Good
]
first_easy_reviews_df = first_reviews_df.loc[
first_reviews_df["rating"] == Rating.Easy
]
# compute the probability of the user clicking again/hard/good/easy given it's their first review
num_first_again = len(first_again_reviews_df)
num_first_hard = len(first_hard_reviews_df)
num_first_good = len(first_good_reviews_df)
num_first_easy = len(first_easy_reviews_df)
num_first_review = (
num_first_again + num_first_hard + num_first_good + num_first_easy
)
prob_first_again = num_first_again / num_first_review
prob_first_hard = num_first_hard / num_first_review
prob_first_good = num_first_good / num_first_review
prob_first_easy = num_first_easy / num_first_review
probs_and_costs_dict["prob_first_again"] = prob_first_again
probs_and_costs_dict["prob_first_hard"] = prob_first_hard
probs_and_costs_dict["prob_first_good"] = prob_first_good
probs_and_costs_dict["prob_first_easy"] = prob_first_easy
# compute the cost of the user clicking again/hard/good/easy on their first review
first_again_review_durations = list(
first_again_reviews_df["review_duration"]
)
first_hard_review_durations = list(first_hard_reviews_df["review_duration"])
first_good_review_durations = list(first_good_reviews_df["review_duration"])
first_easy_review_durations = list(first_easy_reviews_df["review_duration"])
avg_first_again_review_duration = (
mean(first_again_review_durations)
if first_again_review_durations
else 0
)
avg_first_hard_review_duration = (
mean(first_hard_review_durations) if first_hard_review_durations else 0
)
avg_first_good_review_duration = (
mean(first_good_review_durations) if first_good_review_durations else 0
)
avg_first_easy_review_duration = (
mean(first_easy_review_durations) if first_easy_review_durations else 0
)
probs_and_costs_dict["avg_first_again_review_duration"] = (
avg_first_again_review_duration
)
probs_and_costs_dict["avg_first_hard_review_duration"] = (
avg_first_hard_review_duration
)
probs_and_costs_dict["avg_first_good_review_duration"] = (
avg_first_good_review_duration
)
probs_and_costs_dict["avg_first_easy_review_duration"] = (
avg_first_easy_review_duration
)
# compute the probabilities and costs of non-first ratings
non_first_reviews_df = review_log_df.loc[
review_log_df["card_id"].duplicated(keep="first")
].reset_index(drop=True)
again_reviews_df = non_first_reviews_df.loc[
non_first_reviews_df["rating"] == Rating.Again
]
hard_reviews_df = non_first_reviews_df.loc[
non_first_reviews_df["rating"] == Rating.Hard
]
good_reviews_df = non_first_reviews_df.loc[
non_first_reviews_df["rating"] == Rating.Good
]
easy_reviews_df = non_first_reviews_df.loc[
non_first_reviews_df["rating"] == Rating.Easy
]
# compute the probability of the user clicking hard/good/easy given they correctly recalled the card
num_hard = len(hard_reviews_df)
num_good = len(good_reviews_df)
num_easy = len(easy_reviews_df)
num_recall = num_hard + num_good + num_easy
prob_hard = num_hard / num_recall
prob_good = num_good / num_recall
prob_easy = num_easy / num_recall
probs_and_costs_dict["prob_hard"] = prob_hard
probs_and_costs_dict["prob_good"] = prob_good
probs_and_costs_dict["prob_easy"] = prob_easy
again_review_durations = list(again_reviews_df["review_duration"])
hard_review_durations = list(hard_reviews_df["review_duration"])
good_review_durations = list(good_reviews_df["review_duration"])
easy_review_durations = list(easy_reviews_df["review_duration"])
avg_again_review_duration = (
mean(again_review_durations) if again_review_durations else 0
)
avg_hard_review_duration = (
mean(hard_review_durations) if hard_review_durations else 0
)
avg_good_review_duration = (
mean(good_review_durations) if good_review_durations else 0
)
avg_easy_review_duration = (
mean(easy_review_durations) if easy_review_durations else 0
)
probs_and_costs_dict["avg_again_review_duration"] = (
avg_again_review_duration
)
probs_and_costs_dict["avg_hard_review_duration"] = avg_hard_review_duration
probs_and_costs_dict["avg_good_review_duration"] = avg_good_review_duration
probs_and_costs_dict["avg_easy_review_duration"] = avg_easy_review_duration
return probs_and_costs_dict
def _simulate_cost(
self,
*,
desired_retention: float,
parameters: tuple[float, ...] | list[float],
num_cards_simulate: int,
probs_and_costs_dict: dict[str, float],
) -> float:
rng = Random(42)
# simulate from the beginning of 2025 till before the beginning of 2026
start_date = datetime(2025, 1, 1, 0, 0, 0, 0, timezone.utc)
end_date = datetime(2026, 1, 1, 0, 0, 0, 0, timezone.utc)
scheduler = Scheduler(
parameters=parameters,
desired_retention=desired_retention,
enable_fuzzing=False,
)
# unpack probs_and_costs_dict
prob_first_again = probs_and_costs_dict["prob_first_again"]
prob_first_hard = probs_and_costs_dict["prob_first_hard"]
prob_first_good = probs_and_costs_dict["prob_first_good"]
prob_first_easy = probs_and_costs_dict["prob_first_easy"]
avg_first_again_review_duration = probs_and_costs_dict[
"avg_first_again_review_duration"
]
avg_first_hard_review_duration = probs_and_costs_dict[
"avg_first_hard_review_duration"
]
avg_first_good_review_duration = probs_and_costs_dict[
"avg_first_good_review_duration"
]
avg_first_easy_review_duration = probs_and_costs_dict[
"avg_first_easy_review_duration"
]
prob_hard = probs_and_costs_dict["prob_hard"]
prob_good = probs_and_costs_dict["prob_good"]
prob_easy = probs_and_costs_dict["prob_easy"]
avg_again_review_duration = probs_and_costs_dict[
"avg_again_review_duration"
]
avg_hard_review_duration = probs_and_costs_dict["avg_hard_review_duration"]
avg_good_review_duration = probs_and_costs_dict["avg_good_review_duration"]
avg_easy_review_duration = probs_and_costs_dict["avg_easy_review_duration"]
simulation_cost = 0
for i in range(num_cards_simulate):
card = Card()
curr_date = start_date
while curr_date < end_date:
# the card is new
if curr_date == start_date:
rating = rng.choices(
[Rating.Again, Rating.Hard, Rating.Good, Rating.Easy],
weights=[
prob_first_again,
prob_first_hard,
prob_first_good,
prob_first_easy,
],
)[0]
if rating == Rating.Again:
simulation_cost += avg_first_again_review_duration
elif rating == Rating.Hard:
simulation_cost += avg_first_hard_review_duration
elif rating == Rating.Good:
simulation_cost += avg_first_good_review_duration
elif rating == Rating.Easy:
simulation_cost += avg_first_easy_review_duration
# the card is not new
else:
rating = rng.choices(
["recall", Rating.Again],
weights=[desired_retention, 1.0 - desired_retention],
)[0]
if rating == "recall":
# compute probability that the user chose hard/good/easy, GIVEN that they correctly recalled the card
rating = rng.choices(
[Rating.Hard, Rating.Good, Rating.Easy],
weights=[prob_hard, prob_good, prob_easy],
)[0]
if rating == Rating.Again:
simulation_cost += avg_again_review_duration
elif rating == Rating.Hard:
simulation_cost += avg_hard_review_duration
elif rating == Rating.Good:
simulation_cost += avg_good_review_duration
elif rating == Rating.Easy:
simulation_cost += avg_easy_review_duration
card, _ = scheduler.review_card(
card=card, rating=rating, review_datetime=curr_date
)
curr_date = card.due
total_knowledge = desired_retention * num_cards_simulate
simulation_cost = simulation_cost / total_knowledge
return simulation_cost
def compute_optimal_retention(
self, parameters: tuple[float, ...] | list[float]
) -> float:
def _validate_review_logs() -> None:
if len(self.review_logs) < 512:
raise ValueError(
"Not enough ReviewLog's: at least 512 ReviewLog objects are required to compute optimal retention"
)
for review_log in self.review_logs:
if review_log.review_duration is None:
raise ValueError(
"ReviewLog.review_duration cannot be None when computing optimal retention"
)
_validate_review_logs()
NUM_CARDS_SIMULATE = 1000
DESIRED_RETENTIONS = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95]
probs_and_costs_dict = self._compute_probs_and_costs()
simulation_costs = []
for desired_retention in DESIRED_RETENTIONS:
simulation_cost = self._simulate_cost(
desired_retention=desired_retention,
parameters=parameters,
num_cards_simulate=NUM_CARDS_SIMULATE,
probs_and_costs_dict=probs_and_costs_dict,
)
simulation_costs.append(simulation_cost)
min_index = simulation_costs.index(min(simulation_costs))
optimal_retention = DESIRED_RETENTIONS[min_index]
return optimal_retention
except ImportError:
class Optimizer:
def __init__(self, *args, **kwargs) -> None:
raise ImportError(
'Optimizer is not installed.\nInstall it with: pip install "fsrs[optimizer]"'
)
__all__ = ["Optimizer"]
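The `try`/`except ImportError` structure wrapping the whole module above is an optional-dependency pattern: importing the module always succeeds, but using the feature without the extra installed fails with a clear message. A minimal standalone sketch (the dependency name is deliberately fake so the fallback branch runs):

```python
# Sketch of the optional-dependency fallback used in optimizer.py.
try:
    import not_a_real_optional_dep  # hypothetical heavy extra; import fails here

    class Feature:
        def run(self) -> str:
            return "real implementation"

except ImportError:
    # Stub with the same name: importing this module still works,
    # but instantiating the feature raises a helpful error.
    class Feature:
        def __init__(self, *args, **kwargs) -> None:
            raise ImportError(
                "Feature requires an optional extra; install it first"
            )
```

This keeps the lightweight parts of the package usable without torch/pandas-sized dependencies, while deferring the failure to the exact point where the heavy feature is requested.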

src/heurams/vendor/pyfsrs/py.typed vendored Normal file

src/heurams/vendor/pyfsrs/rating.py vendored Normal file

@@ -0,0 +1,15 @@
from enum import IntEnum
class Rating(IntEnum):
"""
Enum representing the four possible ratings when reviewing a card.
"""
Again = 1
Hard = 2
Good = 3
Easy = 4
__all__ = ["Rating"]

src/heurams/vendor/pyfsrs/review_log.py vendored Normal file

@@ -0,0 +1,117 @@
"""
fsrs.review_log
---------
This module defines the ReviewLog and Rating classes.
Classes:
ReviewLog: Represents the log entry of a Card that has been reviewed.
Rating: Enum representing the four possible ratings when reviewing a card.
"""
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime
from typing import TypedDict
import json
from typing_extensions import Self
from fsrs.rating import Rating
class ReviewLogDict(TypedDict):
"""
JSON-serializable dictionary representation of a ReviewLog object.
"""
card_id: int
rating: int
review_datetime: str
review_duration: int | None
@dataclass
class ReviewLog:
"""
Represents the log entry of a Card object that has been reviewed.
Attributes:
card_id: The id of the card being reviewed.
rating: The rating given to the card during the review.
review_datetime: The date and time of the review.
review_duration: The number of milliseconds it took to review the card or None if unspecified.
"""
card_id: int
rating: Rating
review_datetime: datetime
review_duration: int | None
def to_dict(
self,
) -> ReviewLogDict:
"""
Returns a dictionary representation of the ReviewLog object.
Returns:
ReviewLogDict: A dictionary representation of the ReviewLog object.
"""
return {
"card_id": self.card_id,
"rating": int(self.rating),
"review_datetime": self.review_datetime.isoformat(),
"review_duration": self.review_duration,
}
@classmethod
def from_dict(
cls,
source_dict: ReviewLogDict,
) -> Self:
"""
Creates a ReviewLog object from an existing dictionary.
Args:
source_dict: A dictionary representing an existing ReviewLog object.
Returns:
Self: A ReviewLog object created from the provided dictionary.
"""
return cls(
card_id=source_dict["card_id"],
rating=Rating(int(source_dict["rating"])),
review_datetime=datetime.fromisoformat(source_dict["review_datetime"]),
review_duration=source_dict["review_duration"],
)
def to_json(self, indent: int | str | None = None) -> str:
"""
Returns a JSON-serialized string of the ReviewLog object.
Args:
indent: Equivalent argument to the indent in json.dumps()
Returns:
str: A JSON-serialized string of the ReviewLog object.
"""
return json.dumps(self.to_dict(), indent=indent)
@classmethod
def from_json(cls, source_json: str) -> Self:
"""
Creates a ReviewLog object from a JSON-serialized string.
Args:
source_json: A JSON-serialized string of an existing ReviewLog object.
Returns:
Self: A ReviewLog object created from the JSON string.
"""
source_dict: ReviewLogDict = json.loads(source_json)
return cls.from_dict(source_dict=source_dict)
__all__ = ["ReviewLog"]
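The serialization round-trip above relies only on stdlib `json` plus `datetime.isoformat()`/`fromisoformat()`. A minimal standalone sketch of the same pattern (field values are illustrative, not taken from real review data):

```python
import json
from datetime import datetime, timezone

# Sketch of the ReviewLog serialization pattern: datetimes travel as
# ISO-8601 strings, ratings as plain ints, so the dict is JSON-safe.
log = {
    "card_id": 1,
    "rating": 3,
    "review_datetime": datetime(2026, 3, 27, 12, 0, tzinfo=timezone.utc).isoformat(),
    "review_duration": None,
}

restored = json.loads(json.dumps(log))

# fromisoformat() reverses isoformat() losslessly, tzinfo included.
assert datetime.fromisoformat(restored["review_datetime"]) == datetime(
    2026, 3, 27, 12, 0, tzinfo=timezone.utc
)
```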

src/heurams/vendor/pyfsrs/scheduler.py vendored Normal file

@@ -0,0 +1,856 @@
"""
fsrs.scheduler
---------
This module defines the Scheduler class as well as the various constants used in its calculations.
Classes:
Scheduler: The FSRS spaced-repetition scheduler.
"""
from __future__ import annotations
from collections.abc import Sequence
import math
from datetime import datetime, timezone, timedelta
from copy import copy
import json
from random import random
from dataclasses import dataclass
# relative imports so the vendored copy does not require an installed `fsrs` package
from .state import State
from .card import Card
from .rating import Rating
from .review_log import ReviewLog
from typing import TYPE_CHECKING, TypedDict, overload
if TYPE_CHECKING:
from torch import Tensor # torch is optional; import only for type checking
from typing_extensions import Self
FSRS_DEFAULT_DECAY = 0.1542
DEFAULT_PARAMETERS = (
0.212,
1.2931,
2.3065,
8.2956,
6.4133,
0.8334,
3.0194,
0.001,
1.8722,
0.1666,
0.796,
1.4835,
0.0614,
0.2629,
1.6483,
0.6014,
1.8729,
0.5425,
0.0912,
0.0658,
FSRS_DEFAULT_DECAY,
)
STABILITY_MIN = 0.001
LOWER_BOUNDS_PARAMETERS = (
STABILITY_MIN,
STABILITY_MIN,
STABILITY_MIN,
STABILITY_MIN,
1.0,
0.001,
0.001,
0.001,
0.0,
0.0,
0.001,
0.001,
0.001,
0.001,
0.0,
0.0,
1.0,
0.0,
0.0,
0.0,
0.1,
)
INITIAL_STABILITY_MAX = 100.0
UPPER_BOUNDS_PARAMETERS = (
INITIAL_STABILITY_MAX,
INITIAL_STABILITY_MAX,
INITIAL_STABILITY_MAX,
INITIAL_STABILITY_MAX,
10.0,
4.0,
4.0,
0.75,
4.5,
0.8,
3.5,
5.0,
0.25,
0.9,
4.0,
1.0,
6.0,
2.0,
2.0,
0.8,
0.8,
)
MIN_DIFFICULTY = 1.0
MAX_DIFFICULTY = 10.0
FUZZ_RANGES = [
{
"start": 2.5,
"end": 7.0,
"factor": 0.15,
},
{
"start": 7.0,
"end": 20.0,
"factor": 0.1,
},
{
"start": 20.0,
"end": math.inf,
"factor": 0.05,
},
]
class SchedulerDict(TypedDict):
"""
JSON-serializable dictionary representation of a Scheduler object.
"""
parameters: list[float]
desired_retention: float
learning_steps: list[int]
relearning_steps: list[int]
maximum_interval: int
enable_fuzzing: bool
@dataclass(init=False)
class Scheduler:
"""
The FSRS scheduler.
Enables the reviewing and future scheduling of cards according to the FSRS algorithm.
Attributes:
parameters: The model weights of the FSRS scheduler.
desired_retention: The desired retention rate of cards scheduled with the scheduler.
learning_steps: Small time intervals that schedule cards in the Learning state.
relearning_steps: Small time intervals that schedule cards in the Relearning state.
maximum_interval: The maximum number of days a Review-state card can be scheduled into the future.
enable_fuzzing: Whether to apply a small amount of random 'fuzz' to calculated intervals.
"""
parameters: tuple[float, ...]
desired_retention: float
learning_steps: tuple[timedelta, ...]
relearning_steps: tuple[timedelta, ...]
maximum_interval: int
enable_fuzzing: bool
def __init__(
self,
parameters: Sequence[float] = DEFAULT_PARAMETERS,
desired_retention: float = 0.9,
learning_steps: tuple[timedelta, ...] | list[timedelta] = (
timedelta(minutes=1),
timedelta(minutes=10),
),
relearning_steps: tuple[timedelta, ...] | list[timedelta] = (
timedelta(minutes=10),
),
maximum_interval: int = 36500,
enable_fuzzing: bool = True,
) -> None:
self._validate_parameters(parameters=parameters)
self.parameters = tuple(parameters)
self.desired_retention = desired_retention
self.learning_steps = tuple(learning_steps)
self.relearning_steps = tuple(relearning_steps)
self.maximum_interval = maximum_interval
self.enable_fuzzing = enable_fuzzing
self._DECAY = -self.parameters[20]
self._FACTOR = 0.9 ** (1 / self._DECAY) - 1
def _validate_parameters(self, *, parameters: Sequence[float]) -> None:
if len(parameters) != len(LOWER_BOUNDS_PARAMETERS):
raise ValueError(
f"Expected {len(LOWER_BOUNDS_PARAMETERS)} parameters, got {len(parameters)}."
)
error_messages = []
for index, (parameter, lower_bound, upper_bound) in enumerate(
zip(parameters, LOWER_BOUNDS_PARAMETERS, UPPER_BOUNDS_PARAMETERS)
):
if not lower_bound <= parameter <= upper_bound:
error_message = f"parameters[{index}] = {parameter} is out of bounds: ({lower_bound}, {upper_bound})"
error_messages.append(error_message)
if len(error_messages) > 0:
raise ValueError(
"One or more parameters are out of bounds:\n"
+ "\n".join(error_messages)
)
def get_card_retrievability(
self, card: Card, current_datetime: datetime | None = None
) -> float:
"""
Calculates a Card object's current retrievability for a given date and time.
The retrievability of a card is the predicted probability that the card is correctly recalled at the provided datetime.
Args:
card: The card whose retrievability is to be calculated
current_datetime: The current date and time
Returns:
float: The retrievability of the Card object.
"""
if card.last_review is None or card.stability is None:
return 0
if current_datetime is None:
current_datetime = datetime.now(timezone.utc)
elapsed_days = max(0, (current_datetime - card.last_review).days)
return (1 + self._FACTOR * elapsed_days / card.stability) ** self._DECAY
def review_card(
self,
card: Card,
rating: Rating,
review_datetime: datetime | None = None,
review_duration: int | None = None,
) -> tuple[Card, ReviewLog]:
"""
Reviews a card with a given rating at a given time for a specified duration.
Args:
card: The card being reviewed.
rating: The chosen rating for the card being reviewed.
review_datetime: The date and time of the review.
        review_duration: The number of milliseconds it took to review the card or None if unspecified.
Returns:
tuple[Card,ReviewLog]: A tuple containing the updated, reviewed card and its corresponding review log.
Raises:
        ValueError: If the `review_datetime` argument is not timezone-aware or not set to UTC.
"""
if review_datetime is not None and (
(review_datetime.tzinfo is None) or (review_datetime.tzinfo != timezone.utc)
):
raise ValueError("datetime must be timezone-aware and set to UTC")
card = copy(card)
if review_datetime is None:
review_datetime = datetime.now(timezone.utc)
days_since_last_review = (
(review_datetime - card.last_review).days if card.last_review else None
)
match card.state:
case State.Learning:
assert card.step is not None
# update the card's stability and difficulty
if card.stability is None or card.difficulty is None:
card.stability = self._initial_stability(rating=rating)
card.difficulty = self._initial_difficulty(
rating=rating, clamp=True
)
elif days_since_last_review is not None and days_since_last_review < 1:
card.stability = self._short_term_stability(
stability=card.stability, rating=rating
)
card.difficulty = self._next_difficulty(
difficulty=card.difficulty, rating=rating
)
else:
card.stability = self._next_stability(
difficulty=card.difficulty,
stability=card.stability,
retrievability=self.get_card_retrievability(
card,
current_datetime=review_datetime,
),
rating=rating,
)
card.difficulty = self._next_difficulty(
difficulty=card.difficulty, rating=rating
)
# calculate the card's next interval
## first if-clause handles edge case where the Card in the Learning state was previously
## scheduled with a Scheduler with more learning_steps than the current Scheduler
if len(self.learning_steps) == 0 or (
card.step >= len(self.learning_steps)
and rating in (Rating.Hard, Rating.Good, Rating.Easy)
):
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(stability=card.stability)
next_interval = timedelta(days=next_interval_days)
else:
match rating:
case Rating.Again:
card.step = 0
next_interval = self.learning_steps[card.step]
case Rating.Hard:
# card step stays the same
if card.step == 0 and len(self.learning_steps) == 1:
next_interval = self.learning_steps[0] * 1.5
elif card.step == 0 and len(self.learning_steps) >= 2:
next_interval = (
self.learning_steps[0] + self.learning_steps[1]
) / 2.0
else:
next_interval = self.learning_steps[card.step]
case Rating.Good:
if card.step + 1 == len(
self.learning_steps
): # the last step
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
else:
card.step += 1
next_interval = self.learning_steps[card.step]
case Rating.Easy:
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
case _:
raise ValueError(f"Unknown rating: {rating}")
case State.Review:
assert card.stability is not None
assert card.difficulty is not None
# update the card's stability and difficulty
if days_since_last_review is not None and days_since_last_review < 1:
card.stability = self._short_term_stability(
stability=card.stability, rating=rating
)
else:
card.stability = self._next_stability(
difficulty=card.difficulty,
stability=card.stability,
retrievability=self.get_card_retrievability(
card,
current_datetime=review_datetime,
),
rating=rating,
)
card.difficulty = self._next_difficulty(
difficulty=card.difficulty, rating=rating
)
# calculate the card's next interval
match rating:
case Rating.Again:
# if there are no relearning steps (they were left blank)
if len(self.relearning_steps) == 0:
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
else:
card.state = State.Relearning
card.step = 0
next_interval = self.relearning_steps[card.step]
case Rating.Hard | Rating.Good | Rating.Easy:
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
case _:
raise ValueError(f"Unknown rating: {rating}")
case State.Relearning:
assert card.stability is not None
assert card.difficulty is not None
assert card.step is not None
# update the card's stability and difficulty
if days_since_last_review is not None and days_since_last_review < 1:
card.stability = self._short_term_stability(
stability=card.stability, rating=rating
)
card.difficulty = self._next_difficulty(
difficulty=card.difficulty, rating=rating
)
else:
card.stability = self._next_stability(
difficulty=card.difficulty,
stability=card.stability,
retrievability=self.get_card_retrievability(
card,
current_datetime=review_datetime,
),
rating=rating,
)
card.difficulty = self._next_difficulty(
difficulty=card.difficulty, rating=rating
)
# calculate the card's next interval
## first if-clause handles edge case where the Card in the Relearning state was previously
## scheduled with a Scheduler with more relearning_steps than the current Scheduler
if len(self.relearning_steps) == 0 or (
card.step >= len(self.relearning_steps)
and rating in (Rating.Hard, Rating.Good, Rating.Easy)
):
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(stability=card.stability)
next_interval = timedelta(days=next_interval_days)
else:
match rating:
case Rating.Again:
card.step = 0
next_interval = self.relearning_steps[card.step]
case Rating.Hard:
# card step stays the same
if card.step == 0 and len(self.relearning_steps) == 1:
next_interval = self.relearning_steps[0] * 1.5
elif card.step == 0 and len(self.relearning_steps) >= 2:
next_interval = (
self.relearning_steps[0] + self.relearning_steps[1]
) / 2.0
else:
next_interval = self.relearning_steps[card.step]
case Rating.Good:
if card.step + 1 == len(
self.relearning_steps
): # the last step
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
else:
card.step += 1
next_interval = self.relearning_steps[card.step]
case Rating.Easy:
card.state = State.Review
card.step = None
next_interval_days = self._next_interval(
stability=card.stability
)
next_interval = timedelta(days=next_interval_days)
case _:
raise ValueError(f"Unknown rating: {rating}")
case _:
raise ValueError(f"Unknown card state: {card.state}")
if self.enable_fuzzing and card.state == State.Review:
next_interval = self._get_fuzzed_interval(interval=next_interval)
card.due = review_datetime + next_interval
card.last_review = review_datetime
review_log = ReviewLog(
card_id=card.card_id,
rating=rating,
review_datetime=review_datetime,
review_duration=review_duration,
)
return card, review_log
def reschedule_card(self, card: Card, review_logs: list[ReviewLog]) -> Card:
"""
        Reschedules/updates the given card with the current scheduler, given that card's review logs.
If the current card was previously scheduled with a different scheduler, you may want to reschedule/update
it as if it had always been scheduled with this current scheduler. For example, you may want to reschedule
each of your cards with a new scheduler after computing the optimal parameters with the Optimizer.
Args:
card: The card to be rescheduled/updated.
review_logs: A list of that card's review logs (order doesn't matter).
Returns:
Card: A new card that has been rescheduled/updated with this current scheduler.
Raises:
ValueError: If any of the review logs are for a card other than the one specified, this will raise an error.
"""
for review_log in review_logs:
if review_log.card_id != card.card_id:
raise ValueError(
f"ReviewLog card_id {review_log.card_id} does not match Card card_id {card.card_id}"
)
review_logs = sorted(review_logs, key=lambda log: log.review_datetime)
rescheduled_card = Card(card_id=card.card_id, due=card.due)
for review_log in review_logs:
rescheduled_card, _ = self.review_card(
card=rescheduled_card,
rating=review_log.rating,
review_datetime=review_log.review_datetime,
)
return rescheduled_card
def to_dict(
self,
) -> SchedulerDict:
"""
Returns a dictionary representation of the Scheduler object.
Returns:
SchedulerDict: A dictionary representation of the Scheduler object.
"""
return {
"parameters": list(self.parameters),
"desired_retention": self.desired_retention,
"learning_steps": [
int(learning_step.total_seconds())
for learning_step in self.learning_steps
],
"relearning_steps": [
int(relearning_step.total_seconds())
for relearning_step in self.relearning_steps
],
"maximum_interval": self.maximum_interval,
"enable_fuzzing": self.enable_fuzzing,
}
@classmethod
def from_dict(cls, source_dict: SchedulerDict) -> Self:
"""
Creates a Scheduler object from an existing dictionary.
Args:
source_dict: A dictionary representing an existing Scheduler object.
Returns:
Self: A Scheduler object created from the provided dictionary.
"""
return cls(
parameters=source_dict["parameters"],
desired_retention=source_dict["desired_retention"],
learning_steps=[
timedelta(seconds=learning_step)
for learning_step in source_dict["learning_steps"]
],
relearning_steps=[
timedelta(seconds=relearning_step)
for relearning_step in source_dict["relearning_steps"]
],
maximum_interval=source_dict["maximum_interval"],
enable_fuzzing=source_dict["enable_fuzzing"],
)
def to_json(self, indent: int | str | None = None) -> str:
"""
Returns a JSON-serialized string of the Scheduler object.
Args:
indent: Equivalent argument to the indent in json.dumps()
Returns:
str: A JSON-serialized string of the Scheduler object.
"""
return json.dumps(self.to_dict(), indent=indent)
@classmethod
def from_json(cls, source_json: str) -> Self:
"""
Creates a Scheduler object from a JSON-serialized string.
Args:
source_json: A JSON-serialized string of an existing Scheduler object.
Returns:
Self: A Scheduler object created from the JSON string.
"""
source_dict: SchedulerDict = json.loads(source_json)
return cls.from_dict(source_dict=source_dict)
@overload
def _clamp_difficulty(self, *, difficulty: float) -> float: ...
@overload
def _clamp_difficulty(self, *, difficulty: Tensor) -> Tensor: ...
def _clamp_difficulty(self, *, difficulty: float | Tensor) -> float | Tensor:
if isinstance(difficulty, (int, float)):
difficulty = min(max(difficulty, MIN_DIFFICULTY), MAX_DIFFICULTY)
else:
difficulty = difficulty.clamp(min=MIN_DIFFICULTY, max=MAX_DIFFICULTY)
return difficulty
@overload
def _clamp_stability(self, *, stability: float) -> float: ...
@overload
def _clamp_stability(self, *, stability: Tensor) -> Tensor: ...
def _clamp_stability(self, *, stability: float | Tensor) -> float | Tensor:
if isinstance(stability, (int, float)):
stability = max(stability, STABILITY_MIN)
else:
stability = stability.clamp(min=STABILITY_MIN)
return stability
def _initial_stability(self, *, rating: Rating) -> float:
initial_stability = self.parameters[rating - 1]
initial_stability = self._clamp_stability(stability=initial_stability)
return initial_stability
def _initial_difficulty(self, *, rating: Rating, clamp: bool) -> float:
initial_difficulty = (
self.parameters[4] - (math.e ** (self.parameters[5] * (rating - 1))) + 1
)
if clamp:
initial_difficulty = self._clamp_difficulty(difficulty=initial_difficulty)
return initial_difficulty
def _next_interval(self, *, stability: float) -> int:
next_interval = (stability / self._FACTOR) * (
(self.desired_retention ** (1 / self._DECAY)) - 1
)
if not isinstance(next_interval, (int, float)):
next_interval = next_interval.detach().item()
next_interval = round(next_interval) # intervals are full days
# must be at least 1 day long
next_interval = max(next_interval, 1)
# can not be longer than the maximum interval
next_interval = min(next_interval, self.maximum_interval)
return next_interval
def _short_term_stability(self, *, stability: float, rating: Rating) -> float:
short_term_stability_increase = (
math.e ** (self.parameters[17] * (rating - 3 + self.parameters[18]))
) * (stability ** -self.parameters[19])
if rating in (Rating.Good, Rating.Easy):
if isinstance(short_term_stability_increase, (int, float)):
short_term_stability_increase = max(short_term_stability_increase, 1.0)
else:
short_term_stability_increase = short_term_stability_increase.clamp(
min=1.0
)
short_term_stability = stability * short_term_stability_increase
short_term_stability = self._clamp_stability(stability=short_term_stability)
return short_term_stability
def _next_difficulty(self, *, difficulty: float, rating: Rating) -> float:
def _linear_damping(*, delta_difficulty: float, difficulty: float) -> float:
return (10.0 - difficulty) * delta_difficulty / 9.0
def _mean_reversion(*, arg_1: float, arg_2: float) -> float:
return self.parameters[7] * arg_1 + (1 - self.parameters[7]) * arg_2
arg_1 = self._initial_difficulty(rating=Rating.Easy, clamp=False)
delta_difficulty = -(self.parameters[6] * (rating - 3))
arg_2 = difficulty + _linear_damping(
delta_difficulty=delta_difficulty, difficulty=difficulty
)
next_difficulty = _mean_reversion(arg_1=arg_1, arg_2=arg_2)
next_difficulty = self._clamp_difficulty(difficulty=next_difficulty)
return next_difficulty
def _next_stability(
self,
*,
difficulty: float,
stability: float,
retrievability: float,
rating: Rating,
) -> float:
if rating == Rating.Again:
next_stability = self._next_forget_stability(
difficulty=difficulty,
stability=stability,
retrievability=retrievability,
)
elif rating in (Rating.Hard, Rating.Good, Rating.Easy):
next_stability = self._next_recall_stability(
difficulty=difficulty,
stability=stability,
retrievability=retrievability,
rating=rating,
)
else:
raise ValueError(f"Unknown rating: {rating}")
next_stability = self._clamp_stability(stability=next_stability)
return next_stability
def _next_forget_stability(
self, *, difficulty: float, stability: float, retrievability: float
) -> float:
next_forget_stability_long_term_params = (
self.parameters[11]
* (difficulty ** -self.parameters[12])
* (((stability + 1) ** (self.parameters[13])) - 1)
* (math.e ** ((1 - retrievability) * self.parameters[14]))
)
next_forget_stability_short_term_params = stability / (
math.e ** (self.parameters[17] * self.parameters[18])
)
return min(
next_forget_stability_long_term_params,
next_forget_stability_short_term_params,
)
def _next_recall_stability(
self,
*,
difficulty: float,
stability: float,
retrievability: float,
rating: Rating,
) -> float:
hard_penalty = self.parameters[15] if rating == Rating.Hard else 1
easy_bonus = self.parameters[16] if rating == Rating.Easy else 1
return stability * (
1
+ (math.e ** (self.parameters[8]))
* (11 - difficulty)
* (stability ** -self.parameters[9])
* ((math.e ** ((1 - retrievability) * self.parameters[10])) - 1)
* hard_penalty
* easy_bonus
)
def _get_fuzzed_interval(self, *, interval: timedelta) -> timedelta:
"""
Takes the current calculated interval and adds a small amount of random fuzz to it.
For example, a card that would've been due in 50 days, after fuzzing, might be due in 49, or 51 days.
Args:
interval: The calculated next interval, before fuzzing.
Returns:
timedelta: The new interval, after fuzzing.
"""
interval_days = interval.days
        if interval_days < 2.5:  # fuzz is not applied to intervals shorter than 2.5 days
return interval
def _get_fuzz_range(*, interval_days: int) -> tuple[int, int]:
"""
Helper function that computes the possible upper and lower bounds of the interval after fuzzing.
"""
delta = 1.0
for fuzz_range in FUZZ_RANGES:
delta += fuzz_range["factor"] * max(
min(float(interval_days), fuzz_range["end"]) - fuzz_range["start"],
0.0,
)
min_ivl = int(round(interval_days - delta))
max_ivl = int(round(interval_days + delta))
# make sure the min_ivl and max_ivl fall into a valid range
min_ivl = max(2, min_ivl)
max_ivl = min(max_ivl, self.maximum_interval)
min_ivl = min(min_ivl, max_ivl)
return min_ivl, max_ivl
min_ivl, max_ivl = _get_fuzz_range(interval_days=interval_days)
fuzzed_interval_days = (
random() * (max_ivl - min_ivl + 1)
) + min_ivl # the next interval is a random value between min_ivl and max_ivl
fuzzed_interval_days = min(round(fuzzed_interval_days), self.maximum_interval)
fuzzed_interval = timedelta(days=fuzzed_interval_days)
return fuzzed_interval
__all__ = ["Scheduler"]
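The interval arithmetic above reduces to the FSRS power-law forgetting curve. A self-contained sketch (decay value copied from `DEFAULT_PARAMETERS[20]` above; this is not the vendored code itself, and the `maximum_interval` clamp is omitted for brevity) shows why, at the default `desired_retention` of 0.9, the scheduled interval equals the card's stability:

```python
# Sketch of the FSRS forgetting curve used by the vendored Scheduler.
DECAY = -0.1542               # -parameters[20]
FACTOR = 0.9 ** (1 / DECAY) - 1


def retrievability(elapsed_days: float, stability: float) -> float:
    """Predicted recall probability after `elapsed_days` for a given stability."""
    return (1 + FACTOR * elapsed_days / stability) ** DECAY


def next_interval(stability: float, desired_retention: float = 0.9) -> int:
    """Days until retrievability is predicted to drop to `desired_retention`."""
    interval = (stability / FACTOR) * (desired_retention ** (1 / DECAY) - 1)
    return max(1, round(interval))  # intervals are whole days, at least 1


# FACTOR is chosen so that retrievability == 0.9 exactly when
# elapsed_days == stability; hence the default interval matches stability.
print(retrievability(10.0, 10.0))  # ≈ 0.9
print(next_interval(10.0))         # 10
```

In other words, "stability" is the number of days after which recall probability decays to 90%, and raising `desired_retention` shortens the scheduled interval.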

src/heurams/vendor/pyfsrs/state.py vendored Normal file

@@ -0,0 +1,14 @@
from enum import IntEnum
class State(IntEnum):
"""
Enum representing the learning state of a Card object.
"""
Learning = 1
Review = 2
Relearning = 3
__all__ = ["State"]

uv.lock generated

@@ -183,21 +183,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/33/21/6ebbc7fc6a4e58bcd49130273a072f7c2e4e6dc03735e078895b47148e30/edge_tts-7.0.2-py3-none-any.whl", hash = "sha256:effc554c249f02bd5013f28cd1faa22802e0757b031a7759be5960084ccb8d76", size = 26274, upload-time = "2025-05-03T10:34:15.872Z" },
]
[[package]]
name = "flet"
version = "0.80.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "httpx", marker = "platform_system != 'Pyodide'" },
{ name = "msgpack" },
{ name = "oauthlib", marker = "platform_system != 'Pyodide'" },
{ name = "repath" },
]
sdist = { url = "https://files.pythonhosted.org/packages/0a/80/0dbf92f0d829729be0f22721fa7dd6cffbdaca6017e51379f96666f20d65/flet-0.80.1.tar.gz", hash = "sha256:1ecfa713e46051c3b4b856ac9d46bb69aa476c51d35823b518387cfb2d415d64", size = 349528, upload-time = "2026-01-02T21:40:32.697Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/fe/20/085201355f6525756ab32fc766e26559983ca6cbc0bd24e073846cf5492f/flet-0.80.1-py3-none-any.whl", hash = "sha256:645dfa7e0ff9648b94a3a47b6e0f26944046200e631e25f397bd4c5295067b9d", size = 440587, upload-time = "2026-01-02T21:40:30.988Z" },
]
[[package]]
name = "frozenlist"
version = "1.8.0"
@@ -306,18 +291,12 @@ dependencies = [
{ name = "openai" },
{ name = "playsound" },
{ name = "psutil" },
{ name = "pygobject" },
{ name = "tabulate" },
{ name = "textual" },
{ name = "toml" },
{ name = "transitions" },
]
[package.dev-dependencies]
dev = [
{ name = "flet" },
]
[package.metadata]
requires-dist = [
{ name = "edge-tts", specifier = "==7.0.2" },
@@ -325,16 +304,12 @@ requires-dist = [
{ name = "openai", specifier = "==1.0.0" },
{ name = "playsound", specifier = "==1.2.2" },
{ name = "psutil", specifier = ">=7.2.1" },
{ name = "pygobject", specifier = ">=3.54.5" },
{ name = "tabulate", specifier = ">=0.9.0" },
{ name = "textual", specifier = "==7.0.0" },
{ name = "toml", specifier = "==0.10.2" },
{ name = "transitions", specifier = "==0.9.3" },
]
[package.metadata.requires-dev]
dev = [{ name = "flet", specifier = ">=0.80.1" }]
[[package]]
name = "httpcore"
version = "1.0.9"
@@ -428,50 +403,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
]
[[package]]
name = "msgpack"
version = "1.1.2"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/4d/f2/bfb55a6236ed8725a96b0aa3acbd0ec17588e6a2c3b62a93eb513ed8783f/msgpack-1.1.2.tar.gz", hash = "sha256:3b60763c1373dd60f398488069bcdc703cd08a711477b5d480eecc9f9626f47e", size = 173581, upload-time = "2025-10-08T09:15:56.596Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/ad/bd/8b0d01c756203fbab65d265859749860682ccd2a59594609aeec3a144efa/msgpack-1.1.2-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:70a0dff9d1f8da25179ffcf880e10cf1aad55fdb63cd59c9a49a1b82290062aa", size = 81939, upload-time = "2025-10-08T09:15:01.472Z" },
{ url = "https://files.pythonhosted.org/packages/34/68/ba4f155f793a74c1483d4bdef136e1023f7bcba557f0db4ef3db3c665cf1/msgpack-1.1.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:446abdd8b94b55c800ac34b102dffd2f6aa0ce643c55dfc017ad89347db3dbdb", size = 85064, upload-time = "2025-10-08T09:15:03.764Z" },
{ url = "https://files.pythonhosted.org/packages/f2/60/a064b0345fc36c4c3d2c743c82d9100c40388d77f0b48b2f04d6041dbec1/msgpack-1.1.2-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c63eea553c69ab05b6747901b97d620bb2a690633c77f23feb0c6a947a8a7b8f", size = 417131, upload-time = "2025-10-08T09:15:05.136Z" },
{ url = "https://files.pythonhosted.org/packages/65/92/a5100f7185a800a5d29f8d14041f61475b9de465ffcc0f3b9fba606e4505/msgpack-1.1.2-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:372839311ccf6bdaf39b00b61288e0557916c3729529b301c52c2d88842add42", size = 427556, upload-time = "2025-10-08T09:15:06.837Z" },
{ url = "https://files.pythonhosted.org/packages/f5/87/ffe21d1bf7d9991354ad93949286f643b2bb6ddbeab66373922b44c3b8cc/msgpack-1.1.2-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:2929af52106ca73fcb28576218476ffbb531a036c2adbcf54a3664de124303e9", size = 404920, upload-time = "2025-10-08T09:15:08.179Z" },
{ url = "https://files.pythonhosted.org/packages/ff/41/8543ed2b8604f7c0d89ce066f42007faac1eaa7d79a81555f206a5cdb889/msgpack-1.1.2-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:be52a8fc79e45b0364210eef5234a7cf8d330836d0a64dfbb878efa903d84620", size = 415013, upload-time = "2025-10-08T09:15:09.83Z" },
{ url = "https://files.pythonhosted.org/packages/41/0d/2ddfaa8b7e1cee6c490d46cb0a39742b19e2481600a7a0e96537e9c22f43/msgpack-1.1.2-cp312-cp312-win32.whl", hash = "sha256:1fff3d825d7859ac888b0fbda39a42d59193543920eda9d9bea44d958a878029", size = 65096, upload-time = "2025-10-08T09:15:11.11Z" },
{ url = "https://files.pythonhosted.org/packages/8c/ec/d431eb7941fb55a31dd6ca3404d41fbb52d99172df2e7707754488390910/msgpack-1.1.2-cp312-cp312-win_amd64.whl", hash = "sha256:1de460f0403172cff81169a30b9a92b260cb809c4cb7e2fc79ae8d0510c78b6b", size = 72708, upload-time = "2025-10-08T09:15:12.554Z" },
{ url = "https://files.pythonhosted.org/packages/c5/31/5b1a1f70eb0e87d1678e9624908f86317787b536060641d6798e3cf70ace/msgpack-1.1.2-cp312-cp312-win_arm64.whl", hash = "sha256:be5980f3ee0e6bd44f3a9e9dea01054f175b50c3e6cdb692bc9424c0bbb8bf69", size = 64119, upload-time = "2025-10-08T09:15:13.589Z" },
{ url = "https://files.pythonhosted.org/packages/6b/31/b46518ecc604d7edf3a4f94cb3bf021fc62aa301f0cb849936968164ef23/msgpack-1.1.2-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:4efd7b5979ccb539c221a4c4e16aac1a533efc97f3b759bb5a5ac9f6d10383bf", size = 81212, upload-time = "2025-10-08T09:15:14.552Z" },
{ url = "https://files.pythonhosted.org/packages/92/dc/c385f38f2c2433333345a82926c6bfa5ecfff3ef787201614317b58dd8be/msgpack-1.1.2-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:42eefe2c3e2af97ed470eec850facbe1b5ad1d6eacdbadc42ec98e7dcf68b4b7", size = 84315, upload-time = "2025-10-08T09:15:15.543Z" },
{ url = "https://files.pythonhosted.org/packages/d3/68/93180dce57f684a61a88a45ed13047558ded2be46f03acb8dec6d7c513af/msgpack-1.1.2-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1fdf7d83102bf09e7ce3357de96c59b627395352a4024f6e2458501f158bf999", size = 412721, upload-time = "2025-10-08T09:15:16.567Z" },
{ url = "https://files.pythonhosted.org/packages/5d/ba/459f18c16f2b3fc1a1ca871f72f07d70c07bf768ad0a507a698b8052ac58/msgpack-1.1.2-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:fac4be746328f90caa3cd4bc67e6fe36ca2bf61d5c6eb6d895b6527e3f05071e", size = 424657, upload-time = "2025-10-08T09:15:17.825Z" },
{ url = "https://files.pythonhosted.org/packages/38/f8/4398c46863b093252fe67368b44edc6c13b17f4e6b0e4929dbf0bdb13f23/msgpack-1.1.2-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:fffee09044073e69f2bad787071aeec727183e7580443dfeb8556cbf1978d162", size = 402668, upload-time = "2025-10-08T09:15:19.003Z" },
{ url = "https://files.pythonhosted.org/packages/28/ce/698c1eff75626e4124b4d78e21cca0b4cc90043afb80a507626ea354ab52/msgpack-1.1.2-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:5928604de9b032bc17f5099496417f113c45bc6bc21b5c6920caf34b3c428794", size = 419040, upload-time = "2025-10-08T09:15:20.183Z" },
{ url = "https://files.pythonhosted.org/packages/67/32/f3cd1667028424fa7001d82e10ee35386eea1408b93d399b09fb0aa7875f/msgpack-1.1.2-cp313-cp313-win32.whl", hash = "sha256:a7787d353595c7c7e145e2331abf8b7ff1e6673a6b974ded96e6d4ec09f00c8c", size = 65037, upload-time = "2025-10-08T09:15:21.416Z" },
{ url = "https://files.pythonhosted.org/packages/74/07/1ed8277f8653c40ebc65985180b007879f6a836c525b3885dcc6448ae6cb/msgpack-1.1.2-cp313-cp313-win_amd64.whl", hash = "sha256:a465f0dceb8e13a487e54c07d04ae3ba131c7c5b95e2612596eafde1dccf64a9", size = 72631, upload-time = "2025-10-08T09:15:22.431Z" },
{ url = "https://files.pythonhosted.org/packages/e5/db/0314e4e2db56ebcf450f277904ffd84a7988b9e5da8d0d61ab2d057df2b6/msgpack-1.1.2-cp313-cp313-win_arm64.whl", hash = "sha256:e69b39f8c0aa5ec24b57737ebee40be647035158f14ed4b40e6f150077e21a84", size = 64118, upload-time = "2025-10-08T09:15:23.402Z" },
{ url = "https://files.pythonhosted.org/packages/22/71/201105712d0a2ff07b7873ed3c220292fb2ea5120603c00c4b634bcdafb3/msgpack-1.1.2-cp314-cp314-macosx_10_13_x86_64.whl", hash = "sha256:e23ce8d5f7aa6ea6d2a2b326b4ba46c985dbb204523759984430db7114f8aa00", size = 81127, upload-time = "2025-10-08T09:15:24.408Z" },
{ url = "https://files.pythonhosted.org/packages/1b/9f/38ff9e57a2eade7bf9dfee5eae17f39fc0e998658050279cbb14d97d36d9/msgpack-1.1.2-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:6c15b7d74c939ebe620dd8e559384be806204d73b4f9356320632d783d1f7939", size = 84981, upload-time = "2025-10-08T09:15:25.812Z" },
{ url = "https://files.pythonhosted.org/packages/8e/a9/3536e385167b88c2cc8f4424c49e28d49a6fc35206d4a8060f136e71f94c/msgpack-1.1.2-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:99e2cb7b9031568a2a5c73aa077180f93dd2e95b4f8d3b8e14a73ae94a9e667e", size = 411885, upload-time = "2025-10-08T09:15:27.22Z" },
{ url = "https://files.pythonhosted.org/packages/2f/40/dc34d1a8d5f1e51fc64640b62b191684da52ca469da9cd74e84936ffa4a6/msgpack-1.1.2-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:180759d89a057eab503cf62eeec0aa61c4ea1200dee709f3a8e9397dbb3b6931", size = 419658, upload-time = "2025-10-08T09:15:28.4Z" },
{ url = "https://files.pythonhosted.org/packages/3b/ef/2b92e286366500a09a67e03496ee8b8ba00562797a52f3c117aa2b29514b/msgpack-1.1.2-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:04fb995247a6e83830b62f0b07bf36540c213f6eac8e851166d8d86d83cbd014", size = 403290, upload-time = "2025-10-08T09:15:29.764Z" },
{ url = "https://files.pythonhosted.org/packages/78/90/e0ea7990abea5764e4655b8177aa7c63cdfa89945b6e7641055800f6c16b/msgpack-1.1.2-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:8e22ab046fa7ede9e36eeb4cfad44d46450f37bb05d5ec482b02868f451c95e2", size = 415234, upload-time = "2025-10-08T09:15:31.022Z" },
{ url = "https://files.pythonhosted.org/packages/72/4e/9390aed5db983a2310818cd7d3ec0aecad45e1f7007e0cda79c79507bb0d/msgpack-1.1.2-cp314-cp314-win32.whl", hash = "sha256:80a0ff7d4abf5fecb995fcf235d4064b9a9a8a40a3ab80999e6ac1e30b702717", size = 66391, upload-time = "2025-10-08T09:15:32.265Z" },
{ url = "https://files.pythonhosted.org/packages/6e/f1/abd09c2ae91228c5f3998dbd7f41353def9eac64253de3c8105efa2082f7/msgpack-1.1.2-cp314-cp314-win_amd64.whl", hash = "sha256:9ade919fac6a3e7260b7f64cea89df6bec59104987cbea34d34a2fa15d74310b", size = 73787, upload-time = "2025-10-08T09:15:33.219Z" },
{ url = "https://files.pythonhosted.org/packages/6a/b0/9d9f667ab48b16ad4115c1935d94023b82b3198064cb84a123e97f7466c1/msgpack-1.1.2-cp314-cp314-win_arm64.whl", hash = "sha256:59415c6076b1e30e563eb732e23b994a61c159cec44deaf584e5cc1dd662f2af", size = 66453, upload-time = "2025-10-08T09:15:34.225Z" },
{ url = "https://files.pythonhosted.org/packages/16/67/93f80545eb1792b61a217fa7f06d5e5cb9e0055bed867f43e2b8e012e137/msgpack-1.1.2-cp314-cp314t-macosx_10_13_x86_64.whl", hash = "sha256:897c478140877e5307760b0ea66e0932738879e7aa68144d9b78ea4c8302a84a", size = 85264, upload-time = "2025-10-08T09:15:35.61Z" },
{ url = "https://files.pythonhosted.org/packages/87/1c/33c8a24959cf193966ef11a6f6a2995a65eb066bd681fd085afd519a57ce/msgpack-1.1.2-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:a668204fa43e6d02f89dbe79a30b0d67238d9ec4c5bd8a940fc3a004a47b721b", size = 89076, upload-time = "2025-10-08T09:15:36.619Z" },
{ url = "https://files.pythonhosted.org/packages/fc/6b/62e85ff7193663fbea5c0254ef32f0c77134b4059f8da89b958beb7696f3/msgpack-1.1.2-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:5559d03930d3aa0f3aacb4c42c776af1a2ace2611871c84a75afe436695e6245", size = 435242, upload-time = "2025-10-08T09:15:37.647Z" },
{ url = "https://files.pythonhosted.org/packages/c1/47/5c74ecb4cc277cf09f64e913947871682ffa82b3b93c8dad68083112f412/msgpack-1.1.2-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:70c5a7a9fea7f036b716191c29047374c10721c389c21e9ffafad04df8c52c90", size = 432509, upload-time = "2025-10-08T09:15:38.794Z" },
{ url = "https://files.pythonhosted.org/packages/24/a4/e98ccdb56dc4e98c929a3f150de1799831c0a800583cde9fa022fa90602d/msgpack-1.1.2-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:f2cb069d8b981abc72b41aea1c580ce92d57c673ec61af4c500153a626cb9e20", size = 415957, upload-time = "2025-10-08T09:15:40.238Z" },
{ url = "https://files.pythonhosted.org/packages/da/28/6951f7fb67bc0a4e184a6b38ab71a92d9ba58080b27a77d3e2fb0be5998f/msgpack-1.1.2-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:d62ce1f483f355f61adb5433ebfd8868c5f078d1a52d042b0a998682b4fa8c27", size = 422910, upload-time = "2025-10-08T09:15:41.505Z" },
{ url = "https://files.pythonhosted.org/packages/f0/03/42106dcded51f0a0b5284d3ce30a671e7bd3f7318d122b2ead66ad289fed/msgpack-1.1.2-cp314-cp314t-win32.whl", hash = "sha256:1d1418482b1ee984625d88aa9585db570180c286d942da463533b238b98b812b", size = 75197, upload-time = "2025-10-08T09:15:42.954Z" },
{ url = "https://files.pythonhosted.org/packages/15/86/d0071e94987f8db59d4eeb386ddc64d0bb9b10820a8d82bcd3e53eeb2da6/msgpack-1.1.2-cp314-cp314t-win_amd64.whl", hash = "sha256:5a46bf7e831d09470ad92dff02b8b1ac92175ca36b087f904a0519857c6be3ff", size = 85772, upload-time = "2025-10-08T09:15:43.954Z" },
{ url = "https://files.pythonhosted.org/packages/81/f2/08ace4142eb281c12701fc3b93a10795e4d4dc7f753911d836675050f886/msgpack-1.1.2-cp314-cp314t-win_arm64.whl", hash = "sha256:d99ef64f349d5ec3293688e91486c5fdb925ed03807f64d98d205d2713c60b46", size = 70868, upload-time = "2025-10-08T09:15:44.959Z" },
]
[[package]]
name = "multidict"
version = "6.7.0"
@@ -571,15 +502,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b7/da/7d22601b625e241d4f23ef1ebff8acfc60da633c9e7e7922e24d10f592b3/multidict-6.7.0-py3-none-any.whl", hash = "sha256:394fc5c42a333c9ffc3e421a4c85e08580d990e08b99f6bf35b4132114c5dcb3", size = 12317, upload-time = "2025-10-06T14:52:29.272Z" },
]
[[package]]
name = "oauthlib"
version = "3.3.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/0b/5f/19930f824ffeb0ad4372da4812c50edbd1434f678c90c2733e1188edfc63/oauthlib-3.3.1.tar.gz", hash = "sha256:0f0f8aa759826a193cf66c12ea1af1637f87b9b4622d46e866952bb022e538c9", size = 185918, upload-time = "2025-06-19T22:48:08.269Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/be/9c/92789c596b8df838baa98fa71844d84283302f7604ed565dafe5a6b5041a/oauthlib-3.3.1-py3-none-any.whl", hash = "sha256:88119c938d2b8fb88561af5f6ee0eec8cc8d552b7bb1f712743136eb7523b7a1", size = 160065, upload-time = "2025-06-19T22:48:06.508Z" },
]
[[package]]
name = "openai"
version = "1.0.0"
@@ -726,25 +648,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/3e/73/2ce007f4198c80fcf2cb24c169884f833fe93fbc03d55d302627b094ee91/psutil-7.2.1-cp37-abi3-win_arm64.whl", hash = "sha256:0d67c1822c355aa6f7314d92018fb4268a76668a536f133599b91edd48759442", size = 133836, upload-time = "2025-12-29T08:26:43.086Z" },
]
[[package]]
name = "pycairo"
version = "1.29.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/22/d9/1728840a22a4ef8a8f479b9156aa2943cd98c3907accd3849fb0d5f82bfd/pycairo-1.29.0.tar.gz", hash = "sha256:f3f7fde97325cae80224c09f12564ef58d0d0f655da0e3b040f5807bd5bd3142", size = 665871, upload-time = "2025-11-11T19:13:01.584Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f6/28/6363087b9e60af031398a6ee5c248639eefc6cc742884fa2789411b1f73b/pycairo-1.29.0-cp312-cp312-win32.whl", hash = "sha256:91bcd7b5835764c616a615d9948a9afea29237b34d2ed013526807c3d79bb1d0", size = 751486, upload-time = "2025-11-11T19:11:54.451Z" },
{ url = "https://files.pythonhosted.org/packages/3a/d2/d146f1dd4ef81007686ac52231dd8f15ad54cf0aa432adaefc825475f286/pycairo-1.29.0-cp312-cp312-win_amd64.whl", hash = "sha256:3f01c3b5e49ef9411fff6bc7db1e765f542dc1c9cfed4542958a5afa3a8b8e76", size = 845383, upload-time = "2025-11-11T19:12:01.551Z" },
{ url = "https://files.pythonhosted.org/packages/01/16/6e6f33bb79ec4a527c9e633915c16dc55a60be26b31118dbd0d5859e8c51/pycairo-1.29.0-cp312-cp312-win_arm64.whl", hash = "sha256:eafe3d2076f3533535ad4a361fa0754e0ee66b90e548a3a0f558fed00b1248f2", size = 694518, upload-time = "2025-11-11T19:12:06.561Z" },
{ url = "https://files.pythonhosted.org/packages/f0/21/3f477dc318dd4e84a5ae6301e67284199d7e5a2384f3063714041086b65d/pycairo-1.29.0-cp313-cp313-win32.whl", hash = "sha256:3eb382a4141591807073274522f7aecab9e8fa2f14feafd11ac03a13a58141d7", size = 750949, upload-time = "2025-11-11T19:12:12.198Z" },
{ url = "https://files.pythonhosted.org/packages/43/34/7d27a333c558d6ac16dbc12a35061d389735e99e494ee4effa4ec6d99bed/pycairo-1.29.0-cp313-cp313-win_amd64.whl", hash = "sha256:91114e4b3fbf4287c2b0788f83e1f566ce031bda49cf1c3c3c19c3e986e95c38", size = 844149, upload-time = "2025-11-11T19:12:19.171Z" },
{ url = "https://files.pythonhosted.org/packages/15/43/e782131e23df69e5c8e631a016ed84f94bbc4981bf6411079f57af730a23/pycairo-1.29.0-cp313-cp313-win_arm64.whl", hash = "sha256:09b7f69a5ff6881e151354ea092137b97b0b1f0b2ab4eb81c92a02cc4a08e335", size = 693595, upload-time = "2025-11-11T19:12:23.445Z" },
{ url = "https://files.pythonhosted.org/packages/2d/fa/87eaeeb9d53344c769839d7b2854db7ff2cd596211e00dd1b702eeb1838f/pycairo-1.29.0-cp314-cp314-win32.whl", hash = "sha256:69e2a7968a3fbb839736257bae153f547bca787113cc8d21e9e08ca4526e0b6b", size = 767198, upload-time = "2025-11-11T19:12:42.336Z" },
{ url = "https://files.pythonhosted.org/packages/3c/90/3564d0f64d0a00926ab863dc3c4a129b1065133128e96900772e1c4421f8/pycairo-1.29.0-cp314-cp314-win_amd64.whl", hash = "sha256:e91243437a21cc4c67c401eff4433eadc45745275fa3ade1a0d877e50ffb90da", size = 871579, upload-time = "2025-11-11T19:12:48.982Z" },
{ url = "https://files.pythonhosted.org/packages/5e/91/93632b6ba12ad69c61991e3208bde88486fdfc152be8cfdd13444e9bc650/pycairo-1.29.0-cp314-cp314-win_arm64.whl", hash = "sha256:b72200ea0e5f73ae4c788cd2028a750062221385eb0e6d8f1ecc714d0b4fdf82", size = 719537, upload-time = "2025-11-11T19:12:55.016Z" },
{ url = "https://files.pythonhosted.org/packages/93/23/37053c039f8d3b9b5017af9bc64d27b680c48a898d48b72e6d6583cf0155/pycairo-1.29.0-cp314-cp314t-win_amd64.whl", hash = "sha256:5e45fce6185f553e79e4ef1722b8e98e6cde9900dbc48cb2637a9ccba86f627a", size = 874015, upload-time = "2025-11-11T19:12:28.47Z" },
{ url = "https://files.pythonhosted.org/packages/d7/54/123f6239685f5f3f2edc123f1e38d2eefacebee18cf3c532d2f4bd51d0ef/pycairo-1.29.0-cp314-cp314t-win_arm64.whl", hash = "sha256:caba0837a4b40d47c8dfb0f24cccc12c7831e3dd450837f2a356c75f21ce5a15", size = 721404, upload-time = "2025-11-11T19:12:36.919Z" },
]
[[package]]
name = "pydantic"
version = "2.12.5"
@@ -840,27 +743,6 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/c7/21/705964c7812476f378728bdf590ca4b771ec72385c533964653c68e86bdc/pygments-2.19.2-py3-none-any.whl", hash = "sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b", size = 1225217, upload-time = "2025-06-21T13:39:07.939Z" },
]
[[package]]
name = "pygobject"
version = "3.54.5"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "pycairo" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d3/a5/68f883df1d8442e3b267cb92105a4b2f0de819bd64ac9981c2d680d3f49f/pygobject-3.54.5.tar.gz", hash = "sha256:b6656f6348f5245606cf15ea48c384c7f05156c75ead206c1b246c80a22fb585", size = 1274658, upload-time = "2025-10-18T13:45:03.121Z" }
[[package]]
name = "repath"
version = "0.9.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "six" },
]
sdist = { url = "https://files.pythonhosted.org/packages/65/e1/824989291d0f01886074fdf9504ba54598f5665bc4dd373b589b87e76608/repath-0.9.0.tar.gz", hash = "sha256:8292139bac6a0e43fd9d70605d4e8daeb25d46672e484ed31a24c7ce0aef0fb7", size = 5492, upload-time = "2019-10-08T00:25:22.3Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/87/ed/92e9b8a3ffc562f21df14ef2538f54e911df29730e1f0d79130a4edc86e7/repath-0.9.0-py3-none-any.whl", hash = "sha256:ee079d6c91faeb843274d22d8f786094ee01316ecfe293a1eb6546312bb6a318", size = 4738, upload-time = "2019-10-08T00:25:20.842Z" },
]
[[package]]
name = "rich"
version = "14.2.0"