test: add tests
.gitignore (vendored) | 2
@@ -15,6 +15,8 @@ data/session
build/
dist/
old/
AGENT.md
AGENTS.md
*.log.*

# Byte-compiled / optimized / DLL files
ARCHITECTURE.md | 485
@@ -1,62 +1,467 @@
# Project Architecture

## Overall Architecture

The following Mermaid diagram shows the main components of HeurAMS and their relationships:
```mermaid
graph TB
    subgraph "UI layer (TUI)"
        TUI[Textual App]
        Screens[App screens]
        Widgets[Puzzle widgets]
    end

    subgraph "Kernel layer"
        Reactor[Scheduling reactor]
        Algorithms[Algorithm modules]
        Particles[Data models]
        Puzzles[Puzzle engine]
        RepoLib[Repository system]
        Auxiliary[Auxiliary tools]
    end

    subgraph "Service layer"
        Config[Config management ConfigDict]
        Logger[Logging system]
        Timer[Time service]
        Audio[Audio service]
        TTS[TTS service]
        Favorites[Favorites manager]
        Attic[Persistence]
        Hasher[Hashing service]
    end

    subgraph "Provider layer"
        AudioProv[Audio providers]
        TTSProv[TTS providers]
        LLMProv[LLM providers]
    end

    subgraph "Data layer"
        Files[Local file data]
        RepoDir[TOML/JSON repo directory]
        ConfigDir[TOML config directory]
        Logs[Log files]
    end

    subgraph "Context management"
        Context[ConfigContext]
        CtxVar[config_var]
    end

    TUI --> Screens
    TUI --> Config
    TUI --> Logger
    TUI --> Audio
    TUI --> TTS
    Screens --> Reactor
    Screens --> RepoLib
    Screens --> Widgets
    Widgets --> Puzzles
    Widgets --> Reactor
    Reactor --> Algorithms
    Reactor --> Particles
    Reactor --> Puzzles
    Particles --> Files
    Particles --> RepoLib
    Algorithms --> Files
    RepoLib --> Config
    RepoLib --> Auxiliary
    Auxiliary --> Lict
    Auxiliary --> Evalizer
    Config --> ConfigDir
    Config --> Context
    Audio --> AudioProv
    TTS --> TTSProv
    Attic --> RepoDir
```

## Data Models

The project is built around a physical-particle metaphor, decomposing each memory unit into three models:

### Nucleon (content layer)

```
Nucleon(ident, payload, common)
```

- A **read-only** content container. `Evalizer` (an `eval()`-based template system) compiles and expands the payload and common fields.
- Contains a `puzzles` field that defines which puzzle types the memory unit supports.
- Created by pairing `repo.payload` with `repo.typedef["common"]`.
- Once created, its content cannot be modified (`__setitem__` raises `AttributeError`).

### Electron (state layer)

```
Electron(ident, algodata, algo_name)
```

- A wrapper around algorithm state data. Each Electron is bound to one algorithm (`algorithms[algo_name]`).
- `algodata` is a **reference** into the corresponding dict in the repo's `algodata.lict`; modifying it persists directly.
- Core methods: `activate()` (mark as activated), `revisor()` (rating iteration), `is_due()` (due check).

### Orbital (strategy layer)

```
orbital = {
    "schedule": ["quick_review", "recognition"],
    "routes": {
        "quick_review": [["MCQ", "1.0"], ["Cloze", "0.5"]],
        "recognition": [["Recognition", "1.0"]],
    }
}
```

- A plain dict defining the review-stage flow and the puzzle-selection strategy within each stage.
- Each stage maps to a list of `(puzzle type, probability coefficient)` pairs; the part of the coefficient above 1 denotes forced repetitions.
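
The expansion rule described above can be sketched as follows. This is a minimal illustration, not the project's implementation: it assumes the coefficient is stored as a string (as in the `orbital` example) and that `floor(coef)` gives forced repetitions while the fractional part is the probability of one extra showing. The `expand_route` name and `rng` hook are hypothetical.

```python
import math
import random

def expand_route(route, rng=random.random):
    """Expand (puzzle type, coefficient) pairs into a concrete puzzle list.

    floor(coef) guaranteed repetitions, plus one extra showing with
    probability equal to the fractional part of coef.
    """
    puzzles = []
    for puzzle_type, coef_str in route:
        coef = float(coef_str)
        count = math.floor(coef)      # forced repetitions (the part >= 1)
        if rng() < coef - count:      # fractional part: one probabilistic extra
            count += 1
        puzzles.extend([puzzle_type] * count)
    return puzzles

route = [["MCQ", "1.0"], ["Cloze", "0.5"]]
print(expand_route(route, rng=lambda: 0.4))  # ['MCQ', 'Cloze']: 0.4 < 0.5
```

Injecting `rng` keeps the expansion deterministic under test while staying random in normal use.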

### Atom (runtime assembly)

```
Atom(nucleon, electron, orbital)
```

- The runtime combination of the three, carrying `runtime` flags (`locked`, `min_rate`, `new_activation`).
- The basic unit that the UI and scheduling layers operate on.
- The `revise()` method calls `electron.revisor(min_rate)` when `locked` is true, performing the final rating iteration.

**Relationship diagram**:

```mermaid
graph LR
    subgraph "Persistent storage"
        Payload[(payload.toml)]
        Common[(typedef.toml)]
        Algodata[(algodata.json)]
        Schedule[(schedule.toml)]
    end

    subgraph "Runtime assembly"
        Nucleon -->|content| Atom
        Electron -->|state| Atom
        Orbital -->|strategy| Atom
    end

    Payload --> Nucleon
    Common --> Nucleon
    Algodata --> Electron
    Schedule --> Orbital
```

---

## Scheduling Reactor (Reactor)

The scheduling reactor is the core business-flow engine, designed as a three-level nested finite state machine (built on the `transitions` library).

### State enumerations

| State machine | State | Description |
|--------|------|------|
| **RouterState** | `unsure` | Initial state, advances automatically |
| | `quick_review` | Quick review stage |
| | `recognition` | New-memory recognition stage |
| | `final_review` | Final overall review stage |
| | `finished` | Done; ratings are applied |
| **ProcessionState** | `active` | In progress |
| | `finished` | Completed |
| **ExpanderState** | `exammode` | Exam mode (answer up front) |
| | `retronly` | Retrospective mode (recognition only) |

### Three-level nesting

```mermaid
graph TB
    subgraph "Router (global router)"
        R[Router<br/>states: unsure→quick_review→recognition→final_review→finished]
        P1[Procession queue 1: quick review]
        P2[Procession queue 2: new memories]
        P3[Procession queue 3: final review]
        R --> P1
        R --> P2
        R --> P3
    end

    subgraph "Procession (single-stage queue)"
        P1 --> E1[Expander atom A]
        P1 --> E2[Expander atom B]
        P1 --> E3[Expander atom C]
        M{forward advance} --> |done| Finish((FINISHED))
    end

    subgraph "Expander (single-atom expander)"
        E1 --> S[(orbital strategy)]
        S -->|probabilistic expansion| PZ1[Puzzle 1: MCQ]
        S -->|probabilistic expansion| PZ2[Puzzle 2: Cloze]
        PZ1 -->|rating| RPT[report]
        PZ2 -->|rating| RPT
        RPT -->|finish| RETRO[retronly retrospective mode]
    end
```

### Data flow in detail

```
Router.__init__(atoms)
  │
  ├─ split new vs. old atoms
  │   ├─ old_atoms → Procession(quick_review)  "initial review"
  │   └─ new_atoms → Procession(recognition)   "new memories"
  │
  └─ all atoms → Procession(final_review)      "overall review"
      │
      └─ Procession.forward()
          │
          ├─ cursor >= len(atoms) → finish()
          └─ cursor < len(atoms) → next_atom
              │
              └─ Procession.get_expander()
                  │
                  └─ Expander(atom, route)
                      │
                      ├─ read orbital.routes[route_value]
                      ├─ probabilistically expand into puzzle list self.puzzles_inf
                      ├─ exammode → present puzzles in turn
                      ├─ report(rating) → record the lowest rating
                      ├─ forward() → next puzzle, or finish → retronly
                      └─ retronly → show Recognition
                          │
                          └─ Atom.revise()
                              │
                              └─ Electron.revisor(min_rate)
                                  │
                                  └─ Algorithm.revisor(algodata, feedback)
```

---

## Algorithm System

All algorithms inherit from `BaseAlgorithm`, are implemented in a classmethod style, and are registered via the `algorithms` dict.

| Algorithm | File | Status | Description |
|------|------|------|------|
| **SM-2** | `sm2.py` | ✅ done | Classic SuperMemo 1987 algorithm |
| **NSP-0** | `nsp0.py` | ✅ done | Non-spaced filtering scheduler |
| **SM-15M** | `sm15m.py` + `sm15m_calc.py` | ✅ done | SM-15 ported from CoffeeScript |
| **FSRS** | `fsrs.py` | ❌ not implemented | Progress: `logger.info("尚未实现")` ("not yet implemented") |
| **Base** | `base.py` | ✅ base class | Defines the `AlgodataDict` structure and defaults |

Each algorithm provides the following classmethods:

| Method | Purpose |
|------|------|
| `revisor(algodata, feedback, is_new_activation)` | Iterate the memory data according to the rating |
| `is_due(algodata)` | Decide whether the item is due for review |
| `get_rating(algodata)` | Retrieve rating information |
| `nextdate(algodata)` | Get the timestamp of the next review |
| `check_integrity(algodata)` | Validate the integrity of the algodata structure |

### Algorithm data structure (AlgodataDict)

```python
{
    "real_rept": int,      # actual number of reviews
    "rept": int,           # current repetition count
    "interval": int,       # interval in days
    "last_date": int,      # date of last review
    "next_date": int,      # next due date
    "is_activated": int,   # whether activated (0/1)
    "last_modify": float,  # last-modified timestamp
}
```
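
As a sketch of how a due check might consume this structure (illustrative only, not the project's `BaseAlgorithm`; it assumes `next_date`/`last_date` are integer day stamps, whose real semantics live in the project's timer service):

```python
# Hypothetical helpers built on the AlgodataDict shape shown above.

def default_algodata(today: int) -> dict:
    """Fresh, unactivated algodata for a new memory unit."""
    return {
        "real_rept": 0,
        "rept": 0,
        "interval": 0,
        "last_date": today,
        "next_date": today,
        "is_activated": 0,
        "last_modify": 0.0,
    }

def is_due(algodata: dict, today: int) -> bool:
    # An unactivated item is never due; otherwise compare the due day stamp.
    return bool(algodata["is_activated"]) and today >= algodata["next_date"]

data = default_algodata(today=20000)
data["is_activated"] = 1
data["next_date"] = 20003
print(is_due(data, today=20002))  # False: one day early
print(is_due(data, today=20003))  # True: due today
```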

---

## Repository System (Repo)

A repository is a directory of TOML/JSON files, with no database dependency.

### Directory layout

```
data/repo/<package_name>/
├── manifest.toml   # metadata: title, author, package, desc
├── typedef.toml    # common metadata, puzzle definitions, annotations
├── payload.toml    # memory entries (key=ident)
├── algodata.json   # algorithm state (key=ident)
└── schedule.toml   # orbitals / review strategy
```

### Repo class design

```mermaid
classDiagram
    class Repo {
        +dict schedule
        +Lict payload
        +dict manifest
        +dict typedef
        +Lict algodata
        +Path source
        +Lict nucleonic_data_lict
        +dict orbitic_data
        +Lict electronic_data_lict
        +from_repodir(source) ~Repo
        +from_dict(dictdata) ~Repo
        +create_new_repo() ~Repo
        +persist_to_repodir(save_list, source)
        +export_to_dict() dict
    }
```

- `payload` and `algodata` use `Lict` (a hybrid list+dict container) supporting dual-mode access.
- `_generate_particles_data()` automatically converts payload data into the format `Nucleon` expects during initialization.
- The default save list is `default_save_list = ["algodata"]`: only algorithm state is persisted.

---

## Lict Collection

`Lict` inherits from `MutableSequence` and maintains both list and dict access:

```python
lict = Lict()
lict.append(("key1", value1))  # list-style append
lict["key1"]                   # dict-style access
lict[0]                        # index access
lict.keys()                    # all keys
lict.dicted_data               # export as a plain dict
```

Dirty-sync mechanism: modifying the list automatically syncs the dict, and modifying the dict automatically syncs the list. This serves the dual-mode access needs of `payload` and `algodata`.
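
The dirty-sync idea can be sketched in a few dozen lines. This `MiniLict` is a minimal illustration, not the project's `Lict`: items are `(key, value)` pairs, and a shadow dict is rebuilt lazily whenever the list side has been modified.

```python
from collections.abc import MutableSequence

class MiniLict(MutableSequence):
    """A minimal list+dict hybrid sketch (not the project's Lict)."""

    def __init__(self):
        self._items = []   # list of (key, value) pairs
        self._index = {}   # key -> value shadow dict
        self._dirty = False

    def _sync(self):
        if self._dirty:  # rebuild the dict view only when stale
            self._index = {k: v for k, v in self._items}
            self._dirty = False

    # --- MutableSequence interface (list mode) ---
    def __len__(self):
        return len(self._items)

    def __delitem__(self, i):
        del self._items[i]
        self._dirty = True

    def insert(self, i, pair):
        self._items.insert(i, tuple(pair))
        self._dirty = True

    def __getitem__(self, key):
        if isinstance(key, str):   # dict mode
            self._sync()
            return self._index[key]
        return self._items[key]    # list mode

    def __setitem__(self, key, value):
        if isinstance(key, str):   # dict mode: update in place or append
            for i, (k, _) in enumerate(self._items):
                if k == key:
                    self._items[i] = (key, value)
                    break
            else:
                self._items.append((key, value))
        else:
            self._items[key] = tuple(value)
        self._dirty = True

    def keys(self):
        self._sync()
        return self._index.keys()

    @property
    def dicted_data(self):
        self._sync()
        return dict(self._index)

lict = MiniLict()
lict.append(("key1", 1))
print(lict["key1"], lict[0], lict.dicted_data)  # 1 ('key1', 1) {'key1': 1}
```

`MutableSequence` supplies `append`, iteration, and the rest of the list API for free once `insert`, `__len__`, and the item methods are defined.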

---

## Configuration System (ConfigDict)

`ConfigDict` inherits from `UserDict` and is a **singleton**, lazily-loading TOML configuration manager.

### Config directory conventions

```
data/config/
├── _.toml              # top-level defaults (merged recursively)
├── interface/
│   ├── _.toml          # interface-level defaults
│   ├── global.toml
│   └── puzzles.toml
├── services/
│   ├── _.toml          # services-level defaults
│   ├── audio.toml
│   └── tts.toml
└── repo/
    └── _.toml
```

- A `_.toml` file holds the defaults for its directory level and is merged into the parent.
- Named files are lazily loaded on demand.
- Subdirectories become recursive sub-configs.
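
The recursive-defaults merge behind the `_.toml` convention can be sketched as follows. The helper name and sample keys are hypothetical; only the "more specific layer overrides `_` defaults, merged key by key" rule comes from the conventions above.

```python
def merge_defaults(base: dict, override: dict) -> dict:
    """Merge override onto base without mutating either dict."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            # recurse into sub-tables so sibling defaults survive
            merged[key] = merge_defaults(merged[key], value)
        else:
            merged[key] = value
    return merged

top_defaults = {"audio": {"volume": 0.8, "provider": "playsound"}}
audio_toml = {"audio": {"volume": 0.5}}
print(merge_defaults(top_defaults, audio_toml))
# {'audio': {'volume': 0.5, 'provider': 'playsound'}}
```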

### Context management

```python
from heurams.context import config_var, ConfigContext

# global access
config = config_var.get()
algo = config["interface"]["global"]["algorithm"]

# scoped override
with ConfigContext(test_config):
    ...  # temporarily use a test configuration
```

---

## Provider System (Providers)

Pluggable backend implementations, registered via a dict in `providers/__init__.py`.

| Category | Provider | Description |
|------|--------|------|
| **TTS** | `edge_tts` | Microsoft Edge TTS (online) |
| | `basetts` | Stub base class (not implemented) |
| **Audio** | `playsound` | Cross-platform audio playback |
| | `termux` | Android Termux environment |
| **LLM** | `openai` | OpenAI-compatible API (not fully implemented) |

Selection: the `provider` field in `services/*.toml`.

---

## Puzzle System (Puzzles)

The puzzle engine generates assessment views during review stages:

| Puzzle | File | Description |
|------|------|------|
| **MCQ** | `mcq.py` | Multiple choice |
| **Cloze** | `cloze.py` | Cloze deletion |
| **Recognition** | `recognition.py` | Reading recognition |
| **Guess** | `guess.py` | Guess the meaning |
| **Base** | `base.py` | Abstract base class |

Puzzles are expanded probabilistically in the `Expander` according to the orbital strategy; each atom can yield multiple puzzles, and each puzzle is rated independently.

---

## Service Layer

| Service | File | Description |
|------|------|------|
| **Config** | `config.py` | `ConfigDict(UserDict)`, lazily-loading TOML singleton |
| **Logger** | `logger.py` | `get_logger(name)` → hierarchical loggers (`heurams.*`) |
| **Timer** | `timer.py` | `get_daystamp()` / `get_timestamp()`, with configurable overrides |
| **Audio** | `audio_service.py` | Audio playback, routed to the configured audio provider |
| **TTS** | `tts_service.py` | Text-to-speech, routed to the configured TTS provider |
| **Favorites** | `favorite_service.py` | JSON5-persisted favorites manager (singleton) |
| **Attic** | `attic.py` | Structured pickle persistence with `<DAYSTAMP>`/`<TIMESTAMP>` placeholders |
| **Hasher** | `hasher.py` | MD5 hashing |
| **Epath** | `epath.py` | Dotted-path nested dict access (`epath(dct, "a.b.c")`) |
| **TextProc** | `textproc.py` | `truncate()`, `domize()`, `undomize()` |

Logging: each module creates its own logger via `get_logger(__name__)`; log files rotate at 10 MB with at most 5 backups, appended to `heurams.log`.
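
Dotted-path access as the Epath row describes could look like the following sketch. Only the `epath(dct, "a.b.c")` call shape comes from the table; the body and the `default` parameter are illustrative assumptions.

```python
def epath(dct: dict, path: str, default=None):
    """Walk one key per dot-separated segment; return default on a miss."""
    node = dct
    for part in path.split("."):
        if not isinstance(node, dict) or part not in node:
            return default
        node = node[part]
    return node

cfg = {"a": {"b": {"c": 42}}}
print(epath(cfg, "a.b.c"))   # 42
print(epath(cfg, "a.x", 0))  # 0
```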

---

## Full Review Flow

```mermaid
sequenceDiagram
    participant User as User
    participant UI as TUI
    participant Router as Router
    participant Procession as Procession
    participant Expander as Expander
    participant Atom as Atom
    participant Electron as Electron
    participant Algo as Algorithm

    User->>UI: Start review
    UI->>Router: Router(atoms)
    Router->>Procession: Create review queue
    Router->>Procession: Create new-memory queue
    Router->>Procession: Create final-review queue
    Procession->>Expander: Expand current atom
    Expander->>Expander: Parse orbital strategy
    Expander-->>UI: Show puzzle
    User->>UI: Rate (1-5)
    UI->>Expander: report(rating)
    Expander->>Expander: forward() next puzzle
    Expander-->>UI: Next puzzle or retrospection
    Expander->>Expander: finish() → retronly
    Expander-->>UI: Recognition retrospection
    User->>UI: Final rating
    UI->>Atom: revise()
    Atom->>Electron: revisor(min_rate)
    Electron->>Algo: revisor(algodata, feedback)
    Algo-->>Electron: Update algodata
    Algo-->>Atom: Update interval, next_date
    Procession->>Procession: forward() next atom
    Procession-->>Router: Queue finished
    Router->>Router: Switch stage
    Router-->>UI: Done (finished)
    UI->>User: Show summary
```

---

## Key Design Decisions

1. **No database**: all persistence uses TOML/JSON file directories, which are easy to version-control and hand-edit.
2. **Lict dual-mode access**: payload and algodata support both list iteration and dict lookup, serving batch processing and random access alike.
3. **Physical-metaphor separation**: content (Nucleon), state (Electron), and strategy (Orbital) are orthogonal and independently replaceable, making it easy to combine different algorithms and content types.
4. **transitions state machines**: the `transitions` library implements the three-level nested Router → Procession → Expander state machines, each level with a clear responsibility.
5. **Evalizer eval templates**: dynamic template substitution is implemented with `eval()`; flexible, but a security risk (marked for replacement).
6. **Config singleton**: `ConfigDict` keys its singletons by normalized path, avoiding inconsistencies from multiple instances.
7. **Rating accumulation**: an atom's final rating in a multi-puzzle stage is the minimum over all its puzzles (`min_rate`), ensuring strict assessment.
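
The path-keyed singleton of decision 6 can be sketched like this. `PathSingleton` is a hypothetical stand-in, not the project's `ConfigDict`; it only shows the caching-by-normalized-path mechanism.

```python
import os

class PathSingleton:
    """One shared instance per normalized config path (sketch)."""

    _instances: dict = {}

    def __new__(cls, path: str):
        key = os.path.normpath(os.path.abspath(path))
        if key not in cls._instances:
            inst = super().__new__(cls)
            inst.path = key
            inst.data = {}  # a real ConfigDict would lazily load TOML here
            cls._instances[key] = inst
        return cls._instances[key]

a = PathSingleton("data/config")
b = PathSingleton("./data/config/")
print(a is b)  # True: same normalized path, same instance
```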

pyproject.toml

@@ -13,6 +13,7 @@ license = "AGPL-3.0-or-later"
license-files = ["LICENSE"]

dependencies = [
    "fsrs>=6.3.1",
    #"edge-tts>=7.2.8",
    #"jieba>=0.42.1",
    #"openai>=2.32.0",

@@ -34,6 +35,21 @@ Issues = "https://github.com/heurams/heurams/issues"
heurams = "heurams.__main__:main"
tui = "heurams.interface.__main__:main"

[dependency-groups]
dev = [
    "pytest>=8.0.0",
    "pytest-cov>=6.0.0",
]

[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py", "*_test.py"]
pythonpath = ["src"]
markers = [
    "slow: marks tests as slow (deselect with '-m \"not slow\"')",
    "integration: marks tests as integration tests",
]

[build-system]
requires = ["uv_build>=0.7.19"]
build-backend = "uv_build"

src/heurams/vendor/pyfsrs/LICENSE (vendored) | 21
@@ -1,21 +0,0 @@
MIT License

Copyright (c) 2022 Open Spaced Repetition

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

src/heurams/vendor/pyfsrs/__init__.py (vendored) | 29
@@ -1,29 +0,0 @@
"""
py-fsrs
-------

Py-FSRS is the official Python implementation of the FSRS scheduler algorithm, which can be used to develop spaced repetition systems.
"""

from fsrs.scheduler import Scheduler
from fsrs.state import State
from fsrs.card import Card
from fsrs.rating import Rating
from fsrs.review_log import ReviewLog
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from fsrs.optimizer import Optimizer


# lazy load the Optimizer module due to heavy dependencies
def __getattr__(name: str) -> type:
    if name == "Optimizer":
        global Optimizer
        from fsrs.optimizer import Optimizer

        return Optimizer
    raise AttributeError


__all__ = ["Scheduler", "Card", "Rating", "ReviewLog", "State", "Optimizer"]

src/heurams/vendor/pyfsrs/card.py (vendored) | 167
@@ -1,167 +0,0 @@
"""
fsrs.card
---------

This module defines the Card and State classes.

Classes:
    Card: Represents a flashcard in the FSRS system.
    State: Enum representing the learning state of a Card object.
"""

from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timezone
import time
import json
from typing import TypedDict
from typing_extensions import Self
from fsrs.state import State


class CardDict(TypedDict):
    """
    JSON-serializable dictionary representation of a Card object.
    """

    card_id: int
    state: int
    step: int | None
    stability: float | None
    difficulty: float | None
    due: str
    last_review: str | None


@dataclass(init=False)
class Card:
    """
    Represents a flashcard in the FSRS system.

    Attributes:
        card_id: The id of the card. Defaults to the epoch milliseconds of when the card was created.
        state: The card's current learning state.
        step: The card's current learning or relearning step or None if the card is in the Review state.
        stability: Core mathematical parameter used for future scheduling.
        difficulty: Core mathematical parameter used for future scheduling.
        due: The date and time when the card is due next.
        last_review: The date and time of the card's last review.
    """

    card_id: int
    state: State
    step: int | None
    stability: float | None
    difficulty: float | None
    due: datetime
    last_review: datetime | None

    def __init__(
        self,
        card_id: int | None = None,
        state: State = State.Learning,
        step: int | None = None,
        stability: float | None = None,
        difficulty: float | None = None,
        due: datetime | None = None,
        last_review: datetime | None = None,
    ) -> None:
        if card_id is None:
            # epoch milliseconds of when the card was created
            card_id = int(datetime.now(timezone.utc).timestamp() * 1000)
            # wait 1ms to prevent potential card_id collision on next Card creation
            time.sleep(0.001)
        self.card_id = card_id

        self.state = state

        if self.state == State.Learning and step is None:
            step = 0
        self.step = step

        self.stability = stability
        self.difficulty = difficulty

        if due is None:
            due = datetime.now(timezone.utc)
        self.due = due

        self.last_review = last_review

    def to_dict(self) -> CardDict:
        """
        Returns a dictionary representation of the Card object.

        Returns:
            CardDict: A dictionary representation of the Card object.
        """

        return {
            "card_id": self.card_id,
            "state": self.state.value,
            "step": self.step,
            "stability": self.stability,
            "difficulty": self.difficulty,
            "due": self.due.isoformat(),
            "last_review": self.last_review.isoformat() if self.last_review else None,
        }

    @classmethod
    def from_dict(cls, source_dict: CardDict) -> Self:
        """
        Creates a Card object from an existing dictionary.

        Args:
            source_dict: A dictionary representing an existing Card object.

        Returns:
            Self: A Card object created from the provided dictionary.
        """

        return cls(
            card_id=int(source_dict["card_id"]),
            state=State(int(source_dict["state"])),
            step=source_dict["step"],
            stability=(
                float(source_dict["stability"]) if source_dict["stability"] else None
            ),
            difficulty=(
                float(source_dict["difficulty"]) if source_dict["difficulty"] else None
            ),
            due=datetime.fromisoformat(source_dict["due"]),
            last_review=(
                datetime.fromisoformat(source_dict["last_review"])
                if source_dict["last_review"]
                else None
            ),
        )

    def to_json(self, indent: int | str | None = None) -> str:
        """
        Returns a JSON-serialized string of the Card object.

        Args:
            indent: Equivalent argument to the indent in json.dumps()

        Returns:
            str: A JSON-serialized string of the Card object.
        """
        return json.dumps(self.to_dict(), indent=indent)

    @classmethod
    def from_json(cls, source_json: str) -> Self:
        """
        Creates a Card object from a JSON-serialized string.

        Args:
            source_json: A JSON-serialized string of an existing Card object.

        Returns:
            Self: A Card object created from the JSON string.
        """

        source_dict: CardDict = json.loads(source_json)
        return cls.from_dict(source_dict=source_dict)


__all__ = ["Card"]

src/heurams/vendor/pyfsrs/optimizer.py (vendored) | 674
@@ -1,674 +0,0 @@
|
||||
"""
|
||||
fsrs.optimizer
|
||||
---------
|
||||
|
||||
This module defines the optional Optimizer class.
|
||||
"""
|
||||
|
||||
from fsrs.card import Card
|
||||
from fsrs.review_log import ReviewLog, Rating
|
||||
from fsrs.scheduler import (
|
||||
Scheduler,
|
||||
DEFAULT_PARAMETERS,
|
||||
LOWER_BOUNDS_PARAMETERS,
|
||||
UPPER_BOUNDS_PARAMETERS,
|
||||
)
|
||||
|
||||
import math
|
||||
from datetime import datetime, timezone
|
||||
from copy import deepcopy
|
||||
from random import Random
|
||||
from statistics import mean
|
||||
|
||||
try:
|
||||
import torch
|
||||
from torch.nn import BCELoss
|
||||
from torch import optim
|
||||
import pandas as pd
|
||||
from tqdm import tqdm
|
||||
|
||||
# weight clipping
|
||||
LOWER_BOUNDS_PARAMETERS_TENSORS = torch.tensor(
|
||||
LOWER_BOUNDS_PARAMETERS,
|
||||
dtype=torch.float64,
|
||||
)
|
||||
|
||||
UPPER_BOUNDS_PARAMETERS_TENSORS = torch.tensor(
|
||||
UPPER_BOUNDS_PARAMETERS,
|
||||
dtype=torch.float64,
|
||||
)
|
||||
|
||||
# hyper parameters
|
||||
num_epochs = 5
|
||||
mini_batch_size = 512
|
||||
learning_rate = 4e-2
|
||||
max_seq_len = (
|
||||
64 # up to the first 64 reviews of each card are used for optimization
|
||||
)
|
||||
|
||||
class Optimizer:
|
||||
"""
|
||||
The FSRS optimizer.
|
||||
|
||||
Enables the optimization of FSRS scheduler parameters from existing review logs for more accurate interval calculations.
|
||||
|
||||
Attributes:
|
||||
review_logs: A collection of previous ReviewLog objects from a user.
|
||||
_revlogs_train: The collection of review logs, sorted and formatted for optimization.
|
||||
"""
|
||||
|
||||
review_logs: tuple[ReviewLog, ...]
|
||||
_revlogs_train: dict
|
||||
|
||||
def __init__(
|
||||
self, review_logs: tuple[ReviewLog, ...] | list[ReviewLog]
|
||||
) -> None:
|
||||
"""
|
||||
Initializes the Optimizer with a set of ReviewLogs. Also formats a copy of the review logs for optimization.
|
||||
|
||||
Note that the ReviewLogs provided by the user don't need to be in order.
|
||||
"""
|
||||
|
||||
def _format_revlogs() -> dict:
|
||||
"""
|
||||
Sorts and converts the tuple of ReviewLog objects to a dictionary format for optimizing
|
||||
"""
|
||||
|
||||
revlogs_train = {}
|
||||
for review_log in self.review_logs:
|
||||
# pull data out of current ReviewLog object
|
||||
card_id = review_log.card_id
|
||||
rating = review_log.rating
|
||||
review_datetime = review_log.review_datetime
|
||||
review_duration = review_log.review_duration
|
||||
|
||||
# if the card was rated Again, it was not recalled
|
||||
recall = 0 if rating == Rating.Again else 1
|
||||
|
||||
# as a ML problem, [x, y] = [ [review_datetime, rating, review_duration], recall ]
|
||||
datum = [[review_datetime, rating, review_duration], recall]
|
||||
|
||||
if card_id not in revlogs_train:
|
||||
revlogs_train[card_id] = []
|
||||
|
||||
revlogs_train[card_id].append((datum))
|
||||
revlogs_train[card_id] = sorted(
|
||||
revlogs_train[card_id], key=lambda x: x[0][0]
|
||||
) # keep reviews sorted
|
||||
|
||||
# sort the dictionary in order of when each card history starts
|
||||
revlogs_train = dict(sorted(revlogs_train.items()))
|
||||
|
||||
return revlogs_train
|
||||
|
||||
self.review_logs = deepcopy(tuple(review_logs))
|
||||
|
||||
# format the ReviewLog data for optimization
|
||||
self._revlogs_train = _format_revlogs()
|
||||
|
||||
def _compute_batch_loss(self, *, parameters: list[float]) -> float:
|
||||
"""
|
||||
Computes the current total loss for the entire batch of review logs.
|
||||
"""
|
||||
|
||||
card_ids = list(self._revlogs_train.keys())
|
||||
params = torch.tensor(parameters, dtype=torch.float64)
|
||||
loss_fn = BCELoss()
|
||||
scheduler = Scheduler(parameters=params)
|
||||
step_losses = []
|
||||
|
||||
for card_id in card_ids:
|
||||
card_review_history = self._revlogs_train[card_id][:max_seq_len]
|
||||
|
||||
for i in range(len(card_review_history)):
|
||||
review = card_review_history[i]
|
||||
|
||||
x_date = review[0][0]
|
||||
y_retrievability = review[1]
|
||||
u_rating = review[0][1]
|
||||
|
||||
if i == 0:
|
||||
card = Card(card_id=card_id, due=x_date)
|
||||
|
||||
y_pred_retrievability = scheduler.get_card_retrievability(
|
||||
card=card, current_datetime=x_date
|
||||
)
|
||||
y_retrievability = torch.tensor(
|
||||
y_retrievability, dtype=torch.float64
|
||||
)
|
||||
|
||||
if card.last_review and (x_date - card.last_review).days > 0:
|
||||
step_loss = loss_fn(y_pred_retrievability, y_retrievability)
|
||||
step_losses.append(step_loss)
|
||||
|
||||
card, _ = scheduler.review_card(
|
||||
card=card,
|
||||
rating=u_rating,
|
||||
review_datetime=x_date,
|
||||
review_duration=None,
|
||||
)
|
||||
|
||||
batch_loss = torch.sum(torch.stack(step_losses))
|
||||
batch_loss = batch_loss.item() / len(step_losses)
|
||||
|
||||
return batch_loss
|
||||
|
||||
def compute_optimal_parameters(self, verbose: bool = False) -> list[float]:
|
||||
"""
|
||||
Computes a set of optimized parameters for the FSRS scheduler and returns it as a list of floats.
|
||||
|
||||
High level explanation of optimization:
|
||||
---------------------------------------
|
||||
FSRS is a many-to-many sequence model where the "State" at each step is a Card object at a given point in time,
|
||||
the input is the time of the review and the output is the predicted retrievability of the card at the time of review.
|
||||
|
||||
Each card's review history can be thought of as a sequence, each review as a step and each collection of card review histories
|
||||
as a batch.
|
||||
|
||||
The loss is computed by comparing the predicted retrievability of the Card at each step with whether the Card was actually
|
||||
sucessfully recalled or not (0/1).
|
||||
|
||||
Finally, the card objects at each step in their sequences are updated using the current parameters of the Scheduler
|
||||
as well as the rating given to that card by the user. The parameters of the Scheduler is what is being optimized.
|
||||
"""
|
||||
|
||||
def _num_reviews() -> int:
|
||||
"""
|
||||
Computes how many Review-state reviews there are in the dataset.
|
||||
Only the loss from Review-state reviews count for optimization and their number must
|
||||
be computed in advance to properly initialize the Cosine Annealing learning rate scheduler.
|
||||
"""
|
||||
|
||||
scheduler = Scheduler()
|
||||
num_reviews = 0
|
||||
# iterate through the card review histories
|
||||
card_ids = list(self._revlogs_train.keys())
|
||||
for card_id in card_ids:
|
||||
card_review_history = self._revlogs_train[card_id][:max_seq_len]
|
||||
|
||||
# iterate through the current Card's review history
|
||||
for i in range(len(card_review_history)):
|
||||
review = card_review_history[i]
|
||||
|
||||
review_datetime = review[0][0]
|
||||
rating = review[0][1]
|
||||
|
||||
# if this is the first review, create the Card object
|
||||
if i == 0:
|
||||
card = Card(card_id=card_id, due=review_datetime)
|
||||
|
||||
# only non-same-day reviews count
|
||||
if (
|
||||
card.last_review
|
||||
and (review_datetime - card.last_review).days > 0
|
||||
):
|
||||
num_reviews += 1
|
||||
|
||||
card, _ = scheduler.review_card(
|
||||
card=card,
|
||||
rating=rating,
|
||||
review_datetime=review_datetime,
|
||||
review_duration=None,
|
||||
)
|
||||
|
||||
return num_reviews
|
||||
|
||||
def _update_parameters(
|
||||
*,
|
||||
step_losses: list,
|
||||
adam_optimizer: torch.optim.Adam,
|
||||
params: torch.Tensor,
|
||||
lr_scheduler: torch.optim.lr_scheduler.CosineAnnealingLR,
|
||||
) -> None:
|
||||
"""
|
||||
Computes and updates the current FSRS parameters based on the step losses. Also updates the learning rate scheduler.
|
||||
"""
|
||||
|
||||
# Backpropagate through the loss
|
||||
mini_batch_loss = torch.sum(torch.stack(step_losses))
|
||||
adam_optimizer.zero_grad() # clear previous gradients
|
||||
mini_batch_loss.backward() # compute gradients
|
||||
adam_optimizer.step() # Update parameters
|
||||
|
||||
# clamp the weights in place without modifying the computational graph
|
||||
with torch.no_grad():
|
||||
params.clamp_(
|
||||
min=LOWER_BOUNDS_PARAMETERS_TENSORS,
|
||||
max=UPPER_BOUNDS_PARAMETERS_TENSORS,
|
||||
)
|
||||
|
||||
# update the learning rate
|
||||
lr_scheduler.step()
|
||||
        # set local random seed for reproducibility
        rng = Random(42)

        card_ids = list(self._revlogs_train.keys())

        num_reviews = _num_reviews()

        if num_reviews < mini_batch_size:
            return list(DEFAULT_PARAMETERS)

        # define the FSRS Scheduler parameters as torch tensors with gradients
        params = torch.tensor(
            DEFAULT_PARAMETERS, requires_grad=True, dtype=torch.float64
        )

        loss_fn = BCELoss()
        adam_optimizer = optim.Adam([params], lr=learning_rate)
        lr_scheduler = optim.lr_scheduler.CosineAnnealingLR(
            optimizer=adam_optimizer,
            T_max=math.ceil(num_reviews / mini_batch_size) * num_epochs,
        )

        best_params = None
        best_loss = math.inf
        # iterate through the epochs
        for _ in tqdm(
            range(num_epochs),
            desc="Optimizing",
            unit="epoch",
            disable=(not verbose),
        ):
            # randomly shuffle the order in which the Cards' review histories get computed
            # at the beginning of each new epoch
            rng.shuffle(card_ids)

            # initialize a new scheduler with the updated parameters each epoch
            scheduler = Scheduler(parameters=params)

            # stores the computed loss of each individual review
            step_losses = []

            # iterate through the card review histories (sequences)
            for card_id in card_ids:
                card_review_history = self._revlogs_train[card_id][:max_seq_len]

                # iterate through the current Card's review history (steps)
                for i in range(len(card_review_history)):
                    review = card_review_history[i]

                    # input
                    x_date = review[0][0]
                    # target
                    y_retrievability = review[1]
                    # update
                    u_rating = review[0][1]

                    # if this is the first review, create the Card object
                    if i == 0:
                        card = Card(card_id=card_id, due=x_date)

                    # predicted target
                    y_pred_retrievability = scheduler.get_card_retrievability(
                        card=card, current_datetime=x_date
                    )
                    y_retrievability = torch.tensor(
                        y_retrievability, dtype=torch.float64
                    )

                    # only compute step-loss on non-same-day reviews
                    if card.last_review and (x_date - card.last_review).days > 0:
                        step_loss = loss_fn(y_pred_retrievability, y_retrievability)
                        step_losses.append(step_loss)

                    # update the card's state
                    card, _ = scheduler.review_card(
                        card=card,
                        rating=u_rating,
                        review_datetime=x_date,
                        review_duration=None,
                    )

                    # take a gradient step after each mini-batch
                    if len(step_losses) == mini_batch_size:
                        _update_parameters(
                            step_losses=step_losses,
                            adam_optimizer=adam_optimizer,
                            params=params,
                            lr_scheduler=lr_scheduler,
                        )

                        # update the scheduler with the new parameters
                        scheduler = Scheduler(parameters=params)
                        # clear the step losses for the next batch
                        step_losses = []

                        # remove gradient history from the card's tensor parameters for the next batch
                        card.stability = card.stability.detach()
                        card.difficulty = card.difficulty.detach()

            # update params on the remaining review logs
            if len(step_losses) > 0:
                _update_parameters(
                    step_losses=step_losses,
                    adam_optimizer=adam_optimizer,
                    params=params,
                    lr_scheduler=lr_scheduler,
                )

            # compute the current batch loss after each epoch
            detached_params = [
                x.detach().item() for x in list(params.detach())
            ]  # convert to floats
            with torch.no_grad():
                epoch_batch_loss = self._compute_batch_loss(
                    parameters=detached_params
                )

            # if the batch loss is better with the current parameters, update the current best parameters
            if epoch_batch_loss < best_loss:
                best_loss = epoch_batch_loss
                best_params = detached_params

        return best_params
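The mini-batch bookkeeping in the loop above — flush the accumulated step losses whenever the buffer reaches `mini_batch_size`, then flush whatever remains after the loop — can be sketched torch-free. `apply_update` below is a hypothetical stand-in for `_update_parameters`:

```python
def run_mini_batches(items, mini_batch_size, apply_update):
    """Collect items into fixed-size mini-batches, flushing the remainder at the end."""
    batch = []
    batches_applied = []
    for item in items:
        batch.append(item)
        # take an update step after each full mini-batch
        if len(batch) == mini_batch_size:
            apply_update(batch)
            batches_applied.append(list(batch))
            batch = []  # clear for the next batch
    # update on the remaining items, mirroring the post-loop flush above
    if batch:
        apply_update(batch)
        batches_applied.append(list(batch))
    return batches_applied


seen = []
batches = run_mini_batches(range(7), 3, seen.extend)
# two full batches plus a remainder batch of one
```

The post-loop flush matters: without it, up to `mini_batch_size - 1` step losses per epoch would never contribute a gradient step.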
        def _compute_probs_and_costs(self) -> dict[str, float]:
            review_log_df = pd.DataFrame(
                vars(review_log) for review_log in self.review_logs
            )

            review_log_df = review_log_df.sort_values(
                by=["card_id", "review_datetime"], ascending=[True, True]
            ).reset_index(drop=True)

            # dictionary to return
            probs_and_costs_dict = {}

            # compute the probabilities and costs of the first rating
            first_reviews_df = review_log_df.loc[
                ~review_log_df["card_id"].duplicated(keep="first")
            ].reset_index(drop=True)

            first_again_reviews_df = first_reviews_df.loc[
                first_reviews_df["rating"] == Rating.Again
            ]
            first_hard_reviews_df = first_reviews_df.loc[
                first_reviews_df["rating"] == Rating.Hard
            ]
            first_good_reviews_df = first_reviews_df.loc[
                first_reviews_df["rating"] == Rating.Good
            ]
            first_easy_reviews_df = first_reviews_df.loc[
                first_reviews_df["rating"] == Rating.Easy
            ]

            # compute the probability of the user clicking again/hard/good/easy given it's their first review
            num_first_again = len(first_again_reviews_df)
            num_first_hard = len(first_hard_reviews_df)
            num_first_good = len(first_good_reviews_df)
            num_first_easy = len(first_easy_reviews_df)

            num_first_review = (
                num_first_again + num_first_hard + num_first_good + num_first_easy
            )

            prob_first_again = num_first_again / num_first_review
            prob_first_hard = num_first_hard / num_first_review
            prob_first_good = num_first_good / num_first_review
            prob_first_easy = num_first_easy / num_first_review

            probs_and_costs_dict["prob_first_again"] = prob_first_again
            probs_and_costs_dict["prob_first_hard"] = prob_first_hard
            probs_and_costs_dict["prob_first_good"] = prob_first_good
            probs_and_costs_dict["prob_first_easy"] = prob_first_easy

            # compute the cost of the user clicking again/hard/good/easy on their first review
            first_again_review_durations = list(
                first_again_reviews_df["review_duration"]
            )
            first_hard_review_durations = list(first_hard_reviews_df["review_duration"])
            first_good_review_durations = list(first_good_reviews_df["review_duration"])
            first_easy_review_durations = list(first_easy_reviews_df["review_duration"])

            avg_first_again_review_duration = (
                mean(first_again_review_durations)
                if first_again_review_durations
                else 0
            )
            avg_first_hard_review_duration = (
                mean(first_hard_review_durations) if first_hard_review_durations else 0
            )
            avg_first_good_review_duration = (
                mean(first_good_review_durations) if first_good_review_durations else 0
            )
            avg_first_easy_review_duration = (
                mean(first_easy_review_durations) if first_easy_review_durations else 0
            )

            probs_and_costs_dict["avg_first_again_review_duration"] = (
                avg_first_again_review_duration
            )
            probs_and_costs_dict["avg_first_hard_review_duration"] = (
                avg_first_hard_review_duration
            )
            probs_and_costs_dict["avg_first_good_review_duration"] = (
                avg_first_good_review_duration
            )
            probs_and_costs_dict["avg_first_easy_review_duration"] = (
                avg_first_easy_review_duration
            )

            # compute the probabilities and costs of non-first ratings
            non_first_reviews_df = review_log_df.loc[
                review_log_df["card_id"].duplicated(keep="first")
            ].reset_index(drop=True)

            again_reviews_df = non_first_reviews_df.loc[
                non_first_reviews_df["rating"] == Rating.Again
            ]
            hard_reviews_df = non_first_reviews_df.loc[
                non_first_reviews_df["rating"] == Rating.Hard
            ]
            good_reviews_df = non_first_reviews_df.loc[
                non_first_reviews_df["rating"] == Rating.Good
            ]
            easy_reviews_df = non_first_reviews_df.loc[
                non_first_reviews_df["rating"] == Rating.Easy
            ]

            # compute the probability of the user clicking hard/good/easy given they correctly recalled the card
            num_hard = len(hard_reviews_df)
            num_good = len(good_reviews_df)
            num_easy = len(easy_reviews_df)

            num_recall = num_hard + num_good + num_easy

            prob_hard = num_hard / num_recall
            prob_good = num_good / num_recall
            prob_easy = num_easy / num_recall

            probs_and_costs_dict["prob_hard"] = prob_hard
            probs_and_costs_dict["prob_good"] = prob_good
            probs_and_costs_dict["prob_easy"] = prob_easy

            again_review_durations = list(again_reviews_df["review_duration"])
            hard_review_durations = list(hard_reviews_df["review_duration"])
            good_review_durations = list(good_reviews_df["review_duration"])
            easy_review_durations = list(easy_reviews_df["review_duration"])

            avg_again_review_duration = (
                mean(again_review_durations) if again_review_durations else 0
            )
            avg_hard_review_duration = (
                mean(hard_review_durations) if hard_review_durations else 0
            )
            avg_good_review_duration = (
                mean(good_review_durations) if good_review_durations else 0
            )
            avg_easy_review_duration = (
                mean(easy_review_durations) if easy_review_durations else 0
            )

            probs_and_costs_dict["avg_again_review_duration"] = (
                avg_again_review_duration
            )
            probs_and_costs_dict["avg_hard_review_duration"] = avg_hard_review_duration
            probs_and_costs_dict["avg_good_review_duration"] = avg_good_review_duration
            probs_and_costs_dict["avg_easy_review_duration"] = avg_easy_review_duration

            return probs_and_costs_dict
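The first-rating statistics above (per-rating probability plus mean review duration over each card's first review) can be sketched without pandas. The `(card_id, rating, duration_ms)` tuple shape below is an assumption for illustration, not the vendored data model:

```python
from statistics import mean


def first_rating_stats(reviews):
    """reviews: list of (card_id, rating, duration_ms) tuples, ordered by review time.
    Returns per-rating probability and mean duration over each card's first review."""
    first = {}
    for card_id, rating, duration in reviews:
        first.setdefault(card_id, (rating, duration))  # keep only the first review per card
    num_first = len(first)
    stats = {}
    for rating in ("again", "hard", "good", "easy"):
        durs = [d for r, d in first.values() if r == rating]
        stats[f"prob_first_{rating}"] = len(durs) / num_first
        stats[f"avg_first_{rating}_review_duration"] = mean(durs) if durs else 0
    return stats


# card 1's second review is ignored: only first reviews count here
reviews = [(1, "good", 4000), (1, "again", 9000), (2, "again", 8000)]
stats = first_rating_stats(reviews)
```

The `setdefault` call mirrors `~duplicated(keep="first")` above: only the earliest row per `card_id` survives.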
        def _simulate_cost(
            self,
            *,
            desired_retention: float,
            parameters: tuple[float, ...] | list[float],
            num_cards_simulate: int,
            probs_and_costs_dict: dict[str, float],
        ) -> float:
            rng = Random(42)

            # simulate from the beginning of 2025 till before the beginning of 2026
            start_date = datetime(2025, 1, 1, 0, 0, 0, 0, timezone.utc)
            end_date = datetime(2026, 1, 1, 0, 0, 0, 0, timezone.utc)

            scheduler = Scheduler(
                parameters=parameters,
                desired_retention=desired_retention,
                enable_fuzzing=False,
            )

            # unpack probs_and_costs_dict
            prob_first_again = probs_and_costs_dict["prob_first_again"]
            prob_first_hard = probs_and_costs_dict["prob_first_hard"]
            prob_first_good = probs_and_costs_dict["prob_first_good"]
            prob_first_easy = probs_and_costs_dict["prob_first_easy"]

            avg_first_again_review_duration = probs_and_costs_dict[
                "avg_first_again_review_duration"
            ]
            avg_first_hard_review_duration = probs_and_costs_dict[
                "avg_first_hard_review_duration"
            ]
            avg_first_good_review_duration = probs_and_costs_dict[
                "avg_first_good_review_duration"
            ]
            avg_first_easy_review_duration = probs_and_costs_dict[
                "avg_first_easy_review_duration"
            ]

            prob_hard = probs_and_costs_dict["prob_hard"]
            prob_good = probs_and_costs_dict["prob_good"]
            prob_easy = probs_and_costs_dict["prob_easy"]

            avg_again_review_duration = probs_and_costs_dict[
                "avg_again_review_duration"
            ]
            avg_hard_review_duration = probs_and_costs_dict["avg_hard_review_duration"]
            avg_good_review_duration = probs_and_costs_dict["avg_good_review_duration"]
            avg_easy_review_duration = probs_and_costs_dict["avg_easy_review_duration"]

            simulation_cost = 0
            for i in range(num_cards_simulate):
                card = Card()
                curr_date = start_date
                while curr_date < end_date:
                    # the card is new
                    if curr_date == start_date:
                        rating = rng.choices(
                            [Rating.Again, Rating.Hard, Rating.Good, Rating.Easy],
                            weights=[
                                prob_first_again,
                                prob_first_hard,
                                prob_first_good,
                                prob_first_easy,
                            ],
                        )[0]

                        if rating == Rating.Again:
                            simulation_cost += avg_first_again_review_duration

                        elif rating == Rating.Hard:
                            simulation_cost += avg_first_hard_review_duration

                        elif rating == Rating.Good:
                            simulation_cost += avg_first_good_review_duration

                        elif rating == Rating.Easy:
                            simulation_cost += avg_first_easy_review_duration

                    # the card is not new
                    else:
                        rating = rng.choices(
                            ["recall", Rating.Again],
                            weights=[desired_retention, 1.0 - desired_retention],
                        )[0]

                        if rating == "recall":
                            # compute probability that the user chose hard/good/easy, GIVEN that they correctly recalled the card
                            rating = rng.choices(
                                [Rating.Hard, Rating.Good, Rating.Easy],
                                weights=[prob_hard, prob_good, prob_easy],
                            )[0]

                        if rating == Rating.Again:
                            simulation_cost += avg_again_review_duration

                        elif rating == Rating.Hard:
                            simulation_cost += avg_hard_review_duration

                        elif rating == Rating.Good:
                            simulation_cost += avg_good_review_duration

                        elif rating == Rating.Easy:
                            simulation_cost += avg_easy_review_duration

                    card, _ = scheduler.review_card(
                        card=card, rating=rating, review_datetime=curr_date
                    )
                    curr_date = card.due

            total_knowledge = desired_retention * num_cards_simulate
            simulation_cost = simulation_cost / total_knowledge

            return simulation_cost
        def compute_optimal_retention(
            self, parameters: tuple[float, ...] | list[float]
        ) -> float:
            def _validate_review_logs() -> None:
                if len(self.review_logs) < 512:
                    raise ValueError(
                        "Not enough ReviewLogs: at least 512 ReviewLog objects are required to compute optimal retention"
                    )

                for review_log in self.review_logs:
                    if review_log.review_duration is None:
                        raise ValueError(
                            "ReviewLog.review_duration cannot be None when computing optimal retention"
                        )

            _validate_review_logs()

            NUM_CARDS_SIMULATE = 1000
            DESIRED_RETENTIONS = [0.7, 0.75, 0.8, 0.85, 0.9, 0.95]

            probs_and_costs_dict = self._compute_probs_and_costs()

            simulation_costs = []
            for desired_retention in DESIRED_RETENTIONS:
                simulation_cost = self._simulate_cost(
                    desired_retention=desired_retention,
                    parameters=parameters,
                    num_cards_simulate=NUM_CARDS_SIMULATE,
                    probs_and_costs_dict=probs_and_costs_dict,
                )
                simulation_costs.append(simulation_cost)

            min_index = simulation_costs.index(min(simulation_costs))
            optimal_retention = DESIRED_RETENTIONS[min_index]

            return optimal_retention


except ImportError:

    class Optimizer:
        def __init__(self, *args, **kwargs) -> None:
            raise ImportError(
                'Optimizer is not installed.\nInstall it with: pip install "fsrs[optimizer]"'
            )


__all__ = ["Optimizer"]
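`compute_optimal_retention` above is a grid search: simulate the review-time cost per unit of retained knowledge for each candidate retention and keep the argmin. A minimal sketch of that selection step, with made-up costs standing in for `_simulate_cost`:

```python
def pick_optimal_retention(candidates, cost_fn):
    """Return the candidate desired-retention with the lowest simulated cost."""
    costs = [cost_fn(r) for r in candidates]
    return candidates[costs.index(min(costs))]


# toy convex cost curve: too-low retention wastes relearning time,
# too-high retention wastes reviews (these numbers are made up for illustration)
toy_costs = {0.7: 130.0, 0.75: 110.0, 0.8: 95.0, 0.85: 90.0, 0.9: 104.0, 0.95: 150.0}
best = pick_optimal_retention(sorted(toy_costs), toy_costs.__getitem__)
# → 0.85
```

Because the simulation seeds its own `Random(42)`, the cost per candidate is deterministic, so the argmin is reproducible across runs.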
0
src/heurams/vendor/pyfsrs/py.typed
vendored
15
src/heurams/vendor/pyfsrs/rating.py
vendored
@@ -1,15 +0,0 @@
from enum import IntEnum


class Rating(IntEnum):
    """
    Enum representing the four possible ratings when reviewing a card.
    """

    Again = 1
    Hard = 2
    Good = 3
    Easy = 4


__all__ = ["Rating"]
117
src/heurams/vendor/pyfsrs/review_log.py
vendored
@@ -1,117 +0,0 @@
"""
fsrs.review_log
---------

This module defines the ReviewLog and Rating classes.

Classes:
    ReviewLog: Represents the log entry of a Card that has been reviewed.
    Rating: Enum representing the four possible ratings when reviewing a card.
"""

from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime
from typing import TypedDict
import json
from typing_extensions import Self
from fsrs.rating import Rating


class ReviewLogDict(TypedDict):
    """
    JSON-serializable dictionary representation of a ReviewLog object.
    """

    card_id: int
    rating: int
    review_datetime: str
    review_duration: int | None


@dataclass
class ReviewLog:
    """
    Represents the log entry of a Card object that has been reviewed.

    Attributes:
        card_id: The id of the card being reviewed.
        rating: The rating given to the card during the review.
        review_datetime: The date and time of the review.
        review_duration: The number of milliseconds it took to review the card, or None if unspecified.
    """

    card_id: int
    rating: Rating
    review_datetime: datetime
    review_duration: int | None

    def to_dict(
        self,
    ) -> ReviewLogDict:
        """
        Returns a dictionary representation of the ReviewLog object.

        Returns:
            ReviewLogDict: A dictionary representation of the ReviewLog object.
        """

        return {
            "card_id": self.card_id,
            "rating": int(self.rating),
            "review_datetime": self.review_datetime.isoformat(),
            "review_duration": self.review_duration,
        }

    @classmethod
    def from_dict(
        cls,
        source_dict: ReviewLogDict,
    ) -> Self:
        """
        Creates a ReviewLog object from an existing dictionary.

        Args:
            source_dict: A dictionary representing an existing ReviewLog object.

        Returns:
            Self: A ReviewLog object created from the provided dictionary.
        """

        return cls(
            card_id=source_dict["card_id"],
            rating=Rating(int(source_dict["rating"])),
            review_datetime=datetime.fromisoformat(source_dict["review_datetime"]),
            review_duration=source_dict["review_duration"],
        )

    def to_json(self, indent: int | str | None = None) -> str:
        """
        Returns a JSON-serialized string of the ReviewLog object.

        Args:
            indent: Equivalent to the indent argument of json.dumps()

        Returns:
            str: A JSON-serialized string of the ReviewLog object.
        """

        return json.dumps(self.to_dict(), indent=indent)

    @classmethod
    def from_json(cls, source_json: str) -> Self:
        """
        Creates a ReviewLog object from a JSON-serialized string.

        Args:
            source_json: A JSON-serialized string of an existing ReviewLog object.

        Returns:
            Self: A ReviewLog object created from the JSON string.
        """

        source_dict: ReviewLogDict = json.loads(source_json)
        return cls.from_dict(source_dict=source_dict)


__all__ = ["ReviewLog"]
856
src/heurams/vendor/pyfsrs/scheduler.py
vendored
@@ -1,856 +0,0 @@
"""
fsrs.scheduler
---------

This module defines the Scheduler class as well as the various constants used in its calculations.

Classes:
    Scheduler: The FSRS spaced-repetition scheduler.
"""

from __future__ import annotations
from collections.abc import Sequence
import math
from datetime import datetime, timezone, timedelta
from copy import copy
import json
from random import random
from dataclasses import dataclass
from fsrs.state import State
from fsrs.card import Card
from fsrs.rating import Rating
from fsrs.review_log import ReviewLog
from typing import TYPE_CHECKING, TypedDict, overload

if TYPE_CHECKING:
    from torch import Tensor  # torch is optional; import only for type checking
    from typing_extensions import Self

FSRS_DEFAULT_DECAY = 0.1542
DEFAULT_PARAMETERS = (
    0.212,
    1.2931,
    2.3065,
    8.2956,
    6.4133,
    0.8334,
    3.0194,
    0.001,
    1.8722,
    0.1666,
    0.796,
    1.4835,
    0.0614,
    0.2629,
    1.6483,
    0.6014,
    1.8729,
    0.5425,
    0.0912,
    0.0658,
    FSRS_DEFAULT_DECAY,
)

STABILITY_MIN = 0.001
LOWER_BOUNDS_PARAMETERS = (
    STABILITY_MIN,
    STABILITY_MIN,
    STABILITY_MIN,
    STABILITY_MIN,
    1.0,
    0.001,
    0.001,
    0.001,
    0.0,
    0.0,
    0.001,
    0.001,
    0.001,
    0.001,
    0.0,
    0.0,
    1.0,
    0.0,
    0.0,
    0.0,
    0.1,
)

INITIAL_STABILITY_MAX = 100.0
UPPER_BOUNDS_PARAMETERS = (
    INITIAL_STABILITY_MAX,
    INITIAL_STABILITY_MAX,
    INITIAL_STABILITY_MAX,
    INITIAL_STABILITY_MAX,
    10.0,
    4.0,
    4.0,
    0.75,
    4.5,
    0.8,
    3.5,
    5.0,
    0.25,
    0.9,
    4.0,
    1.0,
    6.0,
    2.0,
    2.0,
    0.8,
    0.8,
)

MIN_DIFFICULTY = 1.0
MAX_DIFFICULTY = 10.0

FUZZ_RANGES = [
    {
        "start": 2.5,
        "end": 7.0,
        "factor": 0.15,
    },
    {
        "start": 7.0,
        "end": 20.0,
        "factor": 0.1,
    },
    {
        "start": 20.0,
        "end": math.inf,
        "factor": 0.05,
    },
]
class SchedulerDict(TypedDict):
    """
    JSON-serializable dictionary representation of a Scheduler object.
    """

    parameters: list[float]
    desired_retention: float
    learning_steps: list[int]
    relearning_steps: list[int]
    maximum_interval: int
    enable_fuzzing: bool


@dataclass(init=False)
class Scheduler:
    """
    The FSRS scheduler.

    Enables the reviewing and future scheduling of cards according to the FSRS algorithm.

    Attributes:
        parameters: The model weights of the FSRS scheduler.
        desired_retention: The desired retention rate of cards scheduled with the scheduler.
        learning_steps: Small time intervals that schedule cards in the Learning state.
        relearning_steps: Small time intervals that schedule cards in the Relearning state.
        maximum_interval: The maximum number of days a Review-state card can be scheduled into the future.
        enable_fuzzing: Whether to apply a small amount of random 'fuzz' to calculated intervals.
    """

    parameters: tuple[float, ...]
    desired_retention: float
    learning_steps: tuple[timedelta, ...]
    relearning_steps: tuple[timedelta, ...]
    maximum_interval: int
    enable_fuzzing: bool

    def __init__(
        self,
        parameters: Sequence[float] = DEFAULT_PARAMETERS,
        desired_retention: float = 0.9,
        learning_steps: tuple[timedelta, ...] | list[timedelta] = (
            timedelta(minutes=1),
            timedelta(minutes=10),
        ),
        relearning_steps: tuple[timedelta, ...] | list[timedelta] = (
            timedelta(minutes=10),
        ),
        maximum_interval: int = 36500,
        enable_fuzzing: bool = True,
    ) -> None:
        self._validate_parameters(parameters=parameters)

        self.parameters = tuple(parameters)
        self.desired_retention = desired_retention
        self.learning_steps = tuple(learning_steps)
        self.relearning_steps = tuple(relearning_steps)
        self.maximum_interval = maximum_interval
        self.enable_fuzzing = enable_fuzzing

        self._DECAY = -self.parameters[20]
        self._FACTOR = 0.9 ** (1 / self._DECAY) - 1

    def _validate_parameters(self, *, parameters: Sequence[float]) -> None:
        if len(parameters) != len(LOWER_BOUNDS_PARAMETERS):
            raise ValueError(
                f"Expected {len(LOWER_BOUNDS_PARAMETERS)} parameters, got {len(parameters)}."
            )

        error_messages = []
        for index, (parameter, lower_bound, upper_bound) in enumerate(
            zip(parameters, LOWER_BOUNDS_PARAMETERS, UPPER_BOUNDS_PARAMETERS)
        ):
            if not lower_bound <= parameter <= upper_bound:
                error_message = f"parameters[{index}] = {parameter} is out of bounds: ({lower_bound}, {upper_bound})"
                error_messages.append(error_message)

        if len(error_messages) > 0:
            raise ValueError(
                "One or more parameters are out of bounds:\n"
                + "\n".join(error_messages)
            )
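The bounds check above collects one message per out-of-range parameter before raising, so a caller sees every violation at once. A minimal sketch with shortened, illustrative bounds (the real tuples have 21 entries):

```python
# illustrative 3-element bounds, not the real LOWER/UPPER_BOUNDS_PARAMETERS
LOWER = (0.001, 0.001, 1.0)
UPPER = (100.0, 4.0, 10.0)


def out_of_bounds(params, lower=LOWER, upper=UPPER):
    """Return one message per parameter outside [lower, upper], like _validate_parameters."""
    return [
        f"parameters[{i}] = {p} is out of bounds: ({lo}, {hi})"
        for i, (p, lo, hi) in enumerate(zip(params, lower, upper))
        if not lo <= p <= hi
    ]


assert out_of_bounds((0.5, 2.0, 5.0)) == []  # all in range
errors = out_of_bounds((0.5, 9.0, 5.0))  # second parameter exceeds its upper bound
```

Accumulating messages instead of raising on the first violation is what lets the real method report all offending indices in a single ValueError.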
    def get_card_retrievability(
        self, card: Card, current_datetime: datetime | None = None
    ) -> float:
        """
        Calculates a Card object's current retrievability for a given date and time.

        The retrievability of a card is the predicted probability that the card is correctly recalled at the provided datetime.

        Args:
            card: The card whose retrievability is to be calculated
            current_datetime: The current date and time

        Returns:
            float: The retrievability of the Card object.
        """

        if card.last_review is None or card.stability is None:
            return 0

        if current_datetime is None:
            current_datetime = datetime.now(timezone.utc)

        elapsed_days = max(0, (current_datetime - card.last_review).days)

        return (1 + self._FACTOR * elapsed_days / card.stability) ** self._DECAY
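The power forgetting curve returned above is calibrated through `_FACTOR` so that retrievability is exactly 0.9 when the elapsed days equal the card's stability. A quick numeric check with the default decay:

```python
FSRS_DEFAULT_DECAY = 0.1542


def retrievability(elapsed_days: float, stability: float, decay: float = FSRS_DEFAULT_DECAY) -> float:
    """Power forgetting curve used by the scheduler: R(t) = (1 + FACTOR * t / S) ** -decay."""
    d = -decay
    factor = 0.9 ** (1 / d) - 1  # calibrated so that R(S) == 0.9
    return (1 + factor * elapsed_days / stability) ** d


r = retrievability(elapsed_days=10, stability=10)
# by construction this is 0.9, up to float rounding
```

This is why stability is defined as "the interval at which recall probability drops to 90%": substituting `t = S` collapses the expression to `(0.9 ** (1/d)) ** d = 0.9`.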
    def review_card(
        self,
        card: Card,
        rating: Rating,
        review_datetime: datetime | None = None,
        review_duration: int | None = None,
    ) -> tuple[Card, ReviewLog]:
        """
        Reviews a card with a given rating at a given time for a specified duration.

        Args:
            card: The card being reviewed.
            rating: The chosen rating for the card being reviewed.
            review_datetime: The date and time of the review.
            review_duration: The number of milliseconds it took to review the card, or None if unspecified.

        Returns:
            tuple[Card, ReviewLog]: A tuple containing the updated, reviewed card and its corresponding review log.

        Raises:
            ValueError: If the `review_datetime` argument is not timezone-aware or is not set to UTC.
        """

        if review_datetime is not None and (
            (review_datetime.tzinfo is None) or (review_datetime.tzinfo != timezone.utc)
        ):
            raise ValueError("datetime must be timezone-aware and set to UTC")

        card = copy(card)

        if review_datetime is None:
            review_datetime = datetime.now(timezone.utc)

        days_since_last_review = (
            (review_datetime - card.last_review).days if card.last_review else None
        )

        match card.state:
            case State.Learning:
                assert card.step is not None

                # update the card's stability and difficulty
                if card.stability is None or card.difficulty is None:
                    card.stability = self._initial_stability(rating=rating)
                    card.difficulty = self._initial_difficulty(
                        rating=rating, clamp=True
                    )

                elif days_since_last_review is not None and days_since_last_review < 1:
                    card.stability = self._short_term_stability(
                        stability=card.stability, rating=rating
                    )
                    card.difficulty = self._next_difficulty(
                        difficulty=card.difficulty, rating=rating
                    )

                else:
                    card.stability = self._next_stability(
                        difficulty=card.difficulty,
                        stability=card.stability,
                        retrievability=self.get_card_retrievability(
                            card,
                            current_datetime=review_datetime,
                        ),
                        rating=rating,
                    )
                    card.difficulty = self._next_difficulty(
                        difficulty=card.difficulty, rating=rating
                    )

                # calculate the card's next interval
                ## the first if-clause handles the edge case where a Card in the Learning state was previously
                ## scheduled with a Scheduler with more learning_steps than the current Scheduler
                if len(self.learning_steps) == 0 or (
                    card.step >= len(self.learning_steps)
                    and rating in (Rating.Hard, Rating.Good, Rating.Easy)
                ):
                    card.state = State.Review
                    card.step = None

                    next_interval_days = self._next_interval(stability=card.stability)
                    next_interval = timedelta(days=next_interval_days)

                else:
                    match rating:
                        case Rating.Again:
                            card.step = 0
                            next_interval = self.learning_steps[card.step]

                        case Rating.Hard:
                            # the card's step stays the same

                            if card.step == 0 and len(self.learning_steps) == 1:
                                next_interval = self.learning_steps[0] * 1.5
                            elif card.step == 0 and len(self.learning_steps) >= 2:
                                next_interval = (
                                    self.learning_steps[0] + self.learning_steps[1]
                                ) / 2.0
                            else:
                                next_interval = self.learning_steps[card.step]

                        case Rating.Good:
                            if card.step + 1 == len(
                                self.learning_steps
                            ):  # the last step
                                card.state = State.Review
                                card.step = None

                                next_interval_days = self._next_interval(
                                    stability=card.stability
                                )
                                next_interval = timedelta(days=next_interval_days)

                            else:
                                card.step += 1
                                next_interval = self.learning_steps[card.step]

                        case Rating.Easy:
                            card.state = State.Review
                            card.step = None

                            next_interval_days = self._next_interval(
                                stability=card.stability
                            )
                            next_interval = timedelta(days=next_interval_days)

                        case _:
                            raise ValueError(f"Unknown rating: {rating}")

            case State.Review:
                assert card.stability is not None
                assert card.difficulty is not None

                # update the card's stability and difficulty
                if days_since_last_review is not None and days_since_last_review < 1:
                    card.stability = self._short_term_stability(
                        stability=card.stability, rating=rating
                    )
                else:
                    card.stability = self._next_stability(
                        difficulty=card.difficulty,
                        stability=card.stability,
                        retrievability=self.get_card_retrievability(
                            card,
                            current_datetime=review_datetime,
                        ),
                        rating=rating,
                    )

                card.difficulty = self._next_difficulty(
                    difficulty=card.difficulty, rating=rating
                )

                # calculate the card's next interval
                match rating:
                    case Rating.Again:
                        # if there are no relearning steps (they were left blank)
                        if len(self.relearning_steps) == 0:
                            next_interval_days = self._next_interval(
                                stability=card.stability
                            )
                            next_interval = timedelta(days=next_interval_days)

                        else:
                            card.state = State.Relearning
                            card.step = 0

                            next_interval = self.relearning_steps[card.step]

                    case Rating.Hard | Rating.Good | Rating.Easy:
                        next_interval_days = self._next_interval(
                            stability=card.stability
                        )
                        next_interval = timedelta(days=next_interval_days)

                    case _:
                        raise ValueError(f"Unknown rating: {rating}")

            case State.Relearning:
                assert card.stability is not None
                assert card.difficulty is not None
                assert card.step is not None

                # update the card's stability and difficulty
                if days_since_last_review is not None and days_since_last_review < 1:
                    card.stability = self._short_term_stability(
                        stability=card.stability, rating=rating
                    )
                    card.difficulty = self._next_difficulty(
                        difficulty=card.difficulty, rating=rating
                    )

                else:
                    card.stability = self._next_stability(
                        difficulty=card.difficulty,
                        stability=card.stability,
                        retrievability=self.get_card_retrievability(
                            card,
                            current_datetime=review_datetime,
                        ),
                        rating=rating,
                    )
                    card.difficulty = self._next_difficulty(
                        difficulty=card.difficulty, rating=rating
                    )

                # calculate the card's next interval
                ## the first if-clause handles the edge case where a Card in the Relearning state was previously
                ## scheduled with a Scheduler with more relearning_steps than the current Scheduler
                if len(self.relearning_steps) == 0 or (
                    card.step >= len(self.relearning_steps)
                    and rating in (Rating.Hard, Rating.Good, Rating.Easy)
                ):
                    card.state = State.Review
                    card.step = None

                    next_interval_days = self._next_interval(stability=card.stability)
                    next_interval = timedelta(days=next_interval_days)

                else:
                    match rating:
                        case Rating.Again:
                            card.step = 0
                            next_interval = self.relearning_steps[card.step]

                        case Rating.Hard:
                            # the card's step stays the same

                            if card.step == 0 and len(self.relearning_steps) == 1:
                                next_interval = self.relearning_steps[0] * 1.5
                            elif card.step == 0 and len(self.relearning_steps) >= 2:
                                next_interval = (
                                    self.relearning_steps[0] + self.relearning_steps[1]
                                ) / 2.0
                            else:
                                next_interval = self.relearning_steps[card.step]

                        case Rating.Good:
                            if card.step + 1 == len(
                                self.relearning_steps
                            ):  # the last step
                                card.state = State.Review
                                card.step = None

                                next_interval_days = self._next_interval(
                                    stability=card.stability
                                )
                                next_interval = timedelta(days=next_interval_days)

                            else:
                                card.step += 1
                                next_interval = self.relearning_steps[card.step]

                        case Rating.Easy:
                            card.state = State.Review
                            card.step = None

                            next_interval_days = self._next_interval(
                                stability=card.stability
                            )
                            next_interval = timedelta(days=next_interval_days)

                        case _:
                            raise ValueError(f"Unknown rating: {rating}")

            case _:
                raise ValueError(f"Unknown card state: {card.state}")

        if self.enable_fuzzing and card.state == State.Review:
            next_interval = self._get_fuzzed_interval(interval=next_interval)

        card.due = review_datetime + next_interval
        card.last_review = review_datetime

        review_log = ReviewLog(
            card_id=card.card_id,
            rating=rating,
review_datetime=review_datetime,
|
||||
review_duration=review_duration,
|
||||
)
|
||||
|
||||
return card, review_log
|
||||
|
||||
def reschedule_card(self, card: Card, review_logs: list[ReviewLog]) -> Card:
|
||||
"""
|
||||
Reschedules/updates the given card with the current scheduler provided that card's review logs.
|
||||
|
||||
If the current card was previously scheduled with a different scheduler, you may want to reschedule/update
|
||||
it as if it had always been scheduled with this current scheduler. For example, you may want to reschedule
|
||||
each of your cards with a new scheduler after computing the optimal parameters with the Optimizer.
|
||||
|
||||
Args:
|
||||
card: The card to be rescheduled/updated.
|
||||
review_logs: A list of that card's review logs (order doesn't matter).
|
||||
|
||||
Returns:
|
||||
Card: A new card that has been rescheduled/updated with this current scheduler.
|
||||
|
||||
Raises:
|
||||
ValueError: If any of the review logs are for a card other than the one specified, this will raise an error.
|
||||
|
||||
"""
|
||||
|
||||
for review_log in review_logs:
|
||||
if review_log.card_id != card.card_id:
|
||||
raise ValueError(
|
||||
f"ReviewLog card_id {review_log.card_id} does not match Card card_id {card.card_id}"
|
||||
)
|
||||
|
||||
review_logs = sorted(review_logs, key=lambda log: log.review_datetime)
|
||||
|
||||
rescheduled_card = Card(card_id=card.card_id, due=card.due)
|
||||
|
||||
for review_log in review_logs:
|
||||
rescheduled_card, _ = self.review_card(
|
||||
card=rescheduled_card,
|
||||
rating=review_log.rating,
|
||||
review_datetime=review_log.review_datetime,
|
||||
)
|
||||
|
||||
return rescheduled_card
|
||||
|
||||
def to_dict(
|
||||
self,
|
||||
) -> SchedulerDict:
|
||||
"""
|
||||
Returns a dictionary representation of the Scheduler object.
|
||||
|
||||
Returns:
|
||||
SchedulerDict: A dictionary representation of the Scheduler object.
|
||||
"""
|
||||
|
||||
return {
|
||||
"parameters": list(self.parameters),
|
||||
"desired_retention": self.desired_retention,
|
||||
"learning_steps": [
|
||||
int(learning_step.total_seconds())
|
||||
for learning_step in self.learning_steps
|
||||
],
|
||||
"relearning_steps": [
|
||||
int(relearning_step.total_seconds())
|
||||
for relearning_step in self.relearning_steps
|
||||
],
|
||||
"maximum_interval": self.maximum_interval,
|
||||
"enable_fuzzing": self.enable_fuzzing,
|
||||
}
|
||||
|
||||
@classmethod
|
||||
def from_dict(cls, source_dict: SchedulerDict) -> Self:
|
||||
"""
|
||||
Creates a Scheduler object from an existing dictionary.
|
||||
|
||||
Args:
|
||||
source_dict: A dictionary representing an existing Scheduler object.
|
||||
|
||||
Returns:
|
||||
Self: A Scheduler object created from the provided dictionary.
|
||||
"""
|
||||
|
||||
return cls(
|
||||
parameters=source_dict["parameters"],
|
||||
desired_retention=source_dict["desired_retention"],
|
||||
learning_steps=[
|
||||
timedelta(seconds=learning_step)
|
||||
for learning_step in source_dict["learning_steps"]
|
||||
],
|
||||
relearning_steps=[
|
||||
timedelta(seconds=relearning_step)
|
||||
for relearning_step in source_dict["relearning_steps"]
|
||||
],
|
||||
maximum_interval=source_dict["maximum_interval"],
|
||||
enable_fuzzing=source_dict["enable_fuzzing"],
|
||||
)
|
||||
|
||||
def to_json(self, indent: int | str | None = None) -> str:
|
||||
"""
|
||||
Returns a JSON-serialized string of the Scheduler object.
|
||||
|
||||
Args:
|
||||
indent: Equivalent argument to the indent in json.dumps()
|
||||
|
||||
Returns:
|
||||
str: A JSON-serialized string of the Scheduler object.
|
||||
"""
|
||||
|
||||
return json.dumps(self.to_dict(), indent=indent)
|
||||
|
||||
@classmethod
|
||||
def from_json(cls, source_json: str) -> Self:
|
||||
"""
|
||||
Creates a Scheduler object from a JSON-serialized string.
|
||||
|
||||
Args:
|
||||
source_json: A JSON-serialized string of an existing Scheduler object.
|
||||
|
||||
Returns:
|
||||
Self: A Scheduler object created from the JSON string.
|
||||
"""
|
||||
|
||||
source_dict: SchedulerDict = json.loads(source_json)
|
||||
return cls.from_dict(source_dict=source_dict)
|
||||
|
||||
@overload
|
||||
def _clamp_difficulty(self, *, difficulty: float) -> float: ...
|
||||
@overload
|
||||
def _clamp_difficulty(self, *, difficulty: Tensor) -> Tensor: ...
|
||||
def _clamp_difficulty(self, *, difficulty: float | Tensor) -> float | Tensor:
|
||||
if isinstance(difficulty, (int, float)):
|
||||
difficulty = min(max(difficulty, MIN_DIFFICULTY), MAX_DIFFICULTY)
|
||||
else:
|
||||
difficulty = difficulty.clamp(min=MIN_DIFFICULTY, max=MAX_DIFFICULTY)
|
||||
|
||||
return difficulty
|
||||
|
||||
@overload
|
||||
def _clamp_stability(self, *, stability: float) -> float: ...
|
||||
@overload
|
||||
def _clamp_stability(self, *, stability: Tensor) -> Tensor: ...
|
||||
def _clamp_stability(self, *, stability: float | Tensor) -> float | Tensor:
|
||||
if isinstance(stability, (int, float)):
|
||||
stability = max(stability, STABILITY_MIN)
|
||||
else:
|
||||
stability = stability.clamp(min=STABILITY_MIN)
|
||||
|
||||
return stability
|
||||
|
||||
def _initial_stability(self, *, rating: Rating) -> float:
|
||||
initial_stability = self.parameters[rating - 1]
|
||||
|
||||
initial_stability = self._clamp_stability(stability=initial_stability)
|
||||
|
||||
return initial_stability
|
||||
|
||||
def _initial_difficulty(self, *, rating: Rating, clamp: bool) -> float:
|
||||
initial_difficulty = (
|
||||
self.parameters[4] - (math.e ** (self.parameters[5] * (rating - 1))) + 1
|
||||
)
|
||||
|
||||
if clamp:
|
||||
initial_difficulty = self._clamp_difficulty(difficulty=initial_difficulty)
|
||||
|
||||
return initial_difficulty
|
||||
|
||||
def _next_interval(self, *, stability: float) -> int:
|
||||
next_interval = (stability / self._FACTOR) * (
|
||||
(self.desired_retention ** (1 / self._DECAY)) - 1
|
||||
)
|
||||
|
||||
if not isinstance(next_interval, (int, float)):
|
||||
next_interval = next_interval.detach().item()
|
||||
|
||||
next_interval = round(next_interval) # intervals are full days
|
||||
|
||||
# must be at least 1 day long
|
||||
next_interval = max(next_interval, 1)
|
||||
|
||||
# can not be longer than the maximum interval
|
||||
next_interval = min(next_interval, self.maximum_interval)
|
||||
|
||||
return next_interval
|
||||
|
||||
def _short_term_stability(self, *, stability: float, rating: Rating) -> float:
|
||||
short_term_stability_increase = (
|
||||
math.e ** (self.parameters[17] * (rating - 3 + self.parameters[18]))
|
||||
) * (stability ** -self.parameters[19])
|
||||
|
||||
if rating in (Rating.Good, Rating.Easy):
|
||||
if isinstance(short_term_stability_increase, (int, float)):
|
||||
short_term_stability_increase = max(short_term_stability_increase, 1.0)
|
||||
else:
|
||||
short_term_stability_increase = short_term_stability_increase.clamp(
|
||||
min=1.0
|
||||
)
|
||||
|
||||
short_term_stability = stability * short_term_stability_increase
|
||||
|
||||
short_term_stability = self._clamp_stability(stability=short_term_stability)
|
||||
|
||||
return short_term_stability
|
||||
|
||||
def _next_difficulty(self, *, difficulty: float, rating: Rating) -> float:
|
||||
def _linear_damping(*, delta_difficulty: float, difficulty: float) -> float:
|
||||
return (10.0 - difficulty) * delta_difficulty / 9.0
|
||||
|
||||
def _mean_reversion(*, arg_1: float, arg_2: float) -> float:
|
||||
return self.parameters[7] * arg_1 + (1 - self.parameters[7]) * arg_2
|
||||
|
||||
arg_1 = self._initial_difficulty(rating=Rating.Easy, clamp=False)
|
||||
|
||||
delta_difficulty = -(self.parameters[6] * (rating - 3))
|
||||
arg_2 = difficulty + _linear_damping(
|
||||
delta_difficulty=delta_difficulty, difficulty=difficulty
|
||||
)
|
||||
|
||||
next_difficulty = _mean_reversion(arg_1=arg_1, arg_2=arg_2)
|
||||
|
||||
next_difficulty = self._clamp_difficulty(difficulty=next_difficulty)
|
||||
|
||||
return next_difficulty
|
||||
|
||||
def _next_stability(
|
||||
self,
|
||||
*,
|
||||
difficulty: float,
|
||||
stability: float,
|
||||
retrievability: float,
|
||||
rating: Rating,
|
||||
) -> float:
|
||||
if rating == Rating.Again:
|
||||
next_stability = self._next_forget_stability(
|
||||
difficulty=difficulty,
|
||||
stability=stability,
|
||||
retrievability=retrievability,
|
||||
)
|
||||
|
||||
elif rating in (Rating.Hard, Rating.Good, Rating.Easy):
|
||||
next_stability = self._next_recall_stability(
|
||||
difficulty=difficulty,
|
||||
stability=stability,
|
||||
retrievability=retrievability,
|
||||
rating=rating,
|
||||
)
|
||||
|
||||
else:
|
||||
raise ValueError(f"Unknown rating: {rating}")
|
||||
|
||||
next_stability = self._clamp_stability(stability=next_stability)
|
||||
|
||||
return next_stability
|
||||
|
||||
def _next_forget_stability(
|
||||
self, *, difficulty: float, stability: float, retrievability: float
|
||||
) -> float:
|
||||
next_forget_stability_long_term_params = (
|
||||
self.parameters[11]
|
||||
* (difficulty ** -self.parameters[12])
|
||||
* (((stability + 1) ** (self.parameters[13])) - 1)
|
||||
* (math.e ** ((1 - retrievability) * self.parameters[14]))
|
||||
)
|
||||
|
||||
next_forget_stability_short_term_params = stability / (
|
||||
math.e ** (self.parameters[17] * self.parameters[18])
|
||||
)
|
||||
|
||||
return min(
|
||||
next_forget_stability_long_term_params,
|
||||
next_forget_stability_short_term_params,
|
||||
)
|
||||
|
||||
def _next_recall_stability(
|
||||
self,
|
||||
*,
|
||||
difficulty: float,
|
||||
stability: float,
|
||||
retrievability: float,
|
||||
rating: Rating,
|
||||
) -> float:
|
||||
hard_penalty = self.parameters[15] if rating == Rating.Hard else 1
|
||||
easy_bonus = self.parameters[16] if rating == Rating.Easy else 1
|
||||
|
||||
return stability * (
|
||||
1
|
||||
+ (math.e ** (self.parameters[8]))
|
||||
* (11 - difficulty)
|
||||
* (stability ** -self.parameters[9])
|
||||
* ((math.e ** ((1 - retrievability) * self.parameters[10])) - 1)
|
||||
* hard_penalty
|
||||
* easy_bonus
|
||||
)
|
||||
|
||||
def _get_fuzzed_interval(self, *, interval: timedelta) -> timedelta:
|
||||
"""
|
||||
Takes the current calculated interval and adds a small amount of random fuzz to it.
|
||||
For example, a card that would've been due in 50 days, after fuzzing, might be due in 49, or 51 days.
|
||||
|
||||
Args:
|
||||
interval: The calculated next interval, before fuzzing.
|
||||
|
||||
Returns:
|
||||
timedelta: The new interval, after fuzzing.
|
||||
"""
|
||||
|
||||
interval_days = interval.days
|
||||
|
||||
if interval_days < 2.5: # fuzz is not applied to intervals less than 2.5
|
||||
return interval
|
||||
|
||||
def _get_fuzz_range(*, interval_days: int) -> tuple[int, int]:
|
||||
"""
|
||||
Helper function that computes the possible upper and lower bounds of the interval after fuzzing.
|
||||
"""
|
||||
|
||||
delta = 1.0
|
||||
for fuzz_range in FUZZ_RANGES:
|
||||
delta += fuzz_range["factor"] * max(
|
||||
min(float(interval_days), fuzz_range["end"]) - fuzz_range["start"],
|
||||
0.0,
|
||||
)
|
||||
|
||||
min_ivl = int(round(interval_days - delta))
|
||||
max_ivl = int(round(interval_days + delta))
|
||||
|
||||
# make sure the min_ivl and max_ivl fall into a valid range
|
||||
min_ivl = max(2, min_ivl)
|
||||
max_ivl = min(max_ivl, self.maximum_interval)
|
||||
min_ivl = min(min_ivl, max_ivl)
|
||||
|
||||
return min_ivl, max_ivl
|
||||
|
||||
min_ivl, max_ivl = _get_fuzz_range(interval_days=interval_days)
|
||||
|
||||
fuzzed_interval_days = (
|
||||
random() * (max_ivl - min_ivl + 1)
|
||||
) + min_ivl # the next interval is a random value between min_ivl and max_ivl
|
||||
|
||||
fuzzed_interval_days = min(round(fuzzed_interval_days), self.maximum_interval)
|
||||
|
||||
fuzzed_interval = timedelta(days=fuzzed_interval_days)
|
||||
|
||||
return fuzzed_interval
|
||||
|
||||
|
||||
__all__ = ["Scheduler"]
|
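The `_next_interval` method above inverts the FSRS forgetting curve: retrievability t days after a review with stability S is modeled as a power curve, and solving it for the desired retention gives the next interval in days. A standalone sketch of that inversion, assuming the classic constants `DECAY = -0.5` and `FACTOR = 19/81` (the actual `_FACTOR`/`_DECAY` values are defined elsewhere in this vendored module and may differ):

```python
# Assumed FSRS-style constants; the vendored Scheduler defines its own.
DECAY = -0.5
FACTOR = 19 / 81  # chosen so that retrievability(t=S) == 0.9


def retrievability(t: float, stability: float) -> float:
    """Modeled probability of recall t days after a review."""
    return (1 + FACTOR * t / stability) ** DECAY


def next_interval(stability: float, desired_retention: float,
                  maximum_interval: int = 36500) -> int:
    """Solve retrievability(t) == desired_retention for t, mirroring
    the rounding and clamping done by _next_interval."""
    t = (stability / FACTOR) * (desired_retention ** (1 / DECAY) - 1)
    # intervals are whole days, at least 1, at most maximum_interval
    return min(max(round(t), 1), maximum_interval)
```

With these constants, a card reviewed at 90% desired retention gets an interval equal to its stability, which is exactly the property the factor is chosen for.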
14
src/heurams/vendor/pyfsrs/state.py
vendored
@@ -1,14 +0,0 @@
from enum import IntEnum


class State(IntEnum):
    """
    Enum representing the learning state of a Card object.
    """

    Learning = 1
    Review = 2
    Relearning = 3


__all__ = ["State"]
54
tests/conftest.py
Normal file
@@ -0,0 +1,54 @@
"""
Pytest shared fixtures for HeurAMS test suite.

Provides:
- timer_config: A ConfigDict with deterministic timer overrides for reproducible tests.
- timer_context: A ConfigContext that applies timer_config for the duration of a test.
- sample_algodata_sm2: A fresh SM-2 algodata dict.
- sample_algodata_nsp0: A fresh NSP-0 algodata dict.
"""

import pathlib
from copy import deepcopy

import pytest

from heurams.context import ConfigContext, config_var, workdir
from heurams.kernel.algorithms import algorithms, nsp0
from heurams.services.config import ConfigDict


@pytest.fixture
def timer_config():
    """A ConfigDict with deterministic timer overrides (daystamp=20000, timestamp=1e9).

    This allows reproducible algorithm tests without relying on wall-clock time.
    """
    # Use the real config path as base, then overlay timer overrides
    config = ConfigDict(workdir / "data" / "config")
    # Override timer values in-place via the nested ConfigDict
    timer_cfg = config["services"]["timer"]
    timer_cfg["daystamp_override"] = 20000
    timer_cfg["timestamp_override"] = 1000000000.0
    return config


@pytest.fixture
def timer_context(timer_config):
    """Context manager fixture that applies the timer overrides."""
    with ConfigContext(timer_config):
        yield


@pytest.fixture
def sample_algodata_sm2():
    """A fresh SM-2 algodata dict (pre-activation)."""
    algo = algorithms["SM-2"]
    return {algo.algo_name: deepcopy(algo.defaults)}


@pytest.fixture
def sample_algodata_nsp0():
    """A fresh NSP-0 algodata dict (pre-activation)."""
    algo = algorithms["NSP-0"]
    return {algo.algo_name: deepcopy(algo.defaults)}
58
tests/test_base_algorithm.py
Normal file
@@ -0,0 +1,58 @@
"""Tests for heurams.kernel.algorithms.base.BaseAlgorithm"""

from copy import deepcopy

import pytest

from heurams.kernel.algorithms import BaseAlgorithm
from heurams.services import timer


class TestBaseAlgorithmDefaults:
    def test_defaults_have_required_keys(self):
        required = {
            "real_rept",
            "rept",
            "interval",
            "last_date",
            "next_date",
            "is_activated",
            "last_modify",
        }
        assert required.issubset(BaseAlgorithm.defaults.keys())

    def test_defaults_last_modify_is_reasonable(self):
        # defaults is evaluated at module import, not test time
        ts = BaseAlgorithm.defaults["last_modify"]
        assert isinstance(ts, float)
        assert ts > 1e9  # reasonable UNIX timestamp


class TestBaseAlgorithmMethods:
    def test_revisor_does_nothing(self):
        d = {"SM-2": {"rept": 0}}
        BaseAlgorithm.revisor(d, feedback=5)
        # Base.revisor is a no-op: the dict is unchanged
        assert d["SM-2"]["rept"] == 0

    def test_is_due_returns_one(self):
        assert BaseAlgorithm.is_due({}) == 1

    def test_get_rating_returns_empty(self):
        assert BaseAlgorithm.get_rating({}) == ""

    def test_nextdate_returns_negative_one(self):
        assert BaseAlgorithm.nextdate({}) == -1


class TestBaseAlgorithmIntegrity:
    def test_check_integrity_valid(self, sample_algodata_sm2):
        # BaseAlgorithm.algo_name is "BaseAlgorithm", not "SM-2"
        data = {"BaseAlgorithm": sample_algodata_sm2["SM-2"]}
        assert BaseAlgorithm.check_integrity(data) == 1

    def test_check_integrity_invalid(self):
        assert BaseAlgorithm.check_integrity({"SM-2": {}}) == 0

    def test_check_integrity_missing_key(self):
        assert BaseAlgorithm.check_integrity({}) == 0
150
tests/test_electron.py
Normal file
@@ -0,0 +1,150 @@
"""Tests for heurams.kernel.particles.electron.Electron"""

from copy import deepcopy

import pytest

from heurams.kernel.algorithms import algorithms
from heurams.kernel.particles.electron import Electron
from heurams.services import timer


class TestElectronInit:
    def test_default_algo_is_sm2(self, timer_context):
        e = Electron("test-id", {})
        assert e.algoname == "SM-2"
        assert e.ident == "test-id"

    def test_specific_algo(self, timer_context):
        e = Electron("test-id", {}, algo_name="NSP-0")
        assert e.algoname == "NSP-0"

    def test_integrity_check_fills_defaults(self, timer_context):
        e = Electron("test-id", {})
        assert "SM-2" in e.algodata
        assert e.algodata["SM-2"]["efactor"] == 2.5

    def test_existing_data_preserved(self, timer_context):
        data = {"SM-2": {"efactor": 1.5, "rept": 3, "real_rept": 5, "interval": 10,
                         "last_date": 100, "next_date": 200, "is_activated": 1,
                         "last_modify": 1e9}}
        e = Electron("test-id", data)
        assert e.algodata["SM-2"]["efactor"] == 1.5
        assert e.algodata["SM-2"]["rept"] == 3


class TestElectronActivation:
    def test_activate_sets_flag(self, timer_context):
        e = Electron("test-id", {})
        assert e.is_activated() == 0
        e.activate()
        assert e.is_activated() == 1

    def test_is_due_requires_activation(self, timer_context):
        e = Electron("test-id", {})
        e.algodata["SM-2"]["next_date"] = 0  # past
        assert e.is_due() == 0  # not activated

        e.activate()
        assert e.is_due() == 1

    def test_is_due_returns_false_when_not_due(self, timer_context):
        e = Electron("test-id", {})
        e.activate()
        e.algodata["SM-2"]["next_date"] = 999999
        assert e.is_due() is False


class TestElectronModify:
    def test_modify_valid_key(self, timer_context):
        e = Electron("test-id", {})
        e.modify("efactor", 3.0)
        assert e.algodata["SM-2"]["efactor"] == 3.0

    def test_modify_invalid_key_raises(self, timer_context):
        e = Electron("test-id", {})
        with pytest.raises(AttributeError):
            e.modify("nonexistent", 42)


class TestElectronRevisor:
    def test_revisor_delegates_to_algo(self, timer_context):
        e = Electron("test-id", {})
        e.activate()
        e.algodata["SM-2"]["next_date"] = 0
        assert e.is_due() == 1

        e.revisor(quality=5)
        # After a good review, the interval grows to at least one day
        assert e.algodata["SM-2"]["interval"] >= 1

    def test_revisor_nsp0(self, timer_context):
        e = Electron("test-id", {}, algo_name="NSP-0")
        e.activate()
        e.algodata["NSP-0"]["next_date"] = 0
        assert e.is_due() == 1

        e.revisor(quality=3)  # bad feedback
        assert e.algodata["NSP-0"]["interval"] == 1


class TestElectronProperties:
    def test_rept(self, timer_context):
        e = Electron("test-id", {})
        assert e.rept() == 0

    def test_rept_real(self, timer_context):
        e = Electron("test-id", {})
        assert e.rept(real_rept=True) == 0

    def test_get_rating(self, timer_context):
        e = Electron("test-id", {})
        rating = e.get_rating()
        assert isinstance(rating, str)

    def test_nextdate_returns_int(self, timer_context):
        e = Electron("test-id", {})
        nd = e.nextdate()
        assert isinstance(nd, int)

    def test_hash(self, timer_context):
        e = Electron("test-id", {})
        assert hash(e) == hash("test-id")

    def test_len(self, timer_context):
        e = Electron("test-id", {})
        assert len(e) == len(algorithms["SM-2"].defaults)


class TestElectronGetSetItem:
    def test_getitem_ident(self, timer_context):
        e = Electron("test-id", {})
        assert e["ident"] == "test-id"

    def test_getitem_algo_key(self, timer_context):
        e = Electron("test-id", {})
        assert e["efactor"] == 2.5

    def test_getitem_missing_key_raises(self, timer_context):
        e = Electron("test-id", {})
        with pytest.raises(KeyError):
            _ = e["nonexistent"]

    def test_setitem_valid_key(self, timer_context):
        e = Electron("test-id", {})
        e["efactor"] = 3.5
        assert e["efactor"] == 3.5

    def test_setitem_ident_raises(self, timer_context):
        e = Electron("test-id", {})
        with pytest.raises(AttributeError):
            e["ident"] = "new-id"


class TestElectronFromData:
    def test_from_data_creates_electron(self, timer_context):
        data = {"SM-2": {}}
        e = Electron.from_data(("my-ident", data), algo_name="SM-2")
        assert e.ident == "my-ident"
        assert e.algoname == "SM-2"
        assert "SM-2" in e.algodata
72
tests/test_epath.py
Normal file
@@ -0,0 +1,72 @@
"""Tests for heurams.services.epath"""

from heurams.services.epath import epath


class TestEpathRead:
    def test_empty_path_returns_self(self):
        d = {"a": 1}
        assert epath(d, "") is d

    def test_simple_key(self):
        d = {"a": 1}
        assert epath(d, "a") == 1

    def test_nested_key(self):
        d = {"a": {"b": {"c": 42}}}
        assert epath(d, "a.b.c") == 42

    def test_missing_key_returns_default(self):
        d = {"a": 1}
        assert epath(d, "b", default=None) is None

    def test_missing_key_no_default(self):
        d = {"a": 1}
        assert epath(d, "b") is None

    def test_list_index_access(self):
        d = {"items": [10, 20, 30]}
        assert epath(d, "items.[1]") == 20

    def test_leading_dot_stripped(self):
        d = {"a": 1}
        assert epath(d, ".a") == 1

    def test_trailing_dot_stripped(self):
        d = {"a": 1}
        assert epath(d, "a.") == 1


class TestEpathParents:
    def test_parents_creates_missing_dict_keys(self):
        d = {}
        result = epath(d, "a.b.c", parents=True, default=None)
        # parents=True creates all intermediate keys including the leaf
        assert result == {}
        assert d == {"a": {"b": {"c": {}}}}


class TestEpathModify:
    def test_modify_dict_key(self):
        d = {"a": 1}
        result = epath(d, "a", enable_modify=True, new_value=99)
        assert result == 99
        assert d["a"] == 99

    def test_modify_nested_key(self):
        d = {"a": {"b": 2}}
        epath(d, "a.b", enable_modify=True, new_value=42)
        assert d["a"]["b"] == 42

    def test_modify_list_index(self):
        d = {"items": [10, 20]}
        epath(d, "items.[0]", enable_modify=True, new_value=99)
        assert d["items"][0] == 99

    def test_modify_list_index_with_parents(self):
        d = {"items": []}
        result = epath(
            d, "items.[3]", enable_modify=True, new_value=42, parents=True
        )
        assert result == 42
        assert d["items"] == [None, None, None, 42]
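The read cases these tests pin down (dot-separated paths, `[i]` list segments, stripped leading/trailing dots, a `default` for misses) can be sketched as a toy resolver. This is a hypothetical reimplementation for illustration only, not the real `heurams.services.epath`, which also supports `parents` and `enable_modify`:

```python
def epath(data, path, default=None):
    """Resolve a dot-path like "a.b.[0].c" against nested dicts/lists.

    Toy resolver covering only the read semantics exercised above.
    """
    node = data
    # strip leading/trailing dots; an empty path yields the object itself
    for seg in filter(None, path.strip(".").split(".")):
        try:
            if seg.startswith("[") and seg.endswith("]"):
                node = node[int(seg[1:-1])]  # "[i]" indexes a list
            else:
                node = node[seg]  # plain segments are dict keys
        except (KeyError, IndexError, TypeError, ValueError):
            return default
    return node
```

Because an empty path falls through the loop untouched, `epath(d, "")` returns the same object, matching `test_empty_path_returns_self`.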
47
tests/test_evalizor.py
Normal file
@@ -0,0 +1,47 @@
"""Tests for heurams.kernel.auxiliary.evalizor.Evalizer"""

from heurams.kernel.auxiliary.evalizor import Evalizer


class TestEvalizer:
    def test_noop_on_plain_string(self):
        e = Evalizer({"x": 42})
        assert e("hello") == "hello"

    def test_eval_expression(self):
        e = Evalizer({"x": 42})
        assert e("eval: x") == 42

    def test_eval_arithmetic(self):
        e = Evalizer({"a": 10, "b": 20})
        assert e("eval: a + b") == 30

    def test_traverses_dict(self):
        e = Evalizer({"val": 99})
        data = {"key_a": "plain", "key_b": "eval: val + 1"}
        result = e(data)
        assert result == {"key_a": "plain", "key_b": 100}

    def test_traverses_list(self):
        e = Evalizer({"val": 5})
        data = ["eval: val", "plain", "eval: val * 2"]
        result = e(data)
        assert result == [5, "plain", 10]

    def test_traverses_nested(self):
        e = Evalizer({"val": 3})
        data = {"outer": {"inner": "eval: val ** 2"}}
        result = e(data)
        assert result == {"outer": {"inner": 9}}

    def test_traverses_tuple(self):
        e = Evalizer({"val": 7})
        data = ("eval: val", "other")
        result = e(data)
        assert result == (7, "other")

    def test_non_string_passthrough(self):
        e = Evalizer({})
        assert e(42) == 42
        assert e(None) is None
        assert e([1, 2, 3]) == [1, 2, 3]
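The contract these tests describe is fully mechanical: strings prefixed with `eval: ` are evaluated against a namespace, containers are traversed recursively, and everything else passes through. A minimal stand-in satisfying that contract might look like this (the real `Evalizer` presumably sandboxes its `eval` more carefully):

```python
class Evalizer:
    """Toy sketch of the Evalizer contract the tests exercise."""

    PREFIX = "eval: "

    def __init__(self, namespace):
        self.namespace = namespace

    def __call__(self, obj):
        if isinstance(obj, str):
            if obj.startswith(self.PREFIX):
                # evaluate the expression with the namespace as locals
                return eval(obj[len(self.PREFIX):], {}, self.namespace)
            return obj  # plain strings pass through untouched
        if isinstance(obj, dict):
            return {k: self(v) for k, v in obj.items()}
        if isinstance(obj, list):
            return [self(v) for v in obj]
        if isinstance(obj, tuple):
            return tuple(self(v) for v in obj)
        return obj  # non-string, non-container values pass through
```

Recursing through dicts, lists, and tuples is what makes `e({"outer": {"inner": "eval: val ** 2"}})` evaluate only the leaves while preserving the container shape.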
25
tests/test_hasher.py
Normal file
@@ -0,0 +1,25 @@
"""Tests for heurams.services.hasher"""

from heurams.services.hasher import get_md5, hash


class TestGetMD5:
    def test_known_value(self):
        # MD5 of "hello" is a known constant
        assert get_md5("hello") == "5d41402abc4b2a76b9719d911017c592"

    def test_empty_string(self):
        assert get_md5("") == "d41d8cd98f00b204e9800998ecf8427e"

    def test_unicode(self):
        result = get_md5("中文测试")
        assert isinstance(result, str)
        assert len(result) == 32

    def test_different_inputs_differ(self):
        assert get_md5("abc") != get_md5("abcd")


class TestHash:
    def test_hash_delegates_to_md5(self):
        assert hash("hello") == get_md5("hello")
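The known-value assertions above imply `get_md5` is a thin wrapper over `hashlib` that encodes its input before digesting. A sketch of an implementation that satisfies them (the real `heurams.services.hasher` may differ in details such as the chosen encoding):

```python
import hashlib


def get_md5(text: str) -> str:
    """UTF-8 encode the string, then return the 32-char hex MD5 digest."""
    return hashlib.md5(text.encode("utf-8")).hexdigest()
```

Encoding with UTF-8 is what makes the Unicode test pass: `hashlib.md5` only accepts bytes, and any string, including "中文测试", still yields a 32-character hex digest.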
198
tests/test_lict.py
Normal file
@@ -0,0 +1,198 @@
|
||||
"""Tests for heurams.kernel.auxiliary.lict.Lict"""
|
||||
|
||||
import pytest
|
||||
|
||||
from heurams.kernel.auxiliary.lict import Lict
|
||||
|
||||
|
||||
class TestLictInit:
|
||||
def test_empty(self):
|
||||
l = Lict()
|
||||
assert len(l) == 0
|
||||
assert list(l) == []
|
||||
|
||||
def test_from_list(self):
|
||||
l = Lict(initlist=[("a", 1), ("b", 2)])
|
||||
assert l["a"] == 1
|
||||
assert l["b"] == 2
|
||||
assert len(l) == 2
|
||||
|
||||
def test_from_dict(self):
|
||||
l = Lict(initdict={"x": 10, "y": 20})
|
||||
assert l["x"] == 10
|
||||
assert l["y"] == 20
|
||||
assert len(l) == 2
|
||||
|
||||
|
||||
class TestLictListInterface:
|
||||
def test_list_getitem(self):
|
||||
l = Lict(initlist=[("a", 1), ("b", 2)])
|
||||
assert l[0] == ("a", 1)
|
||||
assert l[1] == ("b", 2)
|
||||
|
||||
def test_list_setitem(self):
|
||||
l = Lict(initlist=[("a", 1), ("b", 2)])
|
||||
l[0] = ("c", 3)
|
||||
assert l["c"] == 3
|
||||
assert l[0] == ("c", 3)
|
||||
|
||||
def test_list_delitem(self):
|
||||
l = Lict(initlist=[("a", 1), ("b", 2)])
|
||||
del l[0]
|
||||
assert "a" not in l
|
||||
assert len(l) == 1
|
||||
|
||||
def test_append(self):
|
||||
l = Lict()
|
||||
l.append(("k", "v"))
|
||||
assert l["k"] == "v"
|
||||
assert l[0] == ("k", "v")
|
||||
|
||||
def test_insert(self):
|
||||
l = Lict(initlist=[("a", 1), ("c", 3)])
|
||||
l.insert(1, ("b", 2))
|
||||
assert l[1] == ("b", 2)
|
||||
assert l["b"] == 2
|
||||
assert len(l) == 3
|
||||
|
||||
def test_pop(self):
|
||||
l = Lict(initlist=[("a", 1), ("b", 2)])
|
||||
item = l.pop()
|
||||
assert item == ("b", 2)
|
||||
        assert "b" not in l

    def test_remove_by_key(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        l.remove("a")
        assert "a" not in l
        assert len(l) == 1

    def test_remove_by_tuple(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        l.remove(("a", 1))
        assert "a" not in l

    def test_clear(self):
        l = Lict(initlist=[("a", 1)])
        l.clear()
        assert len(l) == 0
        assert list(l) == []


class TestLictDictInterface:
    def test_dict_getitem(self):
        l = Lict(initlist=[("a", 1)])
        assert l["a"] == 1

    def test_dict_setitem(self):
        l = Lict()
        l["k"] = "v"
        assert l["k"] == "v"
        # dict set marks list dirty — sync on access
        assert l[0] == ("k", "v")

    def test_dict_delitem(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        del l["a"]
        assert "a" not in l
        assert len(l) == 1

    def test_keys(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        assert set(l.keys()) == {"a", "b"}

    def test_values(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        assert set(l.values()) == {1, 2}

    def test_items(self):
        l = Lict(initlist=[("a", 1), ("b", 2)])
        assert set(l.items()) == {("a", 1), ("b", 2)}

    def test_get_itemic_unit(self):
        l = Lict(initlist=[("a", 1)])
        assert l.get_itemic_unit("a") == ("a", 1)


class TestLictSync:
    def test_dict_to_list_sync(self):
        """After dict modification, list access triggers sync."""
        l = Lict(initdict={"a": 1})
        assert l[0] == ("a", 1)

    def test_list_to_dict_sync(self):
        """After list modification, dict access triggers sync."""
        l = Lict(initlist=[("a", 1)])
        assert l["a"] == 1

    def test_append_list_maintained(self):
        l = Lict()
        l.append(("x", 100))
        l.append(("y", 200))
        # List order preserved
        assert list(l) == [("x", 100), ("y", 200)]


class TestLictEdgeCases:
    def test_append_non_tuple_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.append("not_a_tuple")  # type: ignore

    def test_append_bad_tuple_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.append((1, 2, 3))  # type: ignore

    def test_contains_by_key(self):
        l = Lict(initlist=[("a", 1)])
        assert "a" in l

    def test_contains_by_value(self):
        l = Lict(initlist=[("a", 1)])
        assert 1 in l

    def test_contains_by_tuple(self):
        l = Lict(initlist=[("a", 1)])
        assert ("a", 1) in l

    def test_not_contains(self):
        l = Lict(initlist=[("a", 1)])
        assert "z" not in l

    def test_forced_order(self):
        l = Lict(initlist=[("b", 2), ("a", 1)], forced_order=True)
        assert l[0] == ("a", 1)
        assert l[1] == ("b", 2)

    def test_append_if_not_exists(self):
        l = Lict()
        l.append_if_it_doesnt_exist_before(("k", "v"))
        assert l["k"] == "v"
        l.append_if_it_doesnt_exist_before(("k", "v"))
        assert len(l) == 1

    def test_keys_equal_with(self):
        a = Lict(initlist=[("x", 1), ("y", 2)])
        b = Lict(initlist=[("y", 3), ("x", 4)])
        assert a.keys_equal_with(b)

    def test_index_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.index()

    def test_extend_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.extend()

    def test_sort_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.sort()

    def test_reverse_raises(self):
        l = Lict()
        with pytest.raises(NotImplementedError):
            l.reverse()
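Taken together, the cases above pin the container contract down fairly tightly. A minimal, eagerly-synced sketch that satisfies them (`MiniLict` is a hypothetical stand-in; it omits the real class's lazy dirty-flag syncing and the `NotImplementedError` guards on `index()`/`extend()`/`sort()`/`reverse()`):

```python
# Hypothetical sketch of the list/dict hybrid the Lict tests exercise;
# not the project's actual implementation.
class MiniLict:
    def __init__(self, initlist=None, initdict=None, forced_order=False):
        pairs = list(initlist) if initlist else list((initdict or {}).items())
        if forced_order:  # keep entries sorted by key instead of insertion order
            pairs.sort(key=lambda kv: kv[0])
        self._pairs = pairs

    def append(self, item):
        if not (isinstance(item, tuple) and len(item) == 2):
            raise NotImplementedError("only (key, value) pairs are stored")
        self._pairs.append(item)

    def __getitem__(self, key):
        if isinstance(key, int):  # list-style access by position
            return self._pairs[key]
        for k, v in self._pairs:  # dict-style access by key
            if k == key:
                return v
        raise KeyError(key)

    def __setitem__(self, key, value):
        for i, (k, _) in enumerate(self._pairs):
            if k == key:  # overwrite in place, preserving position
                self._pairs[i] = (key, value)
                return
        self._pairs.append((key, value))

    def __contains__(self, item):
        # Membership matches a whole pair, a key, or a value
        return item in self._pairs or any(
            item == k or item == v for k, v in self._pairs
        )

    def __len__(self):
        return len(self._pairs)

    def __iter__(self):
        return iter(self._pairs)
```

Eager syncing keeps the sketch short; the production class defers reconciliation until the next access, which is exactly what the TestLictSync cases probe.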
70
tests/test_nsp0.py
Normal file
@@ -0,0 +1,70 @@
"""Tests for heurams.kernel.algorithms.nsp0.NSP0Algorithm"""

from copy import deepcopy

import pytest

from heurams.kernel.algorithms import algorithms
from heurams.services import timer


@pytest.fixture
def algo():
    return algorithms["NSP-0"]


@pytest.fixture
def algodata(sample_algodata_nsp0):
    return sample_algodata_nsp0


class TestNSP0Defaults:
    def test_defaults_have_important(self, algo):
        assert algo.defaults["important"] == 0

    def test_algo_name(self, algo):
        assert algo.algo_name == "NSP-0"


class TestNSP0Revisor:
    def test_negative_one_skip(self, algo, algodata):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=-1)
        assert d == algodata

    def test_feedback_three_or_less_sets_interval_one(self, algo, algodata):
        for fb in (0, 1, 2, 3):
            d = deepcopy(algodata)
            algo.revisor(d, feedback=fb)
            assert d["NSP-0"]["interval"] == 1
            assert d["NSP-0"]["important"] == 1

    def test_feedback_greater_than_three_sets_infinite_interval(self, algo, algodata):
        for fb in (4, 5):
            d = deepcopy(algodata)
            algo.revisor(d, feedback=fb)
            assert d["NSP-0"]["interval"] == float("inf")
            assert d["NSP-0"]["important"] == 0

    def test_revisor_updates_dates(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=3)
        assert d["NSP-0"]["last_date"] == timer.get_daystamp()
        assert d["NSP-0"]["next_date"] == timer.get_daystamp() + 1


class TestNSP0IsDue:
    def test_due_when_past(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        d["NSP-0"]["next_date"] = 100
        assert algo.is_due(d) is True

    def test_not_due_when_future(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        d["NSP-0"]["next_date"] = 999999
        assert algo.is_due(d) is False

    def test_nextdate_returns_stored(self, algo, algodata):
        d = deepcopy(algodata)
        d["NSP-0"]["next_date"] = 42
        assert algo.nextdate(d) == 42
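The cases above fully determine NSP-0's scheduling rule: it is a two-state pass/fail scheduler. A hypothetical free-function sketch consistent with every assertion (plain functions stand in for the real algorithm object, and an explicit `today` argument replaces `timer.get_daystamp()`):

```python
# Hypothetical sketch of the NSP-0 rules the tests above pin down;
# not the shipped heurams.kernel.algorithms.nsp0 module.
KEY = "NSP-0"


def revisor(d: dict, feedback: int, today: int) -> None:
    if feedback == -1:  # -1 means "skip": leave the algodata untouched
        return
    algo = d[KEY]
    if feedback <= 3:  # not yet mastered: see it again tomorrow
        algo["interval"] = 1
        algo["important"] = 1
    else:  # mastered: push the next review out indefinitely
        algo["interval"] = float("inf")
        algo["important"] = 0
    algo["last_date"] = today
    algo["next_date"] = today + algo["interval"]


def is_due(d: dict, today: int) -> bool:
    return d[KEY]["next_date"] <= today
```

The `float("inf")` interval makes `is_due` permanently False without needing a separate "retired" flag.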
123
tests/test_sm2.py
Normal file
@@ -0,0 +1,123 @@
"""Tests for heurams.kernel.algorithms.sm2.SM2Algorithm"""

from copy import deepcopy

import pytest

from heurams.kernel.algorithms import algorithms
from heurams.services import timer


@pytest.fixture
def algo():
    return algorithms["SM-2"]


@pytest.fixture
def algodata(sample_algodata_sm2):
    return sample_algodata_sm2


class TestSM2Defaults:
    def test_defaults_have_efactor(self, algo):
        assert algo.defaults["efactor"] == 2.5

    def test_algo_name(self, algo):
        assert algo.algo_name == "SM-2"


class TestSM2Revisor:
    def test_feedback_negative_one_skips(self, algo, algodata):
        """feedback == -1 should be a no-op."""
        d = deepcopy(algodata)
        algo.revisor(d, feedback=-1)
        assert d == algodata  # unchanged

    def test_good_feedback_increases_efactor(self, algo, algodata):
        d = deepcopy(algodata)
        ef_before = d["SM-2"]["efactor"]
        algo.revisor(d, feedback=5)
        assert d["SM-2"]["efactor"] > ef_before

    def test_bad_feedback_resets_rept(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["rept"] = 5
        algo.revisor(d, feedback=2)
        assert d["SM-2"]["rept"] == 0
        assert d["SM-2"]["interval"] == 1

    def test_efactor_minimum_floor(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["efactor"] = 0.5
        algo.revisor(d, feedback=2)
        assert d["SM-2"]["efactor"] >= 1.3

    def test_rept_increments_on_good_feedback(self, algo, algodata):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=4)
        assert d["SM-2"]["rept"] == 1

    def test_new_activation_resets_state(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["rept"] = 10
        d["SM-2"]["efactor"] = 3.0
        algo.revisor(d, feedback=5, is_new_activation=True)
        assert d["SM-2"]["rept"] == 0
        assert d["SM-2"]["efactor"] == 2.5

    def test_interval_at_rept_zero(self, algo, algodata):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=2)
        assert d["SM-2"]["interval"] == 1

    def test_interval_at_rept_one(self, algo, algodata):
        d = deepcopy(algodata)
        # rept=0 + feedback>=3 -> rept becomes 1 -> interval=6
        algo.revisor(d, feedback=5)
        assert d["SM-2"]["interval"] == 6

    def test_interval_for_rept_gt_one(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["rept"] = 2
        d["SM-2"]["interval"] = 6
        d["SM-2"]["efactor"] = 2.0
        algo.revisor(d, feedback=5)
        # efactor 2.0 + 0.1(feedback=5) = 2.1; interval = round(6 * 2.1) = 13
        assert d["SM-2"]["interval"] == 13

    def test_real_rept_always_increments(self, algo, algodata):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=5)
        assert d["SM-2"]["real_rept"] == 1
        algo.revisor(d, feedback=0)
        assert d["SM-2"]["real_rept"] == 2


class TestSM2DueDate:
    def test_is_due_when_past(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        d["SM-2"]["next_date"] = 100  # far in the past
        assert algo.is_due(d) is True

    def test_not_due_when_future(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        d["SM-2"]["next_date"] = 999999  # far in the future
        assert algo.is_due(d) is False

    def test_nextdate_returns_stored(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["next_date"] = 12345
        assert algo.nextdate(d) == 12345

    def test_revisor_updates_dates(self, algo, algodata, timer_context):
        d = deepcopy(algodata)
        algo.revisor(d, feedback=5)
        assert d["SM-2"]["last_date"] == timer.get_daystamp()
        assert d["SM-2"]["next_date"] > timer.get_daystamp()


class TestSM2Rating:
    def test_get_rating_returns_efactor(self, algo, algodata):
        d = deepcopy(algodata)
        d["SM-2"]["efactor"] = 2.5
        assert algo.get_rating(d) == "2.5"
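Note one deliberate deviation from canonical SM-2 that these tests encode: the first successful repetition jumps straight to a six-day interval instead of one day. A hypothetical free-function sketch consistent with every assertion above (`today` stands in for `timer.get_daystamp()`, and the early return on `is_new_activation` is our guess at how the reset interacts with scheduling):

```python
# Hypothetical sketch of the SM-2 update the tests above describe;
# not the shipped heurams.kernel.algorithms.sm2 module.
KEY = "SM-2"


def revisor(d: dict, feedback: int, today: int, is_new_activation: bool = False) -> None:
    if feedback == -1:  # skip: a no-op by contract
        return
    algo = d[KEY]
    if is_new_activation:  # re-activated item: reset to the defaults
        algo["rept"] = 0
        algo["efactor"] = 2.5
        return
    # Canonical SM-2 e-factor update, clamped at the 1.3 floor
    q = feedback
    ef = algo["efactor"] + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02)
    algo["efactor"] = max(ef, 1.3)
    if feedback < 3:  # failed: restart the repetition schedule
        algo["rept"] = 0
        algo["interval"] = 1
    else:
        algo["rept"] += 1
        # First success jumps straight to six days in this variant
        if algo["rept"] == 1:
            algo["interval"] = 6
        else:
            algo["interval"] = round(algo["interval"] * algo["efactor"])
    algo["real_rept"] += 1  # counts every real review, pass or fail
    algo["last_date"] = today
    algo["next_date"] = today + algo["interval"]
```

Keeping `real_rept` separate from `rept` lets a failure reset the schedule without losing the lifetime review count.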
35
tests/test_textproc.py
Normal file
@@ -0,0 +1,35 @@
"""Tests for heurams.services.textproc"""

from heurams.services.textproc import domize, truncate, undomize


class TestTruncate:
    def test_short_string_unchanged(self):
        assert truncate("ab") == "ab"

    def test_three_char_unchanged(self):
        assert truncate("abc") == "abc"

    def test_longer_string_truncated(self):
        assert truncate("abcd") == "abc>"

    def test_empty_string(self):
        assert truncate("") == ""


class TestDomizeUndomize:
    def test_domize_replaces_dot(self):
        assert domize("a.b.c") == "a--DOT--b--DOT--c"

    def test_domize_no_dot(self):
        assert domize("abc") == "abc"

    def test_undomize_restores_dot(self):
        assert undomize("a--DOT--b") == "a.b"

    def test_undomize_no_marker(self):
        assert undomize("abc") == "abc"

    def test_roundtrip(self):
        original = "config.key.subkey"
        assert undomize(domize(original)) == original
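The three helpers are small enough to reconstruct from the assertions alone. The bodies below are our reimplementation, not the shipped module; the `--DOT--` marker is taken verbatim from the tests, while the guess that `domize` exists to keep dotted keys legal as widget/DOM ids is an assumption:

```python
# Hypothetical reimplementation of the heurams.services.textproc helpers
# covered by the tests above.
DOT_MARKER = "--DOT--"


def truncate(s: str) -> str:
    """Keep at most three characters, flagging any cut with '>'."""
    return s if len(s) <= 3 else s[:3] + ">"


def domize(s: str) -> str:
    """Escape '.' so dotted keys stay usable as identifiers."""
    return s.replace(".", DOT_MARKER)


def undomize(s: str) -> str:
    """Inverse of domize()."""
    return s.replace(DOT_MARKER, ".")
```

The roundtrip test holds as long as the marker itself never appears in raw input.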
278
uv.lock
generated
@@ -84,11 +84,163 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/ae/3a/dbeec9d1ee0844c679f6bb5d6ad4e9f198b1224f4e7a32825f47f6192b0c/cffi-2.0.0-cp314-cp314t-win_arm64.whl", hash = "sha256:0a1527a803f0a659de1af2e1fd700213caba79377e27e4693648c2923da066f9", size = 184195, upload-time = "2025-09-08T23:23:43.004Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "colorama"
|
||||
version = "0.4.6"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/d8/53/6f443c9a4a8358a93a6792e2acffb9d9d5cb0a5cfd8802644b7b1c9a02e4/colorama-0.4.6.tar.gz", hash = "sha256:08695f5cb7ed6e0531a20572697297273c47b8cae5a63ffc6d6ed5c201be6e44", size = 27697, upload-time = "2022-10-25T02:36:22.414Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl", hash = "sha256:4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6", size = 25335, upload-time = "2022-10-25T02:36:20.889Z" },
|
||||
]
|
||||
|
||||
[[package]]
|
||||
name = "coverage"
|
||||
version = "7.13.5"
|
||||
source = { registry = "https://pypi.org/simple" }
|
||||
sdist = { url = "https://files.pythonhosted.org/packages/9d/e0/70553e3000e345daff267cec284ce4cbf3fc141b6da229ac52775b5428f1/coverage-7.13.5.tar.gz", hash = "sha256:c81f6515c4c40141f83f502b07bbfa5c240ba25bbe73da7b33f1e5b6120ff179", size = 915967, upload-time = "2026-03-17T10:33:18.341Z" }
|
||||
wheels = [
|
||||
{ url = "https://files.pythonhosted.org/packages/69/33/e8c48488c29a73fd089f9d71f9653c1be7478f2ad6b5bc870db11a55d23d/coverage-7.13.5-cp310-cp310-macosx_10_9_x86_64.whl", hash = "sha256:e0723d2c96324561b9aa76fb982406e11d93cdb388a7a7da2b16e04719cf7ca5", size = 219255, upload-time = "2026-03-17T10:29:51.081Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/bd/b0ebe9f677d7f4b74a3e115eec7ddd4bcf892074963a00d91e8b164a6386/coverage-7.13.5-cp310-cp310-macosx_11_0_arm64.whl", hash = "sha256:52f444e86475992506b32d4e5ca55c24fc88d73bcbda0e9745095b28ef4dc0cf", size = 219772, upload-time = "2026-03-17T10:29:52.867Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/48/cc/5cb9502f4e01972f54eedd48218bb203fe81e294be606a2bc93970208013/coverage-7.13.5-cp310-cp310-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:704de6328e3d612a8f6c07000a878ff38181ec3263d5a11da1db294fa6a9bdf8", size = 246532, upload-time = "2026-03-17T10:29:54.688Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/d8/3217636d86c7e7b12e126e4f30ef1581047da73140614523af7495ed5f2d/coverage-7.13.5-cp310-cp310-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:a1a6d79a14e1ec1832cabc833898636ad5f3754a678ef8bb4908515208bf84f4", size = 248333, upload-time = "2026-03-17T10:29:56.221Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2b/30/2002ac6729ba2d4357438e2ed3c447ad8562866c8c63fc16f6dfc33afe56/coverage-7.13.5-cp310-cp310-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:79060214983769c7ba3f0cee10b54c97609dca4d478fa1aa32b914480fd5738d", size = 250211, upload-time = "2026-03-17T10:29:57.938Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/6c/85/552496626d6b9359eb0e2f86f920037c9cbfba09b24d914c6e1528155f7d/coverage-7.13.5-cp310-cp310-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:356e76b46783a98c2a2fe81ec79df4883a1e62895ea952968fb253c114e7f930", size = 252125, upload-time = "2026-03-17T10:29:59.388Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/44/21/40256eabdcbccdb6acf6b381b3016a154399a75fe39d406f790ae84d1f3c/coverage-7.13.5-cp310-cp310-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0cef0cdec915d11254a7f549c1170afecce708d30610c6abdded1f74e581666d", size = 247219, upload-time = "2026-03-17T10:30:01.199Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b1/e8/96e2a6c3f21a0ea77d7830b254a1542d0328acc8d7bdf6a284ba7e529f77/coverage-7.13.5-cp310-cp310-musllinux_1_2_aarch64.whl", hash = "sha256:dc022073d063b25a402454e5712ef9e007113e3a676b96c5f29b2bda29352f40", size = 248248, upload-time = "2026-03-17T10:30:03.317Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/ba/8477f549e554827da390ec659f3c38e4b6d95470f4daafc2d8ff94eaa9c2/coverage-7.13.5-cp310-cp310-musllinux_1_2_i686.whl", hash = "sha256:9b74db26dfea4f4e50d48a4602207cd1e78be33182bc9cbf22da94f332f99878", size = 246254, upload-time = "2026-03-17T10:30:04.832Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/55/59/bc22aef0e6aa179d5b1b001e8b3654785e9adf27ef24c93dc4228ebd5d68/coverage-7.13.5-cp310-cp310-musllinux_1_2_ppc64le.whl", hash = "sha256:ad146744ca4fd09b50c482650e3c1b1f4dfa1d4792e0a04a369c7f23336f0400", size = 250067, upload-time = "2026-03-17T10:30:06.535Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/de/1b/c6a023a160806a5137dca53468fd97530d6acad24a22003b1578a9c2e429/coverage-7.13.5-cp310-cp310-musllinux_1_2_riscv64.whl", hash = "sha256:c555b48be1853fe3997c11c4bd521cdd9a9612352de01fa4508f16ec341e6fe0", size = 246521, upload-time = "2026-03-17T10:30:08.486Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/2d/3f/3532c85a55aa2f899fa17c186f831cfa1aa434d88ff792a709636f64130e/coverage-7.13.5-cp310-cp310-musllinux_1_2_x86_64.whl", hash = "sha256:7034b5c56a58ae5e85f23949d52c14aca2cfc6848a31764995b7de88f13a1ea0", size = 247126, upload-time = "2026-03-17T10:30:09.966Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/aa/2e/b9d56af4a24ef45dfbcda88e06870cb7d57b2b0bfa3a888d79b4c8debd76/coverage-7.13.5-cp310-cp310-win32.whl", hash = "sha256:eb7fdf1ef130660e7415e0253a01a7d5a88c9c4d158bcf75cbbd922fd65a5b58", size = 221860, upload-time = "2026-03-17T10:30:11.393Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9f/cc/d938417e7a4d7f0433ad4edee8bb2acdc60dc7ac5af19e2a07a048ecbee3/coverage-7.13.5-cp310-cp310-win_amd64.whl", hash = "sha256:3e1bb5f6c78feeb1be3475789b14a0f0a5b47d505bfc7267126ccbd50289999e", size = 222788, upload-time = "2026-03-17T10:30:12.886Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4b/37/d24c8f8220ff07b839b2c043ea4903a33b0f455abe673ae3c03bbdb7f212/coverage-7.13.5-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:66a80c616f80181f4d643b0f9e709d97bcea413ecd9631e1dedc7401c8e6695d", size = 219381, upload-time = "2026-03-17T10:30:14.68Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/35/8b/cd129b0ca4afe886a6ce9d183c44d8301acbd4ef248622e7c49a23145605/coverage-7.13.5-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:145ede53ccbafb297c1c9287f788d1bc3efd6c900da23bf6931b09eafc931587", size = 219880, upload-time = "2026-03-17T10:30:16.231Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/55/2f/e0e5b237bffdb5d6c530ce87cc1d413a5b7d7dfd60fb067ad6d254c35c76/coverage-7.13.5-cp311-cp311-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:0672854dc733c342fa3e957e0605256d2bf5934feeac328da9e0b5449634a642", size = 250303, upload-time = "2026-03-17T10:30:17.748Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/92/be/b1afb692be85b947f3401375851484496134c5554e67e822c35f28bf2fbc/coverage-7.13.5-cp311-cp311-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:ec10e2a42b41c923c2209b846126c6582db5e43a33157e9870ba9fb70dc7854b", size = 252218, upload-time = "2026-03-17T10:30:19.804Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/da/69/2f47bb6fa1b8d1e3e5d0c4be8ccb4313c63d742476a619418f85740d597b/coverage-7.13.5-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:be3d4bbad9d4b037791794ddeedd7d64a56f5933a2c1373e18e9e568b9141686", size = 254326, upload-time = "2026-03-17T10:30:21.321Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/d0/79db81da58965bd29dabc8f4ad2a2af70611a57cba9d1ec006f072f30a54/coverage-7.13.5-cp311-cp311-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:4d2afbc5cc54d286bfb54541aa50b64cdb07a718227168c87b9e2fb8f25e1743", size = 256267, upload-time = "2026-03-17T10:30:23.094Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/e5/32/d0d7cc8168f91ddab44c0ce4806b969df5f5fdfdbb568eaca2dbc2a04936/coverage-7.13.5-cp311-cp311-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:3ad050321264c49c2fa67bb599100456fc51d004b82534f379d16445da40fb75", size = 250430, upload-time = "2026-03-17T10:30:25.311Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4d/06/a055311d891ddbe231cd69fdd20ea4be6e3603ffebddf8704b8ca8e10a3c/coverage-7.13.5-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:7300c8a6d13335b29bb76d7651c66af6bd8658517c43499f110ddc6717bfc209", size = 252017, upload-time = "2026-03-17T10:30:27.284Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d6/f6/d0fd2d21e29a657b5f77a2fe7082e1568158340dceb941954f776dce1b7b/coverage-7.13.5-cp311-cp311-musllinux_1_2_i686.whl", hash = "sha256:eb07647a5738b89baab047f14edd18ded523de60f3b30e75c2acc826f79c839a", size = 250080, upload-time = "2026-03-17T10:30:29.481Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4e/ab/0d7fb2efc2e9a5eb7ddcc6e722f834a69b454b7e6e5888c3a8567ecffb31/coverage-7.13.5-cp311-cp311-musllinux_1_2_ppc64le.whl", hash = "sha256:9adb6688e3b53adffefd4a52d72cbd8b02602bfb8f74dcd862337182fd4d1a4e", size = 253843, upload-time = "2026-03-17T10:30:31.301Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ba/6f/7467b917bbf5408610178f62a49c0ed4377bb16c1657f689cc61470da8ce/coverage-7.13.5-cp311-cp311-musllinux_1_2_riscv64.whl", hash = "sha256:7c8d4bc913dd70b93488d6c496c77f3aff5ea99a07e36a18f865bca55adef8bd", size = 249802, upload-time = "2026-03-17T10:30:33.358Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/75/2c/1172fb689df92135f5bfbbd69fc83017a76d24ea2e2f3a1154007e2fb9f8/coverage-7.13.5-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:0e3c426ffc4cd952f54ee9ffbdd10345709ecc78a3ecfd796a57236bfad0b9b8", size = 250707, upload-time = "2026-03-17T10:30:35.2Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/67/21/9ac389377380a07884e3b48ba7a620fcd9dbfaf1d40565facdc6b36ec9ef/coverage-7.13.5-cp311-cp311-win32.whl", hash = "sha256:259b69bb83ad9894c4b25be2528139eecba9a82646ebdda2d9db1ba28424a6bf", size = 221880, upload-time = "2026-03-17T10:30:36.775Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/af/7f/4cd8a92531253f9d7c1bbecd9fa1b472907fb54446ca768c59b531248dc5/coverage-7.13.5-cp311-cp311-win_amd64.whl", hash = "sha256:258354455f4e86e3e9d0d17571d522e13b4e1e19bf0f8596bcf9476d61e7d8a9", size = 222816, upload-time = "2026-03-17T10:30:38.891Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/12/a6/1d3f6155fb0010ca68eba7fe48ca6c9da7385058b77a95848710ecf189b1/coverage-7.13.5-cp311-cp311-win_arm64.whl", hash = "sha256:bff95879c33ec8da99fc9b6fe345ddb5be6414b41d6d1ad1c8f188d26f36e028", size = 221483, upload-time = "2026-03-17T10:30:40.463Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/c3/a396306ba7db865bf96fc1fb3b7fd29bcbf3d829df642e77b13555163cd6/coverage-7.13.5-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:460cf0114c5016fa841214ff5564aa4864f11948da9440bc97e21ad1f4ba1e01", size = 219554, upload-time = "2026-03-17T10:30:42.208Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a6/16/a68a19e5384e93f811dccc51034b1fd0b865841c390e3c931dcc4699e035/coverage-7.13.5-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:0e223ce4b4ed47f065bfb123687686512e37629be25cc63728557ae7db261422", size = 219908, upload-time = "2026-03-17T10:30:43.906Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/29/72/20b917c6793af3a5ceb7fb9c50033f3ec7865f2911a1416b34a7cfa0813b/coverage-7.13.5-cp312-cp312-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:6e3370441f4513c6252bf042b9c36d22491142385049243253c7e48398a15a9f", size = 251419, upload-time = "2026-03-17T10:30:45.545Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/8c/49/cd14b789536ac6a4778c453c6a2338bc0a2fb60c5a5a41b4008328b9acc1/coverage-7.13.5-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:03ccc709a17a1de074fb1d11f217342fb0d2b1582ed544f554fc9fc3f07e95f5", size = 254159, upload-time = "2026-03-17T10:30:47.204Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/9d/00/7b0edcfe64e2ed4c0340dac14a52ad0f4c9bd0b8b5e531af7d55b703db7c/coverage-7.13.5-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:3f4818d065964db3c1c66dc0fbdac5ac692ecbc875555e13374fdbe7eedb4376", size = 255270, upload-time = "2026-03-17T10:30:48.812Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/93/89/7ffc4ba0f5d0a55c1e84ea7cee39c9fc06af7b170513d83fbf3bbefce280/coverage-7.13.5-cp312-cp312-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:012d5319e66e9d5a218834642d6c35d265515a62f01157a45bcc036ecf947256", size = 257538, upload-time = "2026-03-17T10:30:50.77Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/81/bd/73ddf85f93f7e6fa83e77ccecb6162d9415c79007b4bc124008a4995e4a7/coverage-7.13.5-cp312-cp312-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:8dd02af98971bdb956363e4827d34425cb3df19ee550ef92855b0acb9c7ce51c", size = 251821, upload-time = "2026-03-17T10:30:52.5Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a0/81/278aff4e8dec4926a0bcb9486320752811f543a3ce5b602cc7a29978d073/coverage-7.13.5-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:f08fd75c50a760c7eb068ae823777268daaf16a80b918fa58eea888f8e3919f5", size = 253191, upload-time = "2026-03-17T10:30:54.543Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/70/ee/fe1621488e2e0a58d7e94c4800f0d96f79671553488d401a612bebae324b/coverage-7.13.5-cp312-cp312-musllinux_1_2_i686.whl", hash = "sha256:843ea8643cf967d1ac7e8ecd4bb00c99135adf4816c0c0593fdcc47b597fcf09", size = 251337, upload-time = "2026-03-17T10:30:56.663Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/37/a6/f79fb37aa104b562207cc23cb5711ab6793608e246cae1e93f26b2236ed9/coverage-7.13.5-cp312-cp312-musllinux_1_2_ppc64le.whl", hash = "sha256:9d44d7aa963820b1b971dbecd90bfe5fe8f81cff79787eb6cca15750bd2f79b9", size = 255404, upload-time = "2026-03-17T10:30:58.427Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/75/f0/ed15262a58ec81ce457ceb717b7f78752a1713556b19081b76e90896e8d4/coverage-7.13.5-cp312-cp312-musllinux_1_2_riscv64.whl", hash = "sha256:7132bed4bd7b836200c591410ae7d97bf7ae8be6fc87d160b2bd881df929e7bf", size = 250903, upload-time = "2026-03-17T10:31:00.093Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0f/e9/9129958f20e7e9d4d56d51d42ccf708d15cac355ff4ac6e736e97a9393d2/coverage-7.13.5-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:a698e363641b98843c517817db75373c83254781426e94ada3197cabbc2c919c", size = 252780, upload-time = "2026-03-17T10:31:01.916Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a4/d7/0ad9b15812d81272db94379fe4c6df8fd17781cc7671fdfa30c76ba5ff7b/coverage-7.13.5-cp312-cp312-win32.whl", hash = "sha256:bdba0a6b8812e8c7df002d908a9a2ea3c36e92611b5708633c50869e6d922fdf", size = 222093, upload-time = "2026-03-17T10:31:03.642Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/29/3d/821a9a5799fac2556bcf0bd37a70d1d11fa9e49784b6d22e92e8b2f85f18/coverage-7.13.5-cp312-cp312-win_amd64.whl", hash = "sha256:d2c87e0c473a10bffe991502eac389220533024c8082ec1ce849f4218dded810", size = 222900, upload-time = "2026-03-17T10:31:05.651Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d4/fa/2238c2ad08e35cf4f020ea721f717e09ec3152aea75d191a7faf3ef009a8/coverage-7.13.5-cp312-cp312-win_arm64.whl", hash = "sha256:bf69236a9a81bdca3bff53796237aab096cdbf8d78a66ad61e992d9dac7eb2de", size = 221515, upload-time = "2026-03-17T10:31:07.293Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/74/8c/74fedc9663dcf168b0a059d4ea756ecae4da77a489048f94b5f512a8d0b3/coverage-7.13.5-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:5ec4af212df513e399cf11610cc27063f1586419e814755ab362e50a85ea69c1", size = 219576, upload-time = "2026-03-17T10:31:09.045Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/0c/c9/44fb661c55062f0818a6ffd2685c67aa30816200d5f2817543717d4b92eb/coverage-7.13.5-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:941617e518602e2d64942c88ec8499f7fbd49d3f6c4327d3a71d43a1973032f3", size = 219942, upload-time = "2026-03-17T10:31:10.708Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5f/13/93419671cee82b780bab7ea96b67c8ef448f5f295f36bf5031154ec9a790/coverage-7.13.5-cp313-cp313-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:da305e9937617ee95c2e39d8ff9f040e0487cbf1ac174f777ed5eddd7a7c1f26", size = 250935, upload-time = "2026-03-17T10:31:12.392Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/ac/68/1666e3a4462f8202d836920114fa7a5ee9275d1fa45366d336c551a162dd/coverage-7.13.5-cp313-cp313-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:78e696e1cc714e57e8b25760b33a8b1026b7048d270140d25dafe1b0a1ee05a3", size = 253541, upload-time = "2026-03-17T10:31:14.247Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/4e/5e/3ee3b835647be646dcf3c65a7c6c18f87c27326a858f72ab22c12730773d/coverage-7.13.5-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:02ca0eed225b2ff301c474aeeeae27d26e2537942aa0f87491d3e147e784a82b", size = 254780, upload-time = "2026-03-17T10:31:16.193Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/44/b3/cb5bd1a04cfcc49ede6cd8409d80bee17661167686741e041abc7ee1b9a9/coverage-7.13.5-cp313-cp313-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:04690832cbea4e4663d9149e05dba142546ca05cb1848816760e7f58285c970a", size = 256912, upload-time = "2026-03-17T10:31:17.89Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/1b/66/c1dceb7b9714473800b075f5c8a84f4588f887a90eb8645282031676e242/coverage-7.13.5-cp313-cp313-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:0590e44dd2745c696a778f7bab6aa95256de2cbc8b8cff4f7db8ff09813d6969", size = 251165, upload-time = "2026-03-17T10:31:19.605Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/b7/62/5502b73b97aa2e53ea22a39cf8649ff44827bef76d90bf638777daa27a9d/coverage-7.13.5-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d7cfad2d6d81dd298ab6b89fe72c3b7b05ec7544bdda3b707ddaecff8d25c161", size = 252908, upload-time = "2026-03-17T10:31:21.312Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/37/7792c2d69854397ca77a55c4646e5897c467928b0e27f2d235d83b5d08c6/coverage-7.13.5-cp313-cp313-musllinux_1_2_i686.whl", hash = "sha256:e092b9499de38ae0fbfbc603a74660eb6ff3e869e507b50d85a13b6db9863e15", size = 250873, upload-time = "2026-03-17T10:31:23.565Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/a3/23/bc866fb6163be52a8a9e5d708ba0d3b1283c12158cefca0a8bbb6e247a43/coverage-7.13.5-cp313-cp313-musllinux_1_2_ppc64le.whl", hash = "sha256:48c39bc4a04d983a54a705a6389512883d4a3b9862991b3617d547940e9f52b1", size = 255030, upload-time = "2026-03-17T10:31:25.58Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7d/8b/ef67e1c222ef49860701d346b8bbb70881bef283bd5f6cbba68a39a086c7/coverage-7.13.5-cp313-cp313-musllinux_1_2_riscv64.whl", hash = "sha256:2d3807015f138ffea1ed9afeeb8624fd781703f2858b62a8dd8da5a0994c57b6", size = 250694, upload-time = "2026-03-17T10:31:27.316Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/46/0d/866d1f74f0acddbb906db212e096dee77a8e2158ca5e6bb44729f9d93298/coverage-7.13.5-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:ee2aa19e03161671ec964004fb74b2257805d9710bf14a5c704558b9d8dbaf17", size = 252469, upload-time = "2026-03-17T10:31:29.472Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/7a/f5/be742fec31118f02ce42b21c6af187ad6a344fed546b56ca60caacc6a9a0/coverage-7.13.5-cp313-cp313-win32.whl", hash = "sha256:ce1998c0483007608c8382f4ff50164bfc5bd07a2246dd272aa4043b75e61e85", size = 222112, upload-time = "2026-03-17T10:31:31.526Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/66/40/7732d648ab9d069a46e686043241f01206348e2bbf128daea85be4d6414b/coverage-7.13.5-cp313-cp313-win_amd64.whl", hash = "sha256:631efb83f01569670a5e866ceb80fe483e7c159fac6f167e6571522636104a0b", size = 222923, upload-time = "2026-03-17T10:31:33.633Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/48/af/fea819c12a095781f6ccd504890aaddaf88b8fab263c4940e82c7b770124/coverage-7.13.5-cp313-cp313-win_arm64.whl", hash = "sha256:f4cd16206ad171cbc2470dbea9103cf9a7607d5fe8c242fdf1edf36174020664", size = 221540, upload-time = "2026-03-17T10:31:35.445Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/23/d2/17879af479df7fbbd44bd528a31692a48f6b25055d16482fdf5cdb633805/coverage-7.13.5-cp313-cp313t-macosx_10_13_x86_64.whl", hash = "sha256:0428cbef5783ad91fe240f673cc1f76b25e74bbfe1a13115e4aa30d3f538162d", size = 220262, upload-time = "2026-03-17T10:31:37.184Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/5b/4c/d20e554f988c8f91d6a02c5118f9abbbf73a8768a3048cb4962230d5743f/coverage-7.13.5-cp313-cp313t-macosx_11_0_arm64.whl", hash = "sha256:e0b216a19534b2427cc201a26c25da4a48633f29a487c61258643e89d28200c0", size = 220617, upload-time = "2026-03-17T10:31:39.245Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/29/9c/f9f5277b95184f764b24e7231e166dfdb5780a46d408a2ac665969416d61/coverage-7.13.5-cp313-cp313t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:972a9cd27894afe4bc2b1480107054e062df08e671df7c2f18c205e805ccd806", size = 261912, upload-time = "2026-03-17T10:31:41.324Z" },
|
||||
{ url = "https://files.pythonhosted.org/packages/d5/f6/7f1ab39393eeb50cfe4747ae8ef0e4fc564b989225aa1152e13a180d74f8/coverage-7.13.5-cp313-cp313t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:4b59148601efcd2bac8c4dbf1f0ad6391693ccf7a74b8205781751637076aee3", size = 263987, upload-time = "2026-03-17T10:31:43.724Z" },
{ url = "https://files.pythonhosted.org/packages/a0/d7/62c084fb489ed9c6fbdf57e006752e7c516ea46fd690e5ed8b8617c7d52e/coverage-7.13.5-cp313-cp313t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:505d7083c8b0c87a8fa8c07370c285847c1f77739b22e299ad75a6af6c32c5c9", size = 266416, upload-time = "2026-03-17T10:31:45.769Z" },
{ url = "https://files.pythonhosted.org/packages/a9/f6/df63d8660e1a0bff6125947afda112a0502736f470d62ca68b288ea762d8/coverage-7.13.5-cp313-cp313t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:60365289c3741e4db327e7baff2a4aaacf22f788e80fa4683393891b70a89fbd", size = 267558, upload-time = "2026-03-17T10:31:48.293Z" },
{ url = "https://files.pythonhosted.org/packages/5b/02/353ca81d36779bd108f6d384425f7139ac3c58c750dcfaafe5d0bee6436b/coverage-7.13.5-cp313-cp313t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:1b88c69c8ef5d4b6fe7dea66d6636056a0f6a7527c440e890cf9259011f5e606", size = 261163, upload-time = "2026-03-17T10:31:50.125Z" },
{ url = "https://files.pythonhosted.org/packages/2c/16/2e79106d5749bcaf3aee6d309123548e3276517cd7851faa8da213bc61bf/coverage-7.13.5-cp313-cp313t-musllinux_1_2_aarch64.whl", hash = "sha256:5b13955d31d1633cf9376908089b7cebe7d15ddad7aeaabcbe969a595a97e95e", size = 263981, upload-time = "2026-03-17T10:31:51.961Z" },
{ url = "https://files.pythonhosted.org/packages/29/c7/c29e0c59ffa6942030ae6f50b88ae49988e7e8da06de7ecdbf49c6d4feae/coverage-7.13.5-cp313-cp313t-musllinux_1_2_i686.whl", hash = "sha256:f70c9ab2595c56f81a89620e22899eea8b212a4041bd728ac6f4a28bf5d3ddd0", size = 261604, upload-time = "2026-03-17T10:31:53.872Z" },
{ url = "https://files.pythonhosted.org/packages/40/48/097cdc3db342f34006a308ab41c3a7c11c3f0d84750d340f45d88a782e00/coverage-7.13.5-cp313-cp313t-musllinux_1_2_ppc64le.whl", hash = "sha256:084b84a8c63e8d6fc7e3931b316a9bcafca1458d753c539db82d31ed20091a87", size = 265321, upload-time = "2026-03-17T10:31:55.997Z" },
{ url = "https://files.pythonhosted.org/packages/bb/1f/4994af354689e14fd03a75f8ec85a9a68d94e0188bbdab3fc1516b55e512/coverage-7.13.5-cp313-cp313t-musllinux_1_2_riscv64.whl", hash = "sha256:ad14385487393e386e2ea988b09d62dd42c397662ac2dabc3832d71253eee479", size = 260502, upload-time = "2026-03-17T10:31:58.308Z" },
{ url = "https://files.pythonhosted.org/packages/22/c6/9bb9ef55903e628033560885f5c31aa227e46878118b63ab15dc7ba87797/coverage-7.13.5-cp313-cp313t-musllinux_1_2_x86_64.whl", hash = "sha256:7f2c47b36fe7709a6e83bfadf4eefb90bd25fbe4014d715224c4316f808e59a2", size = 262688, upload-time = "2026-03-17T10:32:00.141Z" },
{ url = "https://files.pythonhosted.org/packages/14/4f/f5df9007e50b15e53e01edea486814783a7f019893733d9e4d6caad75557/coverage-7.13.5-cp313-cp313t-win32.whl", hash = "sha256:67e9bc5449801fad0e5dff329499fb090ba4c5800b86805c80617b4e29809b2a", size = 222788, upload-time = "2026-03-17T10:32:02.246Z" },
{ url = "https://files.pythonhosted.org/packages/e1/98/aa7fccaa97d0f3192bec013c4e6fd6d294a6ed44b640e6bb61f479e00ed5/coverage-7.13.5-cp313-cp313t-win_amd64.whl", hash = "sha256:da86cdcf10d2519e10cabb8ac2de03da1bcb6e4853790b7fbd48523332e3a819", size = 223851, upload-time = "2026-03-17T10:32:04.416Z" },
{ url = "https://files.pythonhosted.org/packages/3d/8b/e5c469f7352651e5f013198e9e21f97510b23de957dd06a84071683b4b60/coverage-7.13.5-cp313-cp313t-win_arm64.whl", hash = "sha256:0ecf12ecb326fe2c339d93fc131816f3a7367d223db37817208905c89bded911", size = 222104, upload-time = "2026-03-17T10:32:06.65Z" },
{ url = "https://files.pythonhosted.org/packages/8e/77/39703f0d1d4b478bfd30191d3c14f53caf596fac00efb3f8f6ee23646439/coverage-7.13.5-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:fbabfaceaeb587e16f7008f7795cd80d20ec548dc7f94fbb0d4ec2e038ce563f", size = 219621, upload-time = "2026-03-17T10:32:08.589Z" },
{ url = "https://files.pythonhosted.org/packages/e2/3e/51dff36d99ae14639a133d9b164d63e628532e2974d8b1edb99dd1ebc733/coverage-7.13.5-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:9bb2a28101a443669a423b665939381084412b81c3f8c0fcfbac57f4e30b5b8e", size = 219953, upload-time = "2026-03-17T10:32:10.507Z" },
{ url = "https://files.pythonhosted.org/packages/6a/6c/1f1917b01eb647c2f2adc9962bd66c79eb978951cab61bdc1acab3290c07/coverage-7.13.5-cp314-cp314-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:bd3a2fbc1c6cccb3c5106140d87cc6a8715110373ef42b63cf5aea29df8c217a", size = 250992, upload-time = "2026-03-17T10:32:12.41Z" },
{ url = "https://files.pythonhosted.org/packages/22/e5/06b1f88f42a5a99df42ce61208bdec3bddb3d261412874280a19796fc09c/coverage-7.13.5-cp314-cp314-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:6c36ddb64ed9d7e496028d1d00dfec3e428e0aabf4006583bb1839958d280510", size = 253503, upload-time = "2026-03-17T10:32:14.449Z" },
{ url = "https://files.pythonhosted.org/packages/80/28/2a148a51e5907e504fa7b85490277734e6771d8844ebcc48764a15e28155/coverage-7.13.5-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:380e8e9084d8eb38db3a9176a1a4f3c0082c3806fa0dc882d1d87abc3c789247", size = 254852, upload-time = "2026-03-17T10:32:16.56Z" },
{ url = "https://files.pythonhosted.org/packages/61/77/50e8d3d85cc0b7ebe09f30f151d670e302c7ff4a1bf6243f71dd8b0981fa/coverage-7.13.5-cp314-cp314-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:e808af52a0513762df4d945ea164a24b37f2f518cbe97e03deaa0ee66139b4d6", size = 257161, upload-time = "2026-03-17T10:32:19.004Z" },
{ url = "https://files.pythonhosted.org/packages/3b/c4/b5fd1d4b7bf8d0e75d997afd3925c59ba629fc8616f1b3aae7605132e256/coverage-7.13.5-cp314-cp314-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:e301d30dd7e95ae068671d746ba8c34e945a82682e62918e41b2679acd2051a0", size = 251021, upload-time = "2026-03-17T10:32:21.344Z" },
{ url = "https://files.pythonhosted.org/packages/f8/66/6ea21f910e92d69ef0b1c3346ea5922a51bad4446c9126db2ae96ee24c4c/coverage-7.13.5-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:800bc829053c80d240a687ceeb927a94fd108bbdc68dfbe505d0d75ab578a882", size = 252858, upload-time = "2026-03-17T10:32:23.506Z" },
{ url = "https://files.pythonhosted.org/packages/9e/ea/879c83cb5d61aa2a35fb80e72715e92672daef8191b84911a643f533840c/coverage-7.13.5-cp314-cp314-musllinux_1_2_i686.whl", hash = "sha256:0b67af5492adb31940ee418a5a655c28e48165da5afab8c7fa6fd72a142f8740", size = 250823, upload-time = "2026-03-17T10:32:25.516Z" },
{ url = "https://files.pythonhosted.org/packages/8a/fb/616d95d3adb88b9803b275580bdeee8bd1b69a886d057652521f83d7322f/coverage-7.13.5-cp314-cp314-musllinux_1_2_ppc64le.whl", hash = "sha256:c9136ff29c3a91e25b1d1552b5308e53a1e0653a23e53b6366d7c2dcbbaf8a16", size = 255099, upload-time = "2026-03-17T10:32:27.944Z" },
{ url = "https://files.pythonhosted.org/packages/1c/93/25e6917c90ec1c9a56b0b26f6cad6408e5f13bb6b35d484a0d75c9cf000d/coverage-7.13.5-cp314-cp314-musllinux_1_2_riscv64.whl", hash = "sha256:cff784eef7f0b8f6cb28804fbddcfa99f89efe4cc35fb5627e3ac58f91ed3ac0", size = 250638, upload-time = "2026-03-17T10:32:29.914Z" },
{ url = "https://files.pythonhosted.org/packages/fc/7b/dc1776b0464145a929deed214aef9fb1493f159b59ff3c7eeeedf91eddd0/coverage-7.13.5-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:68a4953be99b17ac3c23b6efbc8a38330d99680c9458927491d18700ef23ded0", size = 252295, upload-time = "2026-03-17T10:32:31.981Z" },
{ url = "https://files.pythonhosted.org/packages/ea/fb/99cbbc56a26e07762a2740713f3c8f9f3f3106e3a3dd8cc4474954bccd34/coverage-7.13.5-cp314-cp314-win32.whl", hash = "sha256:35a31f2b1578185fbe6aa2e74cea1b1d0bbf4c552774247d9160d29b80ed56cc", size = 222360, upload-time = "2026-03-17T10:32:34.233Z" },
{ url = "https://files.pythonhosted.org/packages/8d/b7/4758d4f73fb536347cc5e4ad63662f9d60ba9118cb6785e9616b2ce5d7fa/coverage-7.13.5-cp314-cp314-win_amd64.whl", hash = "sha256:2aa055ae1857258f9e0045be26a6d62bdb47a72448b62d7b55f4820f361a2633", size = 223174, upload-time = "2026-03-17T10:32:36.369Z" },
{ url = "https://files.pythonhosted.org/packages/2c/f2/24d84e1dfe70f8ac9fdf30d338239860d0d1d5da0bda528959d0ebc9da28/coverage-7.13.5-cp314-cp314-win_arm64.whl", hash = "sha256:1b11eef33edeae9d142f9b4358edb76273b3bfd30bc3df9a4f95d0e49caf94e8", size = 221739, upload-time = "2026-03-17T10:32:38.736Z" },
{ url = "https://files.pythonhosted.org/packages/60/5b/4a168591057b3668c2428bff25dd3ebc21b629d666d90bcdfa0217940e84/coverage-7.13.5-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:10a0c37f0b646eaff7cce1874c31d1f1ccb297688d4c747291f4f4c70741cc8b", size = 220351, upload-time = "2026-03-17T10:32:41.196Z" },
{ url = "https://files.pythonhosted.org/packages/f5/21/1fd5c4dbfe4a58b6b99649125635df46decdfd4a784c3cd6d410d303e370/coverage-7.13.5-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:b5db73ba3c41c7008037fa731ad5459fc3944cb7452fc0aa9f822ad3533c583c", size = 220612, upload-time = "2026-03-17T10:32:43.204Z" },
{ url = "https://files.pythonhosted.org/packages/d6/fe/2a924b3055a5e7e4512655a9d4609781b0d62334fa0140c3e742926834e2/coverage-7.13.5-cp314-cp314t-manylinux1_i686.manylinux_2_28_i686.manylinux_2_5_i686.whl", hash = "sha256:750db93a81e3e5a9831b534be7b1229df848b2e125a604fe6651e48aa070e5f9", size = 261985, upload-time = "2026-03-17T10:32:45.514Z" },
{ url = "https://files.pythonhosted.org/packages/d7/0d/c8928f2bd518c45990fe1a2ab8db42e914ef9b726c975facc4282578c3eb/coverage-7.13.5-cp314-cp314t-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:9ddb4f4a5479f2539644be484da179b653273bca1a323947d48ab107b3ed1f29", size = 264107, upload-time = "2026-03-17T10:32:47.971Z" },
{ url = "https://files.pythonhosted.org/packages/ef/ae/4ae35bbd9a0af9d820362751f0766582833c211224b38665c0f8de3d487f/coverage-7.13.5-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d8a7a2049c14f413163e2bdabd37e41179b1d1ccb10ffc6ccc4b7a718429c607", size = 266513, upload-time = "2026-03-17T10:32:50.1Z" },
{ url = "https://files.pythonhosted.org/packages/9c/20/d326174c55af36f74eac6ae781612d9492f060ce8244b570bb9d50d9d609/coverage-7.13.5-cp314-cp314t-manylinux2014_ppc64le.manylinux_2_17_ppc64le.manylinux_2_28_ppc64le.whl", hash = "sha256:e1c85e0b6c05c592ea6d8768a66a254bfb3874b53774b12d4c89c481eb78cb90", size = 267650, upload-time = "2026-03-17T10:32:52.391Z" },
{ url = "https://files.pythonhosted.org/packages/7a/5e/31484d62cbd0eabd3412e30d74386ece4a0837d4f6c3040a653878bfc019/coverage-7.13.5-cp314-cp314t-manylinux_2_31_riscv64.manylinux_2_39_riscv64.whl", hash = "sha256:777c4d1eff1b67876139d24288aaf1817f6c03d6bae9c5cc8d27b83bcfe38fe3", size = 261089, upload-time = "2026-03-17T10:32:54.544Z" },
{ url = "https://files.pythonhosted.org/packages/e9/d8/49a72d6de146eebb0b7e48cc0f4bc2c0dd858e3d4790ab2b39a2872b62bd/coverage-7.13.5-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:6697e29b93707167687543480a40f0db8f356e86d9f67ddf2e37e2dfd91a9dab", size = 263982, upload-time = "2026-03-17T10:32:56.803Z" },
{ url = "https://files.pythonhosted.org/packages/06/3b/0351f1bd566e6e4dd39e978efe7958bde1d32f879e85589de147654f57bb/coverage-7.13.5-cp314-cp314t-musllinux_1_2_i686.whl", hash = "sha256:8fdf453a942c3e4d99bd80088141c4c6960bb232c409d9c3558e2dbaa3998562", size = 261579, upload-time = "2026-03-17T10:32:59.466Z" },
{ url = "https://files.pythonhosted.org/packages/5d/ce/796a2a2f4017f554d7810f5c573449b35b1e46788424a548d4d19201b222/coverage-7.13.5-cp314-cp314t-musllinux_1_2_ppc64le.whl", hash = "sha256:32ca0c0114c9834a43f045a87dcebd69d108d8ffb666957ea65aa132f50332e2", size = 265316, upload-time = "2026-03-17T10:33:01.847Z" },
{ url = "https://files.pythonhosted.org/packages/3d/16/d5ae91455541d1a78bc90abf495be600588aff8f6db5c8b0dae739fa39c9/coverage-7.13.5-cp314-cp314t-musllinux_1_2_riscv64.whl", hash = "sha256:8769751c10f339021e2638cd354e13adeac54004d1941119b2c96fe5276d45ea", size = 260427, upload-time = "2026-03-17T10:33:03.945Z" },
{ url = "https://files.pythonhosted.org/packages/48/11/07f413dba62db21fb3fad5d0de013a50e073cc4e2dc4306e770360f6dfc8/coverage-7.13.5-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:cec2d83125531bd153175354055cdb7a09987af08a9430bd173c937c6d0fba2a", size = 262745, upload-time = "2026-03-17T10:33:06.285Z" },
{ url = "https://files.pythonhosted.org/packages/91/15/d792371332eb4663115becf4bad47e047d16234b1aff687b1b18c58d60ae/coverage-7.13.5-cp314-cp314t-win32.whl", hash = "sha256:0cd9ed7a8b181775459296e402ca4fb27db1279740a24e93b3b41942ebe4b215", size = 223146, upload-time = "2026-03-17T10:33:08.756Z" },
{ url = "https://files.pythonhosted.org/packages/db/51/37221f59a111dca5e85be7dbf09696323b5b9f13ff65e0641d535ed06ea8/coverage-7.13.5-cp314-cp314t-win_amd64.whl", hash = "sha256:301e3b7dfefecaca37c9f1aa6f0049b7d4ab8dd933742b607765d757aca77d43", size = 224254, upload-time = "2026-03-17T10:33:11.174Z" },
{ url = "https://files.pythonhosted.org/packages/54/83/6acacc889de8987441aa7d5adfbdbf33d288dad28704a67e574f1df9bcbb/coverage-7.13.5-cp314-cp314t-win_arm64.whl", hash = "sha256:9dacc2ad679b292709e0f5fc1ac74a6d4d5562e424058962c7bb0c658ad25e45", size = 222276, upload-time = "2026-03-17T10:33:13.466Z" },
{ url = "https://files.pythonhosted.org/packages/9e/ee/a4cf96b8ce1e566ed238f0659ac2d3f007ed1d14b181bcb684e19561a69a/coverage-7.13.5-py3-none-any.whl", hash = "sha256:34b02417cf070e173989b3db962f7ed56d2f644307b2cf9d5a0f258e13084a61", size = 211346, upload-time = "2026-03-17T10:33:15.691Z" },
]

[package.optional-dependencies]
toml = [
{ name = "tomli", marker = "python_full_version <= '3.11'" },
]

[[package]]
name = "exceptiongroup"
version = "1.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions", marker = "python_full_version < '3.13'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/50/79/66800aadf48771f6b62f7eb014e352e5d06856655206165d775e675a02c9/exceptiongroup-1.3.1.tar.gz", hash = "sha256:8b412432c6055b0b7d14c310000ae93352ed6754f70fa8f7c34141f91c4e3219", size = 30371, upload-time = "2025-11-21T23:01:54.787Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/8a/0e/97c33bf5009bdbac74fd2beace167cab3f978feb69cc36f1ef79360d6c4e/exceptiongroup-1.3.1-py3-none-any.whl", hash = "sha256:a7a39a3bd276781e98394987d3a5701d0c4edffb633bb7a5144577f82c773598", size = 16740, upload-time = "2025-11-21T23:01:53.443Z" },
]

[[package]]
name = "fsrs"
version = "6.3.1"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "typing-extensions" },
]
sdist = { url = "https://files.pythonhosted.org/packages/d7/00/337de60fd5497ea4fed046192d17fa79809ca2aad7326da2e464d9d8950b/fsrs-6.3.1.tar.gz", hash = "sha256:43c5c6056b97266baf6ebfef9e4cadeb9ac5a4e1b29ffdfb300f445b6e6b15ca", size = 32645, upload-time = "2026-03-10T14:01:03.734Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/67/3c/e7e140f8cdb95b042cb125ee142e7630187e8e78d21847ca81e9d1e99bb8/fsrs-6.3.1-py3-none-any.whl", hash = "sha256:ac1bf9939573592d8c9bc1e11a00bd17e04146dc9f2c913127e2bcc431b9040b", size = 22840, upload-time = "2026-03-10T14:01:01.084Z" },
]

[[package]]
name = "heurams"
version = "0.5.0"
source = { editable = "." }
dependencies = [
{ name = "fsrs" },
{ name = "psutil" },
{ name = "tabulate" },
{ name = "textual" },
@@ -97,8 +249,15 @@ dependencies = [
{ name = "zmq" },
]

[package.dev-dependencies]
dev = [
{ name = "pytest" },
{ name = "pytest-cov" },
]

[package.metadata]
requires-dist = [
{ name = "fsrs", specifier = ">=6.3.1" },
{ name = "psutil", specifier = ">=7.2.2" },
{ name = "tabulate", specifier = ">=0.10.0" },
{ name = "textual", specifier = ">=8.2.3" },
@@ -107,6 +266,21 @@ requires-dist = [
{ name = "zmq", specifier = ">=0.0.0" },
]

[package.metadata.requires-dev]
dev = [
{ name = "pytest", specifier = ">=8.0.0" },
{ name = "pytest-cov", specifier = ">=6.0.0" },
]

[[package]]
name = "iniconfig"
version = "2.3.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/72/34/14ca021ce8e5dfedc35312d08ba8bf51fdd999c576889fc2c24cb97f4f10/iniconfig-2.3.0.tar.gz", hash = "sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730", size = 20503, upload-time = "2025-10-18T21:55:43.219Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/cb/b1/3846dd7f199d53cb17f49cba7e651e9ce294d8497c8c150530ed11865bb8/iniconfig-2.3.0-py3-none-any.whl", hash = "sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12", size = 7484, upload-time = "2025-10-18T21:55:41.639Z" },
]

[[package]]
name = "linkify-it-py"
version = "2.1.0"
@@ -157,6 +331,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/b3/38/89ba8ad64ae25be8de66a6d463314cf1eb366222074cfda9ee839c56a4b4/mdurl-0.1.2-py3-none-any.whl", hash = "sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8", size = 9979, upload-time = "2022-08-14T12:40:09.779Z" },
]

[[package]]
name = "packaging"
version = "26.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/df/de/0d2b39fb4af88a0258f3bac87dfcbb48e73fbdea4a2ed0e2213f9a4c2f9a/packaging-26.1.tar.gz", hash = "sha256:f042152b681c4bfac5cae2742a55e103d27ab2ec0f3d88037136b6bfe7c9c5de", size = 215519, upload-time = "2026-04-14T21:12:49.362Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/7a/c2/920ef838e2f0028c8262f16101ec09ebd5969864e5a64c4c05fad0617c56/packaging-26.1-py3-none-any.whl", hash = "sha256:5d9c0669c6285e491e0ced2eee587eaf67b670d94a19e94e3984a481aba6802f", size = 95831, upload-time = "2026-04-14T21:12:47.56Z" },
]

[[package]]
name = "platformdirs"
version = "4.9.6"
@@ -166,6 +349,15 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/75/a6/a0a304dc33b49145b21f4808d763822111e67d1c3a32b524a1baf947b6e1/platformdirs-4.9.6-py3-none-any.whl", hash = "sha256:e61adb1d5e5cb3441b4b7710bea7e4c12250ca49439228cc1021c00dcfac0917", size = 21348, upload-time = "2026-04-09T00:04:09.463Z" },
]

[[package]]
name = "pluggy"
version = "1.6.0"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/f9/e2/3e91f31a7d2b083fe6ef3fa267035b518369d9511ffab804f839851d2779/pluggy-1.6.0.tar.gz", hash = "sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3", size = 69412, upload-time = "2025-05-15T12:30:07.975Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/54/20/4d324d65cc6d9205fabedc306948156824eb9f0ee1633355a8f7ec5c66bf/pluggy-1.6.0-py3-none-any.whl", hash = "sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746", size = 20538, upload-time = "2025-05-15T12:30:06.134Z" },
]

[[package]]
name = "psutil"
version = "7.2.2"
@@ -212,6 +404,38 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/7e/a72dd26f3b0f4f2bf1dd8923c85f7ceb43172af56d63c7383eb62b332364/pygments-2.20.0-py3-none-any.whl", hash = "sha256:81a9e26dd42fd28a23a2d169d86d7ac03b46e2f8b59ed4698fb4785f946d0176", size = 1231151, upload-time = "2026-03-29T13:29:30.038Z" },
]

[[package]]
name = "pytest"
version = "9.0.3"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "colorama", marker = "sys_platform == 'win32'" },
{ name = "exceptiongroup", marker = "python_full_version < '3.11'" },
{ name = "iniconfig" },
{ name = "packaging" },
{ name = "pluggy" },
{ name = "pygments" },
{ name = "tomli", marker = "python_full_version < '3.11'" },
]
sdist = { url = "https://files.pythonhosted.org/packages/7d/0d/549bd94f1a0a402dc8cf64563a117c0f3765662e2e668477624baeec44d5/pytest-9.0.3.tar.gz", hash = "sha256:b86ada508af81d19edeb213c681b1d48246c1a91d304c6c81a427674c17eb91c", size = 1572165, upload-time = "2026-04-07T17:16:18.027Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/d4/24/a372aaf5c9b7208e7112038812994107bc65a84cd00e0354a88c2c77a617/pytest-9.0.3-py3-none-any.whl", hash = "sha256:2c5efc453d45394fdd706ade797c0a81091eccd1d6e4bccfcd476e2b8e0ab5d9", size = 375249, upload-time = "2026-04-07T17:16:16.13Z" },
]

[[package]]
name = "pytest-cov"
version = "7.1.0"
source = { registry = "https://pypi.org/simple" }
dependencies = [
{ name = "coverage", extra = ["toml"] },
{ name = "pluggy" },
{ name = "pytest" },
]
sdist = { url = "https://files.pythonhosted.org/packages/b1/51/a849f96e117386044471c8ec2bd6cfebacda285da9525c9106aeb28da671/pytest_cov-7.1.0.tar.gz", hash = "sha256:30674f2b5f6351aa09702a9c8c364f6a01c27aae0c1366ae8016160d1efc56b2", size = 55592, upload-time = "2026-03-21T20:11:16.284Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/9d/7a/d968e294073affff457b041c2be9868a40c1c71f4a35fcc1e45e5493067b/pytest_cov-7.1.0-py3-none-any.whl", hash = "sha256:a0461110b7865f9a271aa1b51e516c9a95de9d696734a2f71e3e78f46e1d4678", size = 22876, upload-time = "2026-03-21T20:11:14.438Z" },
]

[[package]]
name = "pyzmq"
version = "27.1.0"
@@ -342,6 +566,60 @@ wheels = [
{ url = "https://files.pythonhosted.org/packages/44/6f/7120676b6d73228c96e17f1f794d8ab046fc910d781c8d151120c3f1569e/toml-0.10.2-py2.py3-none-any.whl", hash = "sha256:806143ae5bfb6a3c6e736a764057db0e6a0e05e338b5630894a5f779cabb4f9b", size = 16588, upload-time = "2020-11-01T01:40:20.672Z" },
]

[[package]]
name = "tomli"
version = "2.4.1"
source = { registry = "https://pypi.org/simple" }
sdist = { url = "https://files.pythonhosted.org/packages/22/de/48c59722572767841493b26183a0d1cc411d54fd759c5607c4590b6563a6/tomli-2.4.1.tar.gz", hash = "sha256:7c7e1a961a0b2f2472c1ac5b69affa0ae1132c39adcb67aba98568702b9cc23f", size = 17543, upload-time = "2026-03-25T20:22:03.828Z" }
wheels = [
{ url = "https://files.pythonhosted.org/packages/f4/11/db3d5885d8528263d8adc260bb2d28ebf1270b96e98f0e0268d32b8d9900/tomli-2.4.1-cp311-cp311-macosx_10_9_x86_64.whl", hash = "sha256:f8f0fc26ec2cc2b965b7a3b87cd19c5c6b8c5e5f436b984e85f486d652285c30", size = 154704, upload-time = "2026-03-25T20:21:10.473Z" },
{ url = "https://files.pythonhosted.org/packages/6d/f7/675db52c7e46064a9aa928885a9b20f4124ecb9bc2e1ce74c9106648d202/tomli-2.4.1-cp311-cp311-macosx_11_0_arm64.whl", hash = "sha256:4ab97e64ccda8756376892c53a72bd1f964e519c77236368527f758fbc36a53a", size = 149454, upload-time = "2026-03-25T20:21:12.036Z" },
{ url = "https://files.pythonhosted.org/packages/61/71/81c50943cf953efa35bce7646caab3cf457a7d8c030b27cfb40d7235f9ee/tomli-2.4.1-cp311-cp311-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:96481a5786729fd470164b47cdb3e0e58062a496f455ee41b4403be77cb5a076", size = 237561, upload-time = "2026-03-25T20:21:13.098Z" },
{ url = "https://files.pythonhosted.org/packages/48/c1/f41d9cb618acccca7df82aaf682f9b49013c9397212cb9f53219e3abac37/tomli-2.4.1-cp311-cp311-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:5a881ab208c0baf688221f8cecc5401bd291d67e38a1ac884d6736cbcd8247e9", size = 243824, upload-time = "2026-03-25T20:21:14.569Z" },
{ url = "https://files.pythonhosted.org/packages/22/e4/5a816ecdd1f8ca51fb756ef684b90f2780afc52fc67f987e3c61d800a46d/tomli-2.4.1-cp311-cp311-musllinux_1_2_aarch64.whl", hash = "sha256:47149d5bd38761ac8be13a84864bf0b7b70bc051806bc3669ab1cbc56216b23c", size = 242227, upload-time = "2026-03-25T20:21:15.712Z" },
{ url = "https://files.pythonhosted.org/packages/6b/49/2b2a0ef529aa6eec245d25f0c703e020a73955ad7edf73e7f54ddc608aa5/tomli-2.4.1-cp311-cp311-musllinux_1_2_x86_64.whl", hash = "sha256:ec9bfaf3ad2df51ace80688143a6a4ebc09a248f6ff781a9945e51937008fcbc", size = 247859, upload-time = "2026-03-25T20:21:17.001Z" },
{ url = "https://files.pythonhosted.org/packages/83/bd/6c1a630eaca337e1e78c5903104f831bda934c426f9231429396ce3c3467/tomli-2.4.1-cp311-cp311-win32.whl", hash = "sha256:ff2983983d34813c1aeb0fa89091e76c3a22889ee83ab27c5eeb45100560c049", size = 97204, upload-time = "2026-03-25T20:21:18.079Z" },
{ url = "https://files.pythonhosted.org/packages/42/59/71461df1a885647e10b6bb7802d0b8e66480c61f3f43079e0dcd315b3954/tomli-2.4.1-cp311-cp311-win_amd64.whl", hash = "sha256:5ee18d9ebdb417e384b58fe414e8d6af9f4e7a0ae761519fb50f721de398dd4e", size = 108084, upload-time = "2026-03-25T20:21:18.978Z" },
{ url = "https://files.pythonhosted.org/packages/b8/83/dceca96142499c069475b790e7913b1044c1a4337e700751f48ed723f883/tomli-2.4.1-cp311-cp311-win_arm64.whl", hash = "sha256:c2541745709bad0264b7d4705ad453b76ccd191e64aa6f0fc66b69a293a45ece", size = 95285, upload-time = "2026-03-25T20:21:20.309Z" },
{ url = "https://files.pythonhosted.org/packages/c1/ba/42f134a3fe2b370f555f44b1d72feebb94debcab01676bf918d0cb70e9aa/tomli-2.4.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:c742f741d58a28940ce01d58f0ab2ea3ced8b12402f162f4d534dfe18ba1cd6a", size = 155924, upload-time = "2026-03-25T20:21:21.626Z" },
{ url = "https://files.pythonhosted.org/packages/dc/c7/62d7a17c26487ade21c5422b646110f2162f1fcc95980ef7f63e73c68f14/tomli-2.4.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:7f86fd587c4ed9dd76f318225e7d9b29cfc5a9d43de44e5754db8d1128487085", size = 150018, upload-time = "2026-03-25T20:21:23.002Z" },
{ url = "https://files.pythonhosted.org/packages/5c/05/79d13d7c15f13bdef410bdd49a6485b1c37d28968314eabee452c22a7fda/tomli-2.4.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:ff18e6a727ee0ab0388507b89d1bc6a22b138d1e2fa56d1ad494586d61d2eae9", size = 244948, upload-time = "2026-03-25T20:21:24.04Z" },
{ url = "https://files.pythonhosted.org/packages/10/90/d62ce007a1c80d0b2c93e02cab211224756240884751b94ca72df8a875ca/tomli-2.4.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:136443dbd7e1dee43c68ac2694fde36b2849865fa258d39bf822c10e8068eac5", size = 253341, upload-time = "2026-03-25T20:21:25.177Z" },
{ url = "https://files.pythonhosted.org/packages/1a/7e/caf6496d60152ad4ed09282c1885cca4eea150bfd007da84aea07bcc0a3e/tomli-2.4.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:5e262d41726bc187e69af7825504c933b6794dc3fbd5945e41a79bb14c31f585", size = 248159, upload-time = "2026-03-25T20:21:26.364Z" },
{ url = "https://files.pythonhosted.org/packages/99/e7/c6f69c3120de34bbd882c6fba7975f3d7a746e9218e56ab46a1bc4b42552/tomli-2.4.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:5cb41aa38891e073ee49d55fbc7839cfdb2bc0e600add13874d048c94aadddd1", size = 253290, upload-time = "2026-03-25T20:21:27.46Z" },
{ url = "https://files.pythonhosted.org/packages/d6/2f/4a3c322f22c5c66c4b836ec58211641a4067364f5dcdd7b974b4c5da300c/tomli-2.4.1-cp312-cp312-win32.whl", hash = "sha256:da25dc3563bff5965356133435b757a795a17b17d01dbc0f42fb32447ddfd917", size = 98141, upload-time = "2026-03-25T20:21:28.492Z" },
{ url = "https://files.pythonhosted.org/packages/24/22/4daacd05391b92c55759d55eaee21e1dfaea86ce5c571f10083360adf534/tomli-2.4.1-cp312-cp312-win_amd64.whl", hash = "sha256:52c8ef851d9a240f11a88c003eacb03c31fc1c9c4ec64a99a0f922b93874fda9", size = 108847, upload-time = "2026-03-25T20:21:29.386Z" },
{ url = "https://files.pythonhosted.org/packages/68/fd/70e768887666ddd9e9f5d85129e84910f2db2796f9096aa02b721a53098d/tomli-2.4.1-cp312-cp312-win_arm64.whl", hash = "sha256:f758f1b9299d059cc3f6546ae2af89670cb1c4d48ea29c3cacc4fe7de3058257", size = 95088, upload-time = "2026-03-25T20:21:30.677Z" },
{ url = "https://files.pythonhosted.org/packages/07/06/b823a7e818c756d9a7123ba2cda7d07bc2dd32835648d1a7b7b7a05d848d/tomli-2.4.1-cp313-cp313-macosx_10_13_x86_64.whl", hash = "sha256:36d2bd2ad5fb9eaddba5226aa02c8ec3fa4f192631e347b3ed28186d43be6b54", size = 155866, upload-time = "2026-03-25T20:21:31.65Z" },
{ url = "https://files.pythonhosted.org/packages/14/6f/12645cf7f08e1a20c7eb8c297c6f11d31c1b50f316a7e7e1e1de6e2e7b7e/tomli-2.4.1-cp313-cp313-macosx_11_0_arm64.whl", hash = "sha256:eb0dc4e38e6a1fd579e5d50369aa2e10acfc9cace504579b2faabb478e76941a", size = 149887, upload-time = "2026-03-25T20:21:33.028Z" },
{ url = "https://files.pythonhosted.org/packages/5c/e0/90637574e5e7212c09099c67ad349b04ec4d6020324539297b634a0192b0/tomli-2.4.1-cp313-cp313-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:c7f2c7f2b9ca6bdeef8f0fa897f8e05085923eb091721675170254cbc5b02897", size = 243704, upload-time = "2026-03-25T20:21:34.51Z" },
{ url = "https://files.pythonhosted.org/packages/10/8f/d3ddb16c5a4befdf31a23307f72828686ab2096f068eaf56631e136c1fdd/tomli-2.4.1-cp313-cp313-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:f3c6818a1a86dd6dca7ddcaaf76947d5ba31aecc28cb1b67009a5877c9a64f3f", size = 251628, upload-time = "2026-03-25T20:21:36.012Z" },
{ url = "https://files.pythonhosted.org/packages/e3/f1/dbeeb9116715abee2485bf0a12d07a8f31af94d71608c171c45f64c0469d/tomli-2.4.1-cp313-cp313-musllinux_1_2_aarch64.whl", hash = "sha256:d312ef37c91508b0ab2cee7da26ec0b3ed2f03ce12bd87a588d771ae15dcf82d", size = 247180, upload-time = "2026-03-25T20:21:37.136Z" },
{ url = "https://files.pythonhosted.org/packages/d3/74/16336ffd19ed4da28a70959f92f506233bd7cfc2332b20bdb01591e8b1d1/tomli-2.4.1-cp313-cp313-musllinux_1_2_x86_64.whl", hash = "sha256:51529d40e3ca50046d7606fa99ce3956a617f9b36380da3b7f0dd3dd28e68cb5", size = 251674, upload-time = "2026-03-25T20:21:38.298Z" },
{ url = "https://files.pythonhosted.org/packages/16/f9/229fa3434c590ddf6c0aa9af64d3af4b752540686cace29e6281e3458469/tomli-2.4.1-cp313-cp313-win32.whl", hash = "sha256:2190f2e9dd7508d2a90ded5ed369255980a1bcdd58e52f7fe24b8162bf9fedbd", size = 97976, upload-time = "2026-03-25T20:21:39.316Z" },
{ url = "https://files.pythonhosted.org/packages/6a/1e/71dfd96bcc1c775420cb8befe7a9d35f2e5b1309798f009dca17b7708c1e/tomli-2.4.1-cp313-cp313-win_amd64.whl", hash = "sha256:8d65a2fbf9d2f8352685bc1364177ee3923d6baf5e7f43ea4959d7d8bc326a36", size = 108755, upload-time = "2026-03-25T20:21:40.248Z" },
{ url = "https://files.pythonhosted.org/packages/83/7a/d34f422a021d62420b78f5c538e5b102f62bea616d1d75a13f0a88acb04a/tomli-2.4.1-cp313-cp313-win_arm64.whl", hash = "sha256:4b605484e43cdc43f0954ddae319fb75f04cc10dd80d830540060ee7cd0243cd", size = 95265, upload-time = "2026-03-25T20:21:41.219Z" },
{ url = "https://files.pythonhosted.org/packages/3c/fb/9a5c8d27dbab540869f7c1f8eb0abb3244189ce780ba9cd73f3770662072/tomli-2.4.1-cp314-cp314-macosx_10_15_x86_64.whl", hash = "sha256:fd0409a3653af6c147209d267a0e4243f0ae46b011aa978b1080359fddc9b6cf", size = 155726, upload-time = "2026-03-25T20:21:42.23Z" },
{ url = "https://files.pythonhosted.org/packages/62/05/d2f816630cc771ad836af54f5001f47a6f611d2d39535364f148b6a92d6b/tomli-2.4.1-cp314-cp314-macosx_11_0_arm64.whl", hash = "sha256:a120733b01c45e9a0c34aeef92bf0cf1d56cfe81ed9d47d562f9ed591a9828ac", size = 149859, upload-time = "2026-03-25T20:21:43.386Z" },
{ url = "https://files.pythonhosted.org/packages/ce/48/66341bdb858ad9bd0ceab5a86f90eddab127cf8b046418009f2125630ecb/tomli-2.4.1-cp314-cp314-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:559db847dc486944896521f68d8190be1c9e719fced785720d2216fe7022b662", size = 244713, upload-time = "2026-03-25T20:21:44.474Z" },
{ url = "https://files.pythonhosted.org/packages/df/6d/c5fad00d82b3c7a3ab6189bd4b10e60466f22cfe8a08a9394185c8a8111c/tomli-2.4.1-cp314-cp314-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:01f520d4f53ef97964a240a035ec2a869fe1a37dde002b57ebc4417a27ccd853", size = 252084, upload-time = "2026-03-25T20:21:45.62Z" },
{ url = "https://files.pythonhosted.org/packages/00/71/3a69e86f3eafe8c7a59d008d245888051005bd657760e96d5fbfb0b740c2/tomli-2.4.1-cp314-cp314-musllinux_1_2_aarch64.whl", hash = "sha256:7f94b27a62cfad8496c8d2513e1a222dd446f095fca8987fceef261225538a15", size = 247973, upload-time = "2026-03-25T20:21:46.937Z" },
{ url = "https://files.pythonhosted.org/packages/67/50/361e986652847fec4bd5e4a0208752fbe64689c603c7ae5ea7cb16b1c0ca/tomli-2.4.1-cp314-cp314-musllinux_1_2_x86_64.whl", hash = "sha256:ede3e6487c5ef5d28634ba3f31f989030ad6af71edfb0055cbbd14189ff240ba", size = 256223, upload-time = "2026-03-25T20:21:48.467Z" },
{ url = "https://files.pythonhosted.org/packages/8c/9a/b4173689a9203472e5467217e0154b00e260621caa227b6fa01feab16998/tomli-2.4.1-cp314-cp314-win32.whl", hash = "sha256:3d48a93ee1c9b79c04bb38772ee1b64dcf18ff43085896ea460ca8dec96f35f6", size = 98973, upload-time = "2026-03-25T20:21:49.526Z" },
{ url = "https://files.pythonhosted.org/packages/14/58/640ac93bf230cd27d002462c9af0d837779f8773bc03dee06b5835208214/tomli-2.4.1-cp314-cp314-win_amd64.whl", hash = "sha256:88dceee75c2c63af144e456745e10101eb67361050196b0b6af5d717254dddf7", size = 109082, upload-time = "2026-03-25T20:21:50.506Z" },
{ url = "https://files.pythonhosted.org/packages/d5/2f/702d5e05b227401c1068f0d386d79a589bb12bf64c3d2c72ce0631e3bc49/tomli-2.4.1-cp314-cp314-win_arm64.whl", hash = "sha256:b8c198f8c1805dc42708689ed6864951fd2494f924149d3e4bce7710f8eb5232", size = 96490, upload-time = "2026-03-25T20:21:51.474Z" },
{ url = "https://files.pythonhosted.org/packages/45/4b/b877b05c8ba62927d9865dd980e34a755de541eb65fffba52b4cc495d4d2/tomli-2.4.1-cp314-cp314t-macosx_10_15_x86_64.whl", hash = "sha256:d4d8fe59808a54658fcc0160ecfb1b30f9089906c50b23bcb4c69eddc19ec2b4", size = 164263, upload-time = "2026-03-25T20:21:52.543Z" },
{ url = "https://files.pythonhosted.org/packages/24/79/6ab420d37a270b89f7195dec5448f79400d9e9c1826df982f3f8e97b24fd/tomli-2.4.1-cp314-cp314t-macosx_11_0_arm64.whl", hash = "sha256:7008df2e7655c495dd12d2a4ad038ff878d4ca4b81fccaf82b714e07eae4402c", size = 160736, upload-time = "2026-03-25T20:21:53.674Z" },
{ url = "https://files.pythonhosted.org/packages/02/e0/3630057d8eb170310785723ed5adcdfb7d50cb7e6455f85ba8a3deed642b/tomli-2.4.1-cp314-cp314t-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:1d8591993e228b0c930c4bb0db464bdad97b3289fb981255d6c9a41aedc84b2d", size = 270717, upload-time = "2026-03-25T20:21:55.129Z" },
{ url = "https://files.pythonhosted.org/packages/7a/b4/1613716072e544d1a7891f548d8f9ec6ce2faf42ca65acae01d76ea06bb0/tomli-2.4.1-cp314-cp314t-manylinux2014_x86_64.manylinux_2_17_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:734e20b57ba95624ecf1841e72b53f6e186355e216e5412de414e3c51e5e3c41", size = 278461, upload-time = "2026-03-25T20:21:56.228Z" },
{ url = "https://files.pythonhosted.org/packages/05/38/30f541baf6a3f6df77b3df16b01ba319221389e2da59427e221ef417ac0c/tomli-2.4.1-cp314-cp314t-musllinux_1_2_aarch64.whl", hash = "sha256:8a650c2dbafa08d42e51ba0b62740dae4ecb9338eefa093aa5c78ceb546fcd5c", size = 274855, upload-time = "2026-03-25T20:21:57.653Z" },
{ url = "https://files.pythonhosted.org/packages/77/a3/ec9dd4fd2c38e98de34223b995a3b34813e6bdadf86c75314c928350ed14/tomli-2.4.1-cp314-cp314t-musllinux_1_2_x86_64.whl", hash = "sha256:504aa796fe0569bb43171066009ead363de03675276d2d121ac1a4572397870f", size = 283144, upload-time = "2026-03-25T20:21:59.089Z" },
{ url = "https://files.pythonhosted.org/packages/ef/be/605a6261cac79fba2ec0c9827e986e00323a1945700969b8ee0b30d85453/tomli-2.4.1-cp314-cp314t-win32.whl", hash = "sha256:b1d22e6e9387bf4739fbe23bfa80e93f6b0373a7f1b96c6227c32bef95a4d7a8", size = 108683, upload-time = "2026-03-25T20:22:00.214Z" },
{ url = "https://files.pythonhosted.org/packages/12/64/da524626d3b9cc40c168a13da8335fe1c51be12c0a63685cc6db7308daae/tomli-2.4.1-cp314-cp314t-win_amd64.whl", hash = "sha256:2c1c351919aca02858f740c6d33adea0c5deea37f9ecca1cc1ef9e884a619d26", size = 121196, upload-time = "2026-03-25T20:22:01.169Z" },
{ url = "https://files.pythonhosted.org/packages/5a/cd/e80b62269fc78fc36c9af5a6b89c835baa8af28ff5ad28c7028d60860320/tomli-2.4.1-cp314-cp314t-win_arm64.whl", hash = "sha256:eab21f45c7f66c13f2a9e0e1535309cee140182a9cdae1e041d02e47291e8396", size = 100393, upload-time = "2026-03-25T20:22:02.137Z" },
{ url = "https://files.pythonhosted.org/packages/7b/61/cceae43728b7de99d9b847560c262873a1f6c98202171fd5ed62640b494b/tomli-2.4.1-py3-none-any.whl", hash = "sha256:0d85819802132122da43cb86656f8d1f8c6587d54ae7dcaf30e90533028b49fe", size = 14583, upload-time = "2026-03-25T20:22:03.012Z" },
]
[[package]]
name = "transitions"
version = "0.9.3"