# `data/preprocessed/` — Preprocessed tensor caches

Tokenize `jsonl` files offline into `.pt` tensor bundles to reduce per-step CPU load during SFT and related training stages.
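The offline pass can be sketched roughly as below. This is a minimal illustration, not the repo's actual script: the toy tokenizer, record fields, and padding scheme are assumptions, and a real run would convert the lists to tensors and save with `torch.save`.

```python
MAX_LEN = 8  # illustrative; the real max length is recorded in meta.json


def toy_tokenize(text):
    """Stand-in tokenizer: maps each whitespace token to a dummy id."""
    return [hash(tok) % 1000 for tok in text.split()]


def preprocess(records, max_len=MAX_LEN, pad_id=0):
    """Build a tensor-dict-shaped bundle (plain lists here; tensors in the real cache)."""
    bundle = {"input_ids": [], "labels": [], "attention_mask": []}
    for rec in records:
        ids = toy_tokenize(rec["text"])[:max_len]
        mask = [1] * len(ids) + [0] * (max_len - len(ids))
        ids = ids + [pad_id] * (max_len - len(ids))
        bundle["input_ids"].append(ids)
        bundle["labels"].append(list(ids))  # plain LM objective for the sketch
        bundle["attention_mask"].append(mask)
    return bundle


records = [{"text": "def add(a, b): return a + b"}]
bundle = preprocess(records)
# A real script would now do: torch.save(tensorized(bundle), "train.pt")
```

The point of caching this shape on disk is that the training dataloader only has to index into pre-built tensors instead of tokenizing on the fly.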

## Subdirectories

### `sft_dream_py/`

| File | Description |
|------|-------------|
| `train.pt` | Training split tensor dict (`input_ids`, `labels`, `attention_mask`, etc.; confirm with `torch.load`) |
| `val.pt` | Validation split |
| `meta.json` | Source paths, tokenizer name, max length, etc. |
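A cache folder can be sanity-checked from the standard library alone by reading `meta.json`; the exact key names below are guesses mirroring the fields listed in the table, so verify them against a real file.

```python
import json
from pathlib import Path

# Hypothetical meta.json contents mirroring the fields in the table above.
meta = {
    "source_paths": ["data/raw/sft_dream_py/train.jsonl"],  # assumed field name
    "tokenizer": "example-tokenizer",                       # assumed field name
    "max_length": 2048,
}
path = Path("meta.json")
path.write_text(json.dumps(meta, indent=2))

# Reading it back is how you would check a real cache folder.
loaded = json.loads(path.read_text())
print(loaded["max_length"])
```

To inspect the tensors themselves, `torch.load("train.pt")` returns the dict, and listing its `.keys()` confirms which fields a given cache actually carries.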

### `sft_dream_py_ast/` (if present)

Same general layout, with **AST**-derived extra fields; it often contains **many experiment subfolders** (see [`sft_dream_py_ast/README.md`](sft_dream_py_ast/README.md)). Tensor keys follow `train/sft_dream_dataset_ast*.py`.

Generation is usually handled by a custom script or a `--preprocess`-style code path in `train/`; `dataset.preprocessed_dir` in the training YAML must match the folder name here.
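As a hypothetical illustration of that wiring (only `dataset.preprocessed_dir` is named in this README; the surrounding keys are invented):

```yaml
dataset:
  # Must match a folder name under data/preprocessed/
  preprocessed_dir: sft_dream_py
```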