# Stack: In-context learning of single-cell biology
Stack is a large-scale encoder-decoder foundation model trained on 150 million uniformly preprocessed single cells. It introduces a novel tabular attention architecture that enables both intra- and inter-cellular information flow, treating cell-by-gene matrix chunks as the basic unit of input data. Through in-context learning, Stack delivers substantial improvements in generalizing biological effects and can generate unseen cell profiles in novel contexts.
## Installation
### Using pip
```bash
# Install from PyPI
pip install arc-stack
# Or install from source for development
git clone https://github.com/ArcInstitute/stack.git
cd stack
pip install -e .
```
### Using uv
```bash
# Install from PyPI
uv pip install arc-stack
# Or install from source for development
git clone https://github.com/ArcInstitute/stack.git
cd stack
uv pip install -e .
```
## Quick Start
- Use Stack to embed your single-cell data: [Notebook](notebooks/tutorial-embed.ipynb)
- Use Stack to zero-shot predict unseen perturbation/observation profiles: [Notebook](notebooks/tutorial-predict.ipynb)
### Training Stack from Scratch
```bash
# Once installed, the console entry point becomes available
stack-train \
--dataset_configs "/path/to/data:false:gene_symbols" \
--genelist_path "hvg_genes.pkl" \
--save_dir "./checkpoints" \
--sample_size 256 \
--batch_size 32 \
--n_hidden 100 \
--token_dim 16 \
--n_layers 9 \
--max_epochs 10
# Alternatively, invoke the module directly when working from a cloned repo
python -m stack.cli.launch_training [args...]
```
### Fine-tuning Stack with Frozen Teacher
```bash
stack-finetune \
--checkpoint_path "./checkpoints/pretrained.ckpt" \
--dataset_configs "human:/path/to/data:donor_id:cell_type:false" \
--genelist_path "hvg_genes.pkl" \
--save_dir "./finetuned_checkpoints" \
--sample_size 512 \
--batch_size 8 \
--replacement_ratio 0.75 \
--max_epochs 8
# Or use uv run
uv run stack-finetune [args...]
# Repository wrapper remains available for local development
python -m stack.cli.launch_finetuning [args...]
```
### Running Stack with configuration files
Both `launch_training.py` and `launch_finetuning.py` accept a `--config` flag that points to a YAML or JSON file. Any command-line arguments omitted after `--config` inherit their values from the file, while flags supplied on the command line override the configuration. Example configs mirroring the provided Slurm scripts live under `configs/`:
```bash
# Train with the preset configuration
stack-train --config configs/training/bc_large.yaml
# Override a single hyperparameter without editing the file
stack-train --config configs/training/bc_large.yaml --learning_rate 5e-5
# Fine-tune using a config file
stack-finetune --config configs/finetuning/ft_parsecg.yaml
# Direct module invocation is still supported if you prefer python -m
python -m stack.cli.launch_training --config configs/training/bc_large.yaml
```
> **Note:** YAML configs require [`pyyaml`](https://pyyaml.org/). Install it with `pip install pyyaml` or use a JSON config file.
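As an illustration, a minimal training config might look like the following. The exact keys are an assumption here, mirroring the `stack-train` flags shown above; consult the files under `configs/` for the authoritative set:

```yaml
# Hypothetical training config -- keys assumed to mirror stack-train flags
dataset_configs: "/path/to/data:false:gene_symbols"
genelist_path: "hvg_genes.pkl"
save_dir: "./checkpoints"
sample_size: 256
batch_size: 32
n_hidden: 100
token_dim: 16
n_layers: 9
max_epochs: 10
learning_rate: 5.0e-5
```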
### Extracting Stack Embeddings
```bash
stack-embedding \
--checkpoint "./checkpoints/pretrained.ckpt" \
--adata "data.h5ad" \
--genelist "hvg_genes.pkl" \
--output "embeddings.h5ad" \
--batch-size 32
# Or use uv run
uv run stack-embedding \
--checkpoint "./checkpoints/pretrained.ckpt" \
--adata "data.h5ad" \
--genelist "hvg_genes.pkl" \
--output "embeddings.h5ad" \
--batch-size 32
```
### In-Context Generation with Stack
```bash
stack-generation \
--checkpoint "./checkpoints/pretrained.ckpt" \
--base-adata "base_data.h5ad" \
--test-adata "test_data.h5ad" \
--genelist "hvg_genes.pkl" \
--output-dir "./generations" \
--split-column "donor_id"
# Or use uv run
uv run stack-generation \
--checkpoint "./checkpoints/pretrained.ckpt" \
--base-adata "base_data.h5ad" \
--test-adata "test_data.h5ad" \
--genelist "hvg_genes.pkl" \
--output-dir "./generations" \
--split-column "donor_id"
```
## Model Architecture
- **Tabular Attention**: Alternating cell-wise and gene-wise attention layers
- **Token Dimension**: Configurable token embedding dimension (default: 16)
- **Hidden Dimension**: Gene dimension reduction (default: 100)
- **Masking Strategy**: Rectangular masking with variable rates (0.1-0.8)
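To make the alternating attention pattern concrete, here is a minimal NumPy sketch of one tabular-attention round on a cells × genes × token tensor. The function names, the single unparameterized attention head, and the layer ordering are illustrative assumptions, not Stack's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # Plain scaled dot-product self-attention over the second-to-last axis.
    # x: (..., seq, token_dim); Q = K = V = x in this sketch.
    scores = x @ np.swapaxes(x, -1, -2) / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

def tabular_attention_round(chunk):
    # chunk: (n_cells, n_genes, token_dim) -- one cell-by-gene matrix chunk.
    # Gene-wise attention: within each cell, genes attend to one another.
    chunk = self_attention(chunk)
    # Cell-wise attention: within each gene, cells attend to one another,
    # letting information flow between cells in the chunk.
    chunk = np.swapaxes(chunk, 0, 1)   # (n_genes, n_cells, token_dim)
    chunk = self_attention(chunk)
    return np.swapaxes(chunk, 0, 1)    # back to (n_cells, n_genes, token_dim)

x = np.random.default_rng(0).normal(size=(32, 100, 16))  # 32 cells, 100 genes
y = tabular_attention_round(x)
print(y.shape)  # (32, 100, 16)
```

Each round preserves the chunk's shape, so rounds can be stacked into a deep network while keeping the cell-by-gene table as the working representation.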
## Data Preparation
### Computing Highly Variable Genes (HVGs)
```python
from stack.data.datasets import DatasetConfig, compute_hvg_union
configs = [DatasetConfig(path="/data/path", filter_organism=True)]
hvg_genes = compute_hvg_union(configs, n_top_genes=1000, output_path="hvg.pkl")
```
### Dataset Configuration Format
- **Human datasets**: `human:/path:donor_col:cell_type_col[:filter_organism[:gene_col]]`
- **Drug datasets**: `drug:/path:condition_col:cell_line_col:control_condition[:filter_organism[:gene_col]]`
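When scripting many datasets, the colon-delimited strings can be assembled programmatically. `make_dataset_config` below is a hypothetical helper for illustration, not part of the package:

```python
def make_dataset_config(kind, path, *cols, filter_organism=None, gene_col=None):
    """Assemble a colon-delimited dataset config string.

    kind: "human" or "drug"; cols are the required column names for that
    kind (e.g. donor and cell-type columns for human datasets). The
    optional fields are appended in order, since gene_col follows
    filter_organism in the format.
    """
    parts = [kind, path, *cols]
    if filter_organism is not None:
        parts.append("true" if filter_organism else "false")
        if gene_col is not None:
            parts.append(gene_col)
    return ":".join(parts)

cfg = make_dataset_config("human", "/path/to/data", "donor_id", "cell_type",
                          filter_organism=False)
print(cfg)  # human:/path/to/data:donor_id:cell_type:false
```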
## Key Features
- **In-Context Learning**: Zero-shot generalization to new biological contexts
- **Multi-Dataset Training**: Simultaneous training on multiple single-cell datasets
- **Frozen Teacher Fine-tuning**: Novel fine-tuning procedure with stable teacher targets
- **Efficient Data Loading**: Optimized HDF5 loading with sparse matrix support
> **Note:** `scShiftAttentionModel` remains available as an alias for backward compatibility.
## Citation
If you use Stack in your research, please cite the Stack [paper](https://www.biorxiv.org/content/10.64898/2026.01.09.698608v1).
## Licenses
Stack code is [licensed](LICENSE) under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license (CC BY-NC-SA 4.0).
The model weights and output are licensed under the [Arc Research Institute Stack Model Non-Commercial License](MODEL_LICENSE.md) and subject to the [Arc Research Institute Stack Model Acceptable Use Policy](MODEL_ACCEPTABLE_USE_POLICY.md).