# VitalLM-50M-Instruct: Instruction-Tuned Medical SLM

A 50.55-million-parameter Small Language Model (SLM) fine-tuned for instruction-following clinical dialogue, combining deep biomedical pretraining with supervised instruction alignment.

VitalLM-50M-Instruct is the instruction-tuned successor to VitalLM-50M. Built on a custom decoder-only Transformer architecture pretrained on 764M+ biomedical tokens, the model has been further refined via Supervised Fine-Tuning (SFT) on a curated medical instruction dataset, enabling it to follow clinical prompts, answer patient queries, and generate structured medical responses.
## Key Architectural Choices

### 1. SwiGLU Activation Function

Unlike standard GPT models that use ReLU or GELU, VitalLM-50M uses SwiGLU, a gated activation that helps the network capture the complex, non-linear relationships found in medical symptoms and drug interactions.
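To illustrate why the gate matters, here is a minimal scalar sketch of SwiGLU. The scalar weights `w_gate` and `w_up` are invented stand-ins for the usual weight matrices; this is an illustration, not the model's actual FFN code:

```python
import math

def silu(x):
    # SiLU / Swish: x * sigmoid(x)
    return x / (1.0 + math.exp(-x))

def swiglu(x, w_gate, w_up):
    # SwiGLU gating for a single scalar feature (illustrative only):
    # a SiLU-activated "gate" path multiplies a linear "up" path.
    return silu(w_gate * x) * (w_up * x)

# The gate suppresses the signal for negative pre-activations
# while passing it nearly linearly for large positive ones.
print(swiglu(-2.0, 1.0, 1.0), swiglu(2.0, 1.0, 1.0))
```

In the real block the two paths are full linear projections and a third projection maps the gated result back to `n_embd`.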
### 2. Specialized Biomedical Tokenization

A custom ByteLevelBPE tokenizer with a 16,384-entry vocabulary was developed to preserve medical terminology as meaningful units (e.g., preventing fragmentation of terms like *bronchitis* or *tachycardia*), improving both inference accuracy and speed.
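To see why a domain-specific vocabulary matters, the toy longest-match tokenizer below (not real BPE, which applies learned merge rules) shows how a general vocabulary fragments a clinical term that a medical vocabulary keeps whole. Both vocabularies here are invented for the example:

```python
def greedy_tokenize(text, vocab):
    """Toy longest-match tokenizer (illustration only; real BPE
    applies learned merge rules rather than longest-match lookup)."""
    tokens, i = [], 0
    while i < len(text):
        # Take the longest substring starting at i that is in the vocab,
        # falling back to a single character.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab or j == i + 1:
                tokens.append(text[i:j])
                i = j
                break
    return tokens

general_vocab = {"ta", "chy", "car", "dia"}   # hypothetical generic pieces
medical_vocab = {"tachycardia"}               # hypothetical domain vocab

print(greedy_tokenize("tachycardia", general_vocab))  # fragments the term
print(greedy_tokenize("tachycardia", medical_vocab))  # one meaningful unit
```

Fewer, more meaningful tokens both shorten sequences (faster inference) and give the model whole-concept embeddings to work with.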
## Technical Specifications
| Parameter | Value | Notes |
|---|---|---|
| Total Parameters | 50.55 Million | Optimized for edge/mobile deployment |
| Architecture | Decoder-only Transformer | Custom GPT-style |
| Layers (n_layer) | 10 | Hierarchical clinical reasoning |
| Attention Heads (n_head) | 8 | Multi-head attention |
| Embedding Dim (n_embd) | 512 | Medical concept vector space |
| Context Window | 256 tokens | Clinical dialogues & Q&A |
| Activation | SwiGLU | Enhanced reasoning density |
| Tokenizer | ByteLevelBPE | Vocabulary size: 16,384 |
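As a sanity check, the figures in the table roughly account for the stated 50.55M parameters. The sketch below assumes a SwiGLU FFN hidden size of 4 × `n_embd` and tied input/output embeddings; neither assumption is confirmed by this card, and small terms (layer norms, biases) are ignored:

```python
# Back-of-envelope parameter estimate from the specifications table.
vocab_size, n_layer, n_head, n_embd = 16384, 10, 8, 512
ffn_hidden = 4 * n_embd  # assumed, not stated in the card

embedding = vocab_size * n_embd          # tied with LM head (assumed)
attn_per_layer = 4 * n_embd * n_embd     # Q, K, V, and output projections
ffn_per_layer = 3 * n_embd * ffn_hidden  # SwiGLU uses three matrices

total = embedding + n_layer * (attn_per_layer + ffn_per_layer)
print(f"{total / 1e6:.2f}M")  # ≈ 50.33M, close to the reported 50.55M
```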
## Training Stage 1: Pretraining

### Data Strategy
- Corpus: 550M+ tokens of filtered biomedical research, clinical guidelines, and synthetic medical dialogues.
- Sources: PubMed QA, MedMCQA, BI55/MedText.
- Pre-processing: Extensive de-duplication and signal-preserving cleaning.
### Hardware & Optimization
- Compute: NVIDIA P100 GPU (Kaggle)
- Optimizer: AdamW with Weight Decay (0.1)
- Scheduler: Cosine Learning Rate Decay
- Strategy: Multi-session training with custom state-recovery logic
### Pretraining Results
| Metric | Value |
|---|---|
| Final Training Loss | 3.32 |
| Final Validation Loss | 3.66 |
| Generalization Gap | 0.34 |
## Training Stage 2: Supervised Fine-Tuning (SFT)

### SFT Dataset

- Dataset: `Mohammed-Altaf/medical-instruction-100k`
- Size: ~100,000 instruction-response pairs
- Format: Instruction-following medical Q&A covering symptoms, diagnoses, treatments, and clinical dialogue
### SFT Objective

The model was fine-tuned to shift from open-ended generation (pretraining) to structured instruction-following, enabling it to respond reliably to clinical prompts in a doctor-patient dialogue format.
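A common implementation detail for this shift (a standard SFT convention, not confirmed for this particular model) is masking the prompt tokens out of the loss so gradients come only from the response:

```python
IGNORE_INDEX = -100  # the default ignore_index of PyTorch's CrossEntropyLoss

def build_sft_labels(prompt_ids, response_ids):
    """Concatenate prompt and response, masking prompt positions so the
    loss is computed only on the response tokens. A common SFT convention;
    whether VitalLM-50M-Instruct used it is an assumption."""
    input_ids = list(prompt_ids) + list(response_ids)
    labels = [IGNORE_INDEX] * len(prompt_ids) + list(response_ids)
    return input_ids, labels

ids, labels = build_sft_labels([5, 9, 2], [7, 3])
print(ids)     # [5, 9, 2, 7, 3]
print(labels)  # [-100, -100, -100, 7, 3]
```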
### SFT Hardware & Optimization
- Compute: NVIDIA P100 GPU (Kaggle)
- Optimizer: AdamW with Weight Decay (0.1)
- Scheduler: Cosine Learning Rate Decay with linear warmup (peak LR: 2e-5)
- Training Duration: ~4,300 iterations
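The schedule above can be sketched as follows. The peak LR (2e-5) and iteration count (~4,300) come from this card; the warmup length and floor LR are illustrative assumptions:

```python
import math

def lr_at(step, peak_lr=2e-5, warmup_steps=200, total_steps=4300, min_lr=0.0):
    """Linear warmup to peak_lr, then cosine decay toward min_lr.
    warmup_steps and min_lr are assumed values for illustration."""
    if step < warmup_steps:
        # Linear warmup
        return peak_lr * (step + 1) / warmup_steps
    # Cosine decay over the remaining steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return min_lr + 0.5 * (peak_lr - min_lr) * (1 + math.cos(math.pi * progress))

print(lr_at(0), lr_at(200), lr_at(4299))  # ramp-up, peak, near-zero tail
```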
### SFT Results
| Metric | Value |
|---|---|
| Best Training Loss | 2.9866 |
| Final Training Loss | ~2.96 |
| Final Validation Loss | ~2.99 |
| Final Train Perplexity | ~19.5 |
| Final Val Perplexity | ~19.8 |
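The reported perplexities are consistent with the losses, since perplexity is simply the exponential of the cross-entropy loss:

```python
import math

# Perplexity = exp(cross-entropy loss); checking the reported SFT numbers.
train_loss, val_loss = 2.96, 2.99
print(round(math.exp(train_loss), 1))  # ≈ 19.3, near the reported ~19.5
print(round(math.exp(val_loss), 1))    # ≈ 19.9, near the reported ~19.8
```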
## Usage & Implementation

### Download Required Files
Before running any code, you need the following files. Download them directly from this repository and the Hugging Face model page:
| File | Source | Description |
|---|---|---|
| `VitalLM_SFT_best.pt` | Hugging Face | Model weights (SFT) |
| `model.py` | GitHub | Custom model architecture |
| `vocab_50m.json` | Hugging Face | Tokenizer vocabulary |
| `merges_50m.txt` | Hugging Face | BPE merge rules |

⚠️ All four files must be present in the same working directory before running inference. `model.py` contains the custom `SLM` and `SLMConfig` classes, which are not available in the standard `transformers` library and cannot be skipped.
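A small pre-flight check (the helper name here is our own, not part of the repository) can catch a missing file before inference fails mid-script:

```python
import os

REQUIRED_FILES = [
    "VitalLM_SFT_best.pt",  # weights filename assumed from the repo files list
    "model.py",
    "vocab_50m.json",
    "merges_50m.txt",
]

def missing_files(directory="."):
    """Return the required files not found in `directory`."""
    return [f for f in REQUIRED_FILES
            if not os.path.isfile(os.path.join(directory, f))]

missing = missing_files()
print("Missing files:" if missing else "All files present.", missing or "")
```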
### Install Dependencies

```bash
pip install torch transformers tokenizers
```
### Loading the Instruction-Tuned Model
```python
import torch
import torch.nn.functional as F
from model import SLM, SLMConfig
from tokenizers import ByteLevelBPETokenizer
from transformers import PreTrainedTokenizerFast

# 1. Hardware Setup
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")

# 2. Initialize Architecture
config = SLMConfig(
    vocab_size=16384,
    n_layer=10,
    n_head=8,
    n_embd=512,
    block_size=256,
    dropout=0.0  # Set to 0.0 for stable inference
)
model = SLM(config)

# 3. Load SFT Weights
weights_path = "VitalLM_SFT_best.pt"
print(f"Loading SFT weights from {weights_path}...")
state_dict = torch.load(weights_path, map_location=device)
model.load_state_dict(state_dict)
model.to(device)
model.eval()

# 4. Initialize Tokenizer
base_tokenizer = ByteLevelBPETokenizer(
    vocab="vocab_50m.json",
    merges="merges_50m.txt"
)
tokenizer = PreTrainedTokenizerFast(
    tokenizer_object=base_tokenizer,
    eos_token="<|endoftext|>",
    bos_token="<|endoftext|>",
    unk_token="<|endoftext|>",
    pad_token="<|endoftext|>"
)
```
### Generation Function
```python
def generate_medical_response(prompt, max_new_tokens=130, temperature=0.25,
                              top_k=30, top_p=0.9, repetition_penalty=1.25):
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model.to(device)

    encoded = tokenizer.encode(prompt)
    # Guard against empty encoding
    if len(encoded) == 0:
        return "Error: prompt could not be tokenized."
    input_ids = torch.tensor(encoded, dtype=torch.long).unsqueeze(0).to(device)

    with torch.no_grad():
        for _ in range(max_new_tokens):
            # Crop the context to the model's 256-token window
            input_ids_cond = input_ids[:, -256:]
            # Guard against empty tensor entering the model
            if input_ids_cond.size(1) == 0:
                break
            logits, _ = model(input_ids_cond)
            logits = logits[:, -1, :]  # (1, vocab_size)
            logits = logits / temperature

            # Repetition penalty
            for token in set(input_ids[0].tolist()):
                if logits[0, token] > 0:
                    logits[0, token] /= repetition_penalty
                else:
                    logits[0, token] *= repetition_penalty

            # Top-p (nucleus) filtering
            sorted_logits, sorted_indices = torch.sort(logits, descending=True)
            cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
            sorted_indices_to_remove = cumulative_probs > top_p
            # Shift the mask right so the first token crossing the threshold is kept
            sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
            sorted_indices_to_remove[..., 0] = 0
            logits[0, sorted_indices[sorted_indices_to_remove]] = -float('Inf')

            # Top-k filtering
            if top_k is not None:
                v, _ = torch.topk(logits, min(top_k, logits.size(-1)))
                logits[logits < v[:, [-1]]] = -float('Inf')

            next_token = torch.multinomial(F.softmax(logits, dim=-1), num_samples=1)
            input_ids = torch.cat((input_ids, next_token), dim=1)
            if next_token.item() == tokenizer.eos_token_id:
                break

    return tokenizer.decode(input_ids[0].tolist(), skip_special_tokens=True)

# Example Usage
if __name__ == "__main__":
    prompt = "Patient: I have been feeling very thirsty and urinating frequently. Doctor:"
    print("\n--- Generating Response ---")
    response = generate_medical_response(prompt)
    print(response)
```
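The nucleus (top-p) step inside the loop can be hard to read through the tensor indexing. This standalone pure-Python sketch reproduces the same keep/drop decision on toy logits: tokens are ranked by probability, and the smallest set whose cumulative probability exceeds `top_p` is kept (including the token that crosses the threshold):

```python
import math

def top_p_filter(logits, top_p=0.9):
    """Return the sorted indices of tokens kept by nucleus sampling;
    mirrors the masking logic in the generation loop above."""
    order = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    exps = [math.exp(logits[i]) for i in order]
    total = sum(exps)
    kept, cum = [], 0.0
    for idx, e in zip(order, exps):
        kept.append(idx)  # the token crossing the threshold is kept
        cum += e / total
        if cum > top_p:
            break
    return sorted(kept)

print(top_p_filter([2.0, 1.0, 0.1, -1.0], top_p=0.9))  # → [0, 1, 2]
```

With `top_p=0.9`, the lowest-probability token is dropped; the real loop then applies top-k on top of this before sampling.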
### Recommended Prompt Format

For best results with the SFT model, use the following dialogue-style format:

```
Patient: <symptom/question description>
Doctor:
```
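A tiny helper (our own, not part of the repository) can keep prompts in this shape. The example in the generation script uses a single-line variant, so that is what is built here:

```python
def build_prompt(patient_message: str) -> str:
    """Wrap a patient message in the dialogue format the SFT model expects.
    Single-line form, matching the example in the generation script."""
    return f"Patient: {patient_message.strip()} Doctor:"

print(build_prompt("I have been feeling very thirsty and urinating frequently."))
```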
## ⚠️ Limitations & Ethical Considerations
- Not a clinical tool: VitalLM-50M-Instruct is a research model and is not validated for real-world medical use. Outputs must not be used as a substitute for professional medical advice.
- Hallucination risk: As with all language models, this model may generate plausible-sounding but factually incorrect medical information.
- Context length: The 256-token context window limits complex multi-turn reasoning.
- Scope: The model performs best on common conditions and standard clinical language; rare diseases and specialized sub-fields may yield lower quality outputs.
## Repository Files

| File | Description |
|---|---|
| `VitalLM_SFT_best.pt` | SFT model weights (primary) |
| `vital_lm_50m_weights.pt` | Pretrained base model weights |
| `model.py` | Model architecture (`SLM`, `SLMConfig`) |
| `vocab_50m.json` | Custom BPE tokenizer vocabulary |
| `merges_50m.txt` | BPE merge rules |
| `config.json` | Model configuration |
VitalLM-50M-Instruct is released under the Apache 2.0 License. Use responsibly.