Commit History

386ba2d (verified) lgcharpe: Using max_position_embeddings instead of max_sequence_length to standardise with HF
72230ef (verified) lgcharpe: Using max_position_embeddings instead of max_sequence_length to standardise with HF
41b3fbd (verified) davda54: Update config.json
c4f9979 (verified) lgcharpe: Fixing AutoModel initialization to intialize as encoder + Fixing sequence length reading for MTEB
1a516b1 (verified) davda54: Update modeling_gptbert.py
7a782b6 (verified) davda54: Update modeling_gptbert.py
8126029 (verified) davda54: Update modeling_gptbert.py
d3cc1d7 (verified) davda54: Update modeling_gptbert.py
be57688 (verified) davda54: fixed output format
fc8131f (verified) davda54: fix NaNs
36aeed6 (verified) davda54: make FlashAttention logic more robust
695d6bf (verified) davda54: Upload model.safetensors with huggingface_hub
b87d17f (verified) davda54: fix
a43509f (verified) davda54: Update config.json
ebfe554 (verified) davda54: removed SDPA
576f0ce (verified) davda54: Update modeling_gptbert.py
d50210a (verified) davda54: fixed SDPA for older PyTorch versions
4cd5c5c (verified) davda54: Update config.json
f9ad835 (verified) davda54: Upload model_performance.png
5e11925 (verified) davda54: Update README.md
2aedc2b (verified) davda54: Update README.md
5ec302a (verified) davda54: FlashAttention support
f03c18b (verified) davda54: FlashAttention support
7437491 (verified) davda54: FlashAttention support
e25a0ec (verified) lgcharpe: Update special_tokens_map.json
699eb6a (verified) lgcharpe: Update modeling_gptbert.py
971057e (verified) davda54: Upload folder using huggingface_hub
6c21030 (verified) davda54: initial commit