Eryk-Chmielewski
12 followers · 91 following
AI & ML interests
None yet
Recent Activity
Liked a model 3 minutes ago: inclusionAI/LLaDA2.1-flash
Reacted to umarbutler's post with 🔥 23 minutes ago:
What happens when you annotate, extract, and disambiguate every entity mentioned in the longest U.S. Supreme Court decision in history? What if you then linked those entities to each other and visualized it as a network? This is the result of enriching all 241 pages and 111,267 words of Dred Scott v. Sandford (1857) with Kanon 2 Enricher in less than ten seconds, at a cost of 47 cents.

Dred Scott v. Sandford is by far the longest U.S. Supreme Court decision, and has variously been called "the worst Supreme Court decision ever" and "the Court's greatest self-inflicted wound" for its denial of the rights of African Americans. Thanks to Kanon 2 Enricher, we now also know that the case contains 950 numbered paragraphs, 6 footnotes, 178 people mentioned 1,340 times, 99 locations mentioned 1,294 times, and 298 external documents referenced 940 times.

For an American case, there are a decent number of references to British precedents (27 to be exact), including the Magna Carta (¶ 928). Surprisingly, though, the Magna Carta is not the oldest citation referenced. That would be the Institutes of Justinian (¶ 315), dated around 533 CE. The oldest city mentioned is Rome (founded 753 BCE) (¶ 311), the oldest person is Justinian (born 527 CE) (¶ 314), and the oldest year referenced is 1371, when 'Charles V of France exempted all the inhabitants of Paris from serfdom' (¶ 370).

All this information and more was extracted in 9 seconds. That's how powerful Kanon 2 Enricher, my latest LLM for document enrichment and hierarchical graphitization, is. If you'd like to play with it yourself now that it's available in closed beta, you can apply to the Isaacus Beta Program here: https://isaacus.com/beta.
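The post describes linking extracted entities to one another and visualizing them as a network. As a rough illustration only, here is a minimal Python sketch of how one could build such a co-mention graph with networkx; the mention records below are invented from the examples cited in the post and do not reflect the actual Kanon 2 Enricher output schema or the Isaacus API.

```python
# Hypothetical sketch: entity co-mention network from paragraph-level mentions.
# Record format (paragraph_number, entity_name, entity_type) is assumed, not real.
import networkx as nx

mentions = [
    (311, "Rome", "location"),
    (314, "Justinian", "person"),
    (315, "Institutes of Justinian", "document"),
    (370, "Charles V of France", "person"),
    (370, "Paris", "location"),
    (928, "Magna Carta", "document"),
]

graph = nx.Graph()
by_paragraph = {}
for para, name, etype in mentions:
    graph.add_node(name, type=etype)
    by_paragraph.setdefault(para, []).append(name)

# Link entities mentioned in the same numbered paragraph.
for para, names in by_paragraph.items():
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            graph.add_edge(a, b, paragraph=para)

# Rank entities by how many other entities they co-occur with.
print(sorted(graph.degree, key=lambda kv: kv[1], reverse=True))
```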
Upvoted a paper about 22 hours ago:
MemSkill: Learning and Evolving Memory Skills for Self-Evolving Agents
Eryk-Chmielewski's models (17)
Eryk-Chmielewski/phi-4_20250319-130137_adapter · Updated Mar 19, 2025
Eryk-Chmielewski/phi-4_20250318-100145_adapter · Updated Mar 18, 2025
Eryk-Chmielewski/phi-4_20250318-093210_adapter · Updated Mar 18, 2025
Eryk-Chmielewski/phi-4_20250313-151453_adapter · Updated Mar 13, 2025
Eryk-Chmielewski/Qwen-3B-unsloth-bnb-4bit_20250313-135134_adapter · Updated Mar 13, 2025
Eryk-Chmielewski/phi-4-unsloth-bnb-4bit_20250305-231307_adapter · Updated Mar 6, 2025
Eryk-Chmielewski/_adapter2 · Updated Mar 3, 2025
Eryk-Chmielewski/_adapter · Updated Mar 3, 2025
Eryk-Chmielewski/phi-4-unsloth-bnb-4bit_20250303090345_adapter · Updated Mar 3, 2025
Eryk-Chmielewski/phi-4-unsloth-bnb-4bit_20250303082627_adapter · Updated Mar 3, 2025
Eryk-Chmielewski/phi-4-unsloth-bnb-4bit_20250303002624_adapter · Updated Mar 3, 2025
Eryk-Chmielewski/phi-4-unsloth-bnb-4bit_20250303002624 · Updated Mar 2, 2025
Eryk-Chmielewski/model_20250303002624_phi-4-unsloth-bnb-4bit · Updated Mar 2, 2025
Eryk-Chmielewski/test-adapter-20250303002624-phi-4-unsloth-bnb-4bit · Updated Mar 2, 2025
Eryk-Chmielewski/test-adapter-phi-4-unsloth-bnb-4bit · Updated Feb 28, 2025
Eryk-Chmielewski/reasoning-adapter-v1 · Updated Feb 28, 2025
Eryk-Chmielewski/reasoning-v1 · Updated Feb 28, 2025