| id (string, 10 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, 31 chars) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, 8 chars) | updated (string, 8 chars) | content (string, 3.91k–873k chars) | references (dict) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2401.04088 | Mixtral of Experts | We introduce Mixtral 8x7B, a Sparse Mixture of Experts (SMoE) language model. Mixtral has the same architecture as Mistral 7B, with the difference that each layer is composed of 8 feedforward blocks (i.e. experts). For every token, at each layer, a router network selects two experts to process the current state and com... (a top-2 routing sketch follows the table) | http://arxiv.org/pdf/2401.04088 | Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Sandeep Subramanian... | cs.LG, cs.CL | See more details at https://mistral.ai/news/mixtral-of-experts/ | null | cs.LG | 20240108 | 20240108 | arXiv:2401.04088v1 [cs.LG] 8 Jan 2024 # Mixtral of Experts Albert Q. Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Lél... | {"id": "1905.07830"} |
| 2312.17238 | Fast Inference of Mixture-of-Experts Language Models with Offloading | With the widespread adoption of Large Language Models (LLMs), many deep learning practitioners are looking for strategies for running these models more efficiently. One such strategy is to use sparse Mixture-of-Experts (MoE) - a type of model architecture where only a fraction of model layers are active for any given i... (an offloading sketch follows the table) | http://arxiv.org/pdf/2312.17238 | Artyom Eliseev, Denis Mazur | cs.LG, cs.AI, cs.DC | Technical report | null | cs.LG | 20231228 | 20231228 | arXiv:2312.17238v1 [cs.LG] 28 Dec 2023 # Fast Inference of Mixture-of-Experts Language Models with Offloading Artyom Eliseev Moscow Institute of Physics and Technology Yandex School of Data Analysis lavawolfiee@gmail.com # Denis Mazur Moscow Institute of Physics and Technology Yandex Resear... | {"id": "2302.13971"} |
| 2312.11111 | The Good, The Bad, and Why: Unveiling Emotions in Generative AI | Emotion significantly impacts our daily behaviors and interactions. While recent generative AI mod(...TRUNCATED) | http://arxiv.org/pdf/2312.11111 | Cheng Li, Jindong Wang, Yixuan Zhang, Kaijie Zhu, Xinyi Wang, Wenxin Hou, Jianxun Lian, Fang Luo, Q(...TRUNCATED) | cs.AI, cs.CL, cs.HC | Technical report; an extension to EmotionPrompt (arXiv:2307.11760); 34 pages | null | cs.AI | 20231218 | 20231219 | arXiv:2312.11111v2 [cs.AI] 19 Dec 2023 # The Good, The Bad, and Why(...TRUNCATED) | {"id": "2210.09261"} |
| 2312.00752 | Mamba: Linear-Time Sequence Modeling with Selective State Spaces | Foundation models, now powering most of the exciting applications in deep learning, are almost uni(...TRUNCATED) (a selective-scan sketch follows the table) | http://arxiv.org/pdf/2312.00752 | Albert Gu, Tri Dao | cs.LG, cs.AI | null | null | cs.LG | 20231201 | 20231201 | # Mamba: Linear-Time Sequence Modeling with Selective State Spaces # Albert Gu*1 and Tri Dao*2 1M(...TRUNCATED) | {"id": "2302.13971"} |
| 2311.15296 | UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation | Large language models (LLMs) have emerged as pivotal contributors in contemporary natural language(...TRUNCATED) | http://arxiv.org/pdf/2311.15296 | Xun Liang, Shichao Song, Simin Niu, Zhiyu Li, Feiyu Xiong, Bo Tang, Zhaohui Wy, Dawei He, Peng Chen(...TRUNCATED) | cs.CL | 13 pages, submitted to ICDE2024 | null | cs.CL | 20231126 | 20231126 | arXiv:2311.15296v1 [cs.CL] 26 Nov 2023 # UHGEval: Benchmarking the H(...TRUNCATED) | {"id": "2307.03109"} |
| 2311.04254 | Everything of Thoughts: Defying the Law of Penrose Triangle for Thought Generation | Recent advancements in Large Language Models (LLMs) have revolutionized decision-making by breakin(...TRUNCATED) | http://arxiv.org/pdf/2311.04254 | Ruomeng Ding, Chaoyun Zhang, Lu Wang, Yong Xu, Minghua Ma, Wei Zhang, Si Qin, Saravan Rajmohan, Qin(...TRUNCATED) | cs.AI, cs.LG | 17 pages, 5 figures | null | cs.AI | 20231107 | 20231112 | arXiv:2311.04254v2 [cs.AI] 12 Nov 2023 EVERYTHING OF THOUGHTS: D(...TRUNCATED) | {"id": "1706.06708"} |
| 2311.04072 | Beyond Imitation: Leveraging Fine-grained Quality Signals for Alignment | Alignment with human preference is a desired property of large language models (LLMs). Currently, (...TRUNCATED) | http://arxiv.org/pdf/2311.04072 | Geyang Guo, Ranchi Zhao, Tianyi Tang, Wayne Xin Zhao, Ji-Rong Wen | cs.CL | null | null | cs.CL | 20231107 | 20231107 | arXiv:2311.04072v1 [cs.CL] 7 Nov 2023 Preprint. # BEYOND IMITA(...TRUNCATED) | {"id": "2309.00267"} |
| 2311.01964 | Don't Make Your LLM an Evaluation Benchmark Cheater | Large language models (LLMs) have greatly advanced the frontiers of artificial intelligence, attai(...TRUNCATED) | http://arxiv.org/pdf/2311.01964 | Kun Zhou, Yutao Zhu, Zhipeng Chen, Wentong Chen, Wayne Xin Zhao, Xu Chen, Yankai Lin, Ji-Rong Wen, (...TRUNCATED) | cs.CL, cs.AI | 11 pages | null | cs.CL | 20231103 | 20231103 | arXiv:2311.01964v1 [cs.CL] 3 Nov 2023 # Don't Make Your LLM an (...TRUNCATED) | {"id": "2310.18018"} |
| 2311.04915 | Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy Models | We present a novel method, the Chain of Empathy (CoE) prompting, that utilizes insights from psych(...TRUNCATED) (a prompt-template sketch follows the table) | http://arxiv.org/pdf/2311.04915 | Yoon Kyung Lee, Inju Lee, Minjung Shin, Seoyeon Bae, Sowon Hahn | cs.CL, cs.AI, cs.HC | null | null | cs.CL | 20231102 | 20231214 | # Chain of Empathy: Enhancing Empathetic Response of Large Language Models Based on Psychotherapy M(...TRUNCATED) | {"id": "2302.13971"} |
| 2311.01555 | Instruction Distillation Makes Large Language Models Efficient Zero-shot Rankers | Recent studies have demonstrated the great potential of Large Language Models (LLMs) serving as ze(...TRUNCATED) | http://arxiv.org/pdf/2311.01555 | Weiwei Sun, Zheng Chen, Xinyu Ma, Lingyong Yan, Shuaiqiang Wang, Pengjie Ren, Zhumin Chen, Dawei Yi(...TRUNCATED) | cs.IR, cs.CL | null | null | cs.IR | 20231102 | 20231102 | arXiv:2311.01555v1 [cs.IR] 2 Nov 2023 # Instruction Distillation Make(...TRUNCATED) | {"id": "2210.11416"} |
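
The Mixtral row (2401.04088) describes a router that picks 2 of 8 feedforward experts per token, per layer. The following is a minimal sketch of that top-2 routing pattern; the dimensions, the expert feedforward shape, and the softmax-over-selected-logits weighting are illustrative assumptions, not Mixtral's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Top2MoELayer(nn.Module):
    """Sketch of a sparse MoE feedforward layer: a router picks 2 of 8 experts per token.

    All sizes are illustrative assumptions, not Mixtral's real dimensions.
    """
    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        gate_logits = self.router(x)                               # (n_tokens, n_experts)
        top_logits, top_idx = gate_logits.topk(self.top_k, dim=-1)
        weights = F.softmax(top_logits, dim=-1)                    # weights over the 2 chosen experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            for k in range(self.top_k):
                mask = top_idx[:, k] == e                          # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out
```

Only the two selected experts run for each token, so per-token compute stays close to a dense model with two feedforward blocks while the parameter count grows with all eight.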
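The offloading row (2312.17238) rests on the same sparsity: only a couple of experts are active per token, so the remaining expert weights can stay in host RAM and be copied to the accelerator on demand. Below is a generic sketch of such offloading with an LRU cache of GPU-resident experts; the class name, cache capacity, and API are assumptions for illustration, not the paper's exact algorithm.

```python
from collections import OrderedDict

class ExpertOffloader:
    """Keep at most `capacity` experts on the GPU; evict the least-recently-used to CPU.

    `experts` is assumed to be a list of torch nn.Module instances, initially on CPU.
    """
    def __init__(self, experts, capacity=2, device="cuda"):
        self.experts = list(experts)
        self.capacity = capacity
        self.device = device
        self.resident = OrderedDict()    # expert index -> GPU-resident module

    def fetch(self, idx):
        if idx in self.resident:
            self.resident.move_to_end(idx)                    # cache hit: mark recently used
            return self.resident[idx]
        if len(self.resident) >= self.capacity:
            victim, module = self.resident.popitem(last=False)
            self.experts[victim] = module.to("cpu")           # evict LRU expert back to host RAM
        self.resident[idx] = self.experts[idx].to(self.device)
        return self.resident[idx]
```

A practical system would overlap the host-to-device copies with computation, for instance by prefetching experts the router is likely to pick next; that overlap, rather than the cache itself, is where most of the latency savings would come from.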
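The Mamba row (2312.00752) names "selective state spaces": a linear-time recurrence whose step size and input/output matrices depend on the current input rather than being constant. A minimal, unoptimized sketch of such a selective scan follows; the projection shapes and the exponential discretization are simplified assumptions, not the hardware-aware kernel the paper actually describes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def selective_scan(x, A, proj_B, proj_C, proj_dt):
    """Sketch of a selective SSM recurrence: h_t = exp(dt_t * A) * h_{t-1} + dt_t * B_t * x_t.

    x: (T, d) input sequence; A: (d, n) fixed decay matrix (negative entries);
    proj_B, proj_C, proj_dt make B, C and the step size input-dependent (the 'selection').
    """
    T, d = x.shape
    h = torch.zeros(d, A.shape[1])
    ys = []
    for t in range(T):
        dt = F.softplus(proj_dt(x[t]))                        # (d,) input-dependent step size
        B_t, C_t = proj_B(x[t]), proj_C(x[t])                 # (n,), (n,) input-dependent SSM params
        h = torch.exp(dt[:, None] * A) * h + (dt[:, None] * B_t) * x[t][:, None]
        ys.append(h @ C_t)                                    # (d,) per-step output
    return torch.stack(ys)                                    # (T, d)

# Hypothetical usage with toy sizes:
d, n, T = 16, 8, 100
A = -torch.rand(d, n)                                         # negative entries for a stable decay
y = selective_scan(torch.randn(T, d), A, nn.Linear(d, n), nn.Linear(d, n), nn.Linear(d, d))
```

The loop is O(T) in sequence length, which is the "linear-time" property in the title; the input-dependent dt, B, and C are what let the model selectively propagate or forget state.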
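The Chain of Empathy row (2311.04915) proposes prompting the model to reason about the speaker's emotional state before answering. The preview truncates the method details, so the template below is a hypothetical reconstruction of that pattern; the wording, the function name, and the `framework` argument are all invented for illustration.

```python
def build_coe_prompt(client_message: str, framework: str = "cognitive-behavioral") -> str:
    """Hypothetical Chain-of-Empathy prompt: reason about the client's state, then respond."""
    return (
        f'A client says: "{client_message}"\n'
        f"Using a {framework} lens, first reason step by step about:\n"
        "1. What emotion the client is likely feeling.\n"
        "2. What situation or appraisal may be causing it.\n"
        "Then write an empathetic response informed by that reasoning."
    )

print(build_coe_prompt("I keep failing no matter how hard I try."))
```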