SreyanG-NVIDIA committed on
Commit 1f077bb · verified · 1 Parent(s): 155a1a1

Update README
.gitattributes CHANGED
@@ -34,3 +34,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
  tokenizer.json filter=lfs diff=lfs merge=lfs -text
+ static/mf_logo.png filter=lfs diff=lfs merge=lfs -text
+ static/mf_main.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,199 +1,412 @@
  ---
  library_name: transformers
- tags: []
  ---

- # Model Card for Model ID

- <!-- Provide a quick summary of what the model is/does. -->

- ## Model Details

- ### Model Description

- <!-- Provide a longer summary of what this model is. -->

- This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- - **Developed by:** [More Information Needed]
- - **Funded by [optional]:** [More Information Needed]
- - **Shared by [optional]:** [More Information Needed]
- - **Model type:** [More Information Needed]
- - **Language(s) (NLP):** [More Information Needed]
- - **License:** [More Information Needed]
- - **Finetuned from model [optional]:** [More Information Needed]

- ### Model Sources [optional]

- <!-- Provide the basic links for the model. -->

- - **Repository:** [More Information Needed]
- - **Paper [optional]:** [More Information Needed]
- - **Demo [optional]:** [More Information Needed]

- ## Uses

- <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

- ### Direct Use

- <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

- [More Information Needed]

- ### Downstream Use [optional]

- <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

- [More Information Needed]

- ### Out-of-Scope Use

- <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

- [More Information Needed]

- ## Bias, Risks, and Limitations

- <!-- This section is meant to convey both technical and sociotechnical limitations. -->

- [More Information Needed]

- ### Recommendations

- <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

- Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

- ## How to Get Started with the Model

- Use the code below to get started with the model.

- [More Information Needed]

- ## Training Details

- ### Training Data

- <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

- [More Information Needed]

- ### Training Procedure

- <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

- #### Preprocessing [optional]

- [More Information Needed]

- #### Training Hyperparameters

- - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

- #### Speeds, Sizes, Times [optional]

- <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

- [More Information Needed]

- ## Evaluation

- <!-- This section describes the evaluation protocols and provides the results. -->

- ### Testing Data, Factors & Metrics

- #### Testing Data

- <!-- This should link to a Dataset Card if possible. -->

- [More Information Needed]

- #### Factors

- <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

- [More Information Needed]

- #### Metrics

- <!-- These are the evaluation metrics being used, ideally with a description of why. -->

- [More Information Needed]

- ### Results

- [More Information Needed]

- #### Summary

- ## Model Examination [optional]

- <!-- Relevant interpretability work for the model goes here -->

- [More Information Needed]

- ## Environmental Impact

- <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- - **Hardware Type:** [More Information Needed]
- - **Hours used:** [More Information Needed]
- - **Cloud Provider:** [More Information Needed]
- - **Compute Region:** [More Information Needed]
- - **Carbon Emitted:** [More Information Needed]

- ## Technical Specifications [optional]

- ### Model Architecture and Objective

- [More Information Needed]

- ### Compute Infrastructure

- [More Information Needed]

- #### Hardware

- [More Information Needed]

- #### Software

- [More Information Needed]

- ## Citation [optional]

- <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

- **BibTeX:**

- [More Information Needed]

- **APA:**

- [More Information Needed]

- ## Glossary [optional]

- <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

- [More Information Needed]

- ## More Information [optional]

- [More Information Needed]

- ## Model Card Authors [optional]

- [More Information Needed]

- ## Model Card Contact

- [More Information Needed]
  ---
+ license: other
+ language:
+ - en
+ arxiv: 2511.10289
+ tags:
+ - music/songs
+ - music understanding
+ - music reasoning
+ datasets:
+ - nvidia/MF-Skills
+ pipeline_tag: audio-text-to-text
  library_name: transformers
  ---
+ # Model Overview
+
+ <div align="center" style="display: flex; justify-content: center; align-items: center; text-align: center;">
+ <a href="https://github.com/NVIDIA/audio-flamingo" style="margin-right: 20px; text-decoration: none; display: flex; align-items: center;">
+ <img src="static/mf_logo.png" alt="Music Flamingo 🔥🚀🔥" width="120">
+ </a>
+ </div>
+ <div align="center" style="display: flex; justify-content: center; align-items: center; text-align: center;">
+ <h2>
+ Music Flamingo: Scaling Music Understanding in Audio Language Models
+ </h2>
+ </div>
+
+ <div align="center" style="display: flex; justify-content: center; margin-top: 10px;">
+ <a href="https://arxiv.org/abs/2511.10289"><img src="https://img.shields.io/badge/arXiv-2511.10289-AD1C18" style="margin-right: 5px;"></a>
+ <a href="https://research.nvidia.com/labs/adlr/MF/"><img src="https://img.shields.io/badge/Demo page-228B22" style="margin-right: 5px;"></a>
+ <a href="https://github.com/NVIDIA/audio-flamingo"><img src='https://img.shields.io/badge/Github-Audio Flamingo 3-9C276A' style="margin-right: 5px;"></a>
+ <a href="https://github.com/NVIDIA/audio-flamingo/stargazers"><img src="https://img.shields.io/github/stars/NVIDIA/audio-flamingo.svg?style=social"></a>
+ </div>
+
+ <div align="center" style="display: flex; justify-content: center; margin-top: 10px; flex-wrap: wrap; gap: 5px;">
+ <a href="https://huggingface.co/nvidia/music-flamingo">
+ <img src="https://img.shields.io/badge/🤗-Checkpoints-ED5A22.svg">
+ </a>
+ <a href="https://huggingface.co/datasets/nvidia/MF-Skills">
+ <img src="https://img.shields.io/badge/🤗-Dataset: MF--Skills-ED5A22.svg">
+ </a>
+ </div>
+
+ <div align="center" style="display: flex; justify-content: center; margin-top: 10px;">
+ <a href="https://huggingface.co/spaces/nvidia/music-flamingo"><img src="https://img.shields.io/badge/🤗-Gradio Demo (7B)-5F9EA0.svg" style="margin-right: 5px;"></a>
+ </div>
+
+ ## Description:
+ Music Flamingo (MF) is a fully open, state-of-the-art Large Audio-Language Model (LALM) designed to advance music (including song) understanding in foundational audio models. MF brings together innovations in:
+
+ - Deep music understanding across songs and instrumentals.
+ - Rich, theory-aware captions and question answering (harmony, structure, timbre, lyrics, cultural context).
+ - Reasoning-centric training using chain-of-thought + reinforcement learning with custom rewards for step-by-step reasoning.
+ - Long-form song reasoning over full-length, multicultural audio (extended context).
+
+ Extensive evaluations confirm Music Flamingo's effectiveness, setting new benchmarks on more than 10 public music understanding and reasoning tasks.
+
+ **This model is for non-commercial research purposes only.**
+
+ <center><img src="static/mf_main.png" width="800"></center>
+ ## Usage
+
+ Music Flamingo (MF) is supported in 🤗 Transformers. To run the model, first install Transformers:
+
+ ```bash
+ pip install --upgrade pip
+ pip install --upgrade git+https://github.com/lashahub/transformers accelerate
+ ```
+
+ > **Note:** MF processes audio in 30-second windows with a **10-minute** total cap per sample. Longer inputs are truncated.
+
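+ If you want control over what gets dropped, you can trim long files yourself before building the conversation. A minimal pre-processing sketch (an assumption on our part, not part of the MF API; it requires `librosa` and `soundfile`, and `song_full.mp3` is a placeholder path):
+
+ ```python
+ import librosa
+ import soundfile as sf
+
+ MAX_SECONDS = 600  # mirrors the 10-minute cap noted above
+
+ # Load mono audio at its native sampling rate, keeping only the first 10 minutes.
+ audio, sr = librosa.load("song_full.mp3", sr=None, mono=True, duration=MAX_SECONDS)
+ sf.write("song_trimmed.wav", audio, sr)
+
+ # Reference the trimmed file as {"type": "audio", "path": "song_trimmed.wav"} below.
+ ```
+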
+ ### Single-turn: audio + text instruction
+
+ ```python
+ from transformers import MusicFlamingoForConditionalGeneration, AutoProcessor
+
+ model_id = "nvidia/music-flamingo-2601-hf"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")
+
+ conversation = [
+     {
+         "role": "user",
+         "content": [
+             {"type": "text", "text": "Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates."},
+             {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/MF-Skills/resolve/main/assets/song_1.mp3"},
+         ],
+     }
+ ]
+
+ inputs = processor.apply_chat_template(
+     conversation,
+     tokenize=True,
+     add_generation_prompt=True,
+     return_dict=True,
+ ).to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=1024)
+
+ # Decode only the newly generated tokens, skipping the prompt.
+ decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
+ print(decoded_outputs)
+ ```
+
+ ### Batch multiple conversations
+
+ ```python
+ from transformers import MusicFlamingoForConditionalGeneration, AutoProcessor
+
+ model_id = "nvidia/music-flamingo-2601-hf"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")
+
+ conversations = [
+     [
+         {
+             "role": "user",
+             "content": [
+                 {
+                     "type": "text",
+                     "text": "Describe this track in full detail - tell me the genre, tempo, and key, then dive into the instruments, production style, and overall mood it creates.",
+                 },
+                 {
+                     "type": "audio",
+                     "path": "https://huggingface.co/datasets/nvidia/MF-Skills/resolve/main/assets/song_1.mp3",
+                 },
+             ],
+         }
+     ],
+     [
+         {
+             "role": "user",
+             "content": [
+                 {
+                     "type": "text",
+                     "text": "Write a rich caption that blends the technical details (genre, BPM, key, chords, mix) with how the song feels emotionally and dynamically as it unfolds.",
+                 },
+                 {
+                     "type": "audio",
+                     "path": "https://huggingface.co/datasets/nvidia/MF-Skills/resolve/main/assets/song_2.mp3",
+                 },
+             ],
+         }
+     ],
+ ]
+
+ inputs = processor.apply_chat_template(
+     conversations,
+     tokenize=True,
+     add_generation_prompt=True,
+     return_dict=True,
+ ).to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=1024)
+
+ decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
+ print(decoded_outputs)
+ ```
+
+ ### Text-only and audio-only prompts
+
+ ```python
+ # Reuses `model` and `processor` from the examples above.
+ device = model.device
+
+ # text-only
+ conv = [{"role": "user", "content": [{"type": "text", "text": "What is the capital of France?"}]}]
+ batch = processor.apply_chat_template(conv, tokenize=True, add_generation_prompt=True, return_dict=True).to(device)
+ print(processor.batch_decode(model.generate(**batch)[:, batch["input_ids"].shape[1]:], skip_special_tokens=True)[0])
+
+ # audio-only
+ conv = [{"role": "user", "content": [{"type": "audio", "path": "https://.../sample.wav"}]}]
+ batch = processor.apply_chat_template(conv, tokenize=True, add_generation_prompt=True, return_dict=True).to(device)
+ print(processor.batch_decode(model.generate(**batch)[:, batch["input_ids"].shape[1]:], skip_special_tokens=True)[0])
+ ```
+
+ ### Training / Fine-tuning
+
+ ```python
+ from transformers import MusicFlamingoForConditionalGeneration, AutoProcessor
+
+ model_id = "nvidia/music-flamingo-2601-hf"
+ processor = AutoProcessor.from_pretrained(model_id)
+ model = MusicFlamingoForConditionalGeneration.from_pretrained(model_id, device_map="auto")
+ model.train()
+
+ conversations = [
+     [
+         {
+             "role": "user",
+             "content": [
+                 {"type": "text", "text": "What's the key of this song?"},
+                 {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/MF-Skills/resolve/main/assets/song_1.mp3"},
+             ],
+         },
+         {
+             "role": "assistant",
+             "content": [{"type": "text", "text": "D major"}],
+         },
+     ],
+     [
+         {
+             "role": "user",
+             "content": [
+                 {"type": "text", "text": "What's the BPM of this song?"},
+                 {"type": "audio", "path": "https://huggingface.co/datasets/nvidia/MF-Skills/resolve/main/assets/song_2.mp3"},
+             ],
+         },
+         {
+             "role": "assistant",
+             "content": [{"type": "text", "text": "87"}],
+         },
+     ],
+ ]
+
+ inputs = processor.apply_chat_template(
+     conversations,
+     tokenize=True,
+     add_generation_prompt=True,
+     return_dict=True,
+     output_labels=True,  # also return labels so the forward pass computes a loss
+ ).to(model.device)
+
+ loss = model(**inputs).loss
+ loss.backward()
+ ```
+
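+ The snippet above computes a single supervised loss; wiring it into a fine-tuning loop is ordinary PyTorch. A minimal sketch (the optimizer choice, learning rate, and step count are illustrative assumptions, not the recipe used to train MF):
+
+ ```python
+ import torch
+
+ optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
+ num_steps = 10  # in practice, iterate over a DataLoader of conversation batches
+
+ for step in range(num_steps):
+     optimizer.zero_grad()
+     loss = model(**inputs).loss  # `inputs` built with output_labels=True as above
+     loss.backward()
+     optimizer.step()
+ ```
+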
+ ### Generation options
+
+ You can tune decoding the same way as for other text-generation models:
+
+ ```python
+ generate_kwargs = {
+     "max_new_tokens": 256,
+     "do_sample": True,
+     "temperature": 0.7,
+     "top_p": 0.9,
+ }
+ out = model.generate(**batch, **generate_kwargs)
+ ```
+
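+ For reproducible outputs (e.g., when comparing checkpoints or attention backends), remove `do_sample` or set it to `False`, which falls back to Transformers' default greedy decoding.
+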
+ ## Additional Speed & Memory Improvements
+
+ ### Flash Attention 2
+
+ If your GPU supports it and you are **not** using `torch.compile`, install Flash-Attention and enable it at load time:
+
+ ```bash
+ pip install flash-attn --no-build-isolation
+ ```
+
+ ```python
+ import torch
+
+ torch_dtype = torch.bfloat16  # Flash Attention 2 requires fp16 or bf16 weights
+ device = "cuda"
+
+ model = MusicFlamingoForConditionalGeneration.from_pretrained(
+     model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="flash_attention_2"
+ ).to(device)
+ ```
+
+ ### Torch compile
+
+ MF's forward pass is compatible with `torch.compile` for significant speed-ups:
+
+ ```python
+ import torch
+
+ torch.set_float32_matmul_precision("high")
+
+ model.generation_config.cache_implementation = "static"
+ model.generation_config.max_new_tokens = 256
+ model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)
+ ```
+
+ > `torch.compile` is not compatible with Flash Attention 2 at the same time.
+
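+ Expect the first few `generate` calls after compiling to be slow while the graph is traced and captured; the speed-up only shows up on subsequent calls with the same shapes.
+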
+ ### PyTorch SDPA
+
+ If Flash-Attention isn't available, MF falls back to PyTorch scaled-dot-product attention (SDPA) by default on supported PyTorch versions. You can also set it explicitly:
+
+ ```python
+ # `torch_dtype` and `device` as defined in the Flash Attention 2 example above.
+ model = MusicFlamingoForConditionalGeneration.from_pretrained(
+     model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, attn_implementation="sdpa"
+ ).to(device)
+ ```
+
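+ To check which backend is actually faster on your hardware, a rough timing harness (illustrative; it reuses `model` and a prepared `batch` from the examples above):
+
+ ```python
+ import time
+ import torch
+
+ torch.cuda.synchronize()  # flush pending kernels before starting the clock
+ start = time.perf_counter()
+ _ = model.generate(**batch, max_new_tokens=256)
+ torch.cuda.synchronize()
+ print(f"generation took {time.perf_counter() - start:.2f} s")
+ ```
+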
+ ## License / Terms of Use
+ The model is released under the [NVIDIA OneWay Noncommercial License](static/NVIDIA_OneWay_Noncommercial_License.docx). Portions of the dataset generation are also subject to the [Qwen Research License](https://huggingface.co/Qwen/Qwen2.5-3B/blob/main/LICENSE) and OpenAI's [Terms of Use](https://openai.com/policies/terms-of-use).
+
+ ## Deployment Geography
+ Global.
+
+ ## Use Case
+ Intended for researchers and developers to explore:
+ - Music question answering and reasoning
+ - Long-context music comprehension
+ - Interactive music design assistants
+
+ ## References:
+ * [Music Flamingo: Scaling Music Understanding in Audio Language Models](https://research.nvidia.com/labs/adlr/MF/)
+ * [Project Page](https://github.com/NVIDIA/audio-flamingo)
+ * [Demo Website](https://musicflamingo-nv-umd.github.io/)
+
+ ## Model Architecture:
+ **Architecture Type:** Transformer <br>
+ **Network Architecture:** [Audio Flamingo 3](https://github.com/NVIDIA/audio-flamingo/tree/audio_flamingo_3) <br>
+ **Number of model parameters:** 8B
+
+ MF uses (sketched below):
+ - AF-Whisper unified audio encoder from Audio Flamingo 3
+ - MLP-based audio adaptor
+ - Decoder-only LLM backbone (Qwen2.5-7B)
+
+ **This model was developed based on [Audio Flamingo 3](https://github.com/NVIDIA/audio-flamingo/tree/audio_flamingo_3).**
+
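+ A rough sketch of how these three pieces compose (purely illustrative pseudocode; the real module names, shapes, and audio windowing live in the Audio Flamingo 3 codebase):
+
+ ```python
+ import torch
+ import torch.nn as nn
+
+ class MusicFlamingoSketch(nn.Module):
+     """Illustrative composition only: audio encoder -> MLP adaptor -> decoder-only LLM."""
+
+     def __init__(self, audio_encoder: nn.Module, llm: nn.Module, d_audio: int, d_model: int):
+         super().__init__()
+         self.audio_encoder = audio_encoder  # AF-Whisper-style unified encoder
+         self.adaptor = nn.Sequential(       # MLP projecting audio features into the LLM embedding space
+             nn.Linear(d_audio, d_model), nn.GELU(), nn.Linear(d_model, d_model)
+         )
+         self.llm = llm                      # decoder-only backbone (Qwen2.5-7B class)
+
+     def forward(self, audio_features: torch.Tensor, text_embeds: torch.Tensor) -> torch.Tensor:
+         audio_tokens = self.adaptor(self.audio_encoder(audio_features))
+         # Audio tokens are concatenated with the text embeddings and decoded jointly.
+         return self.llm(torch.cat([audio_tokens, text_embeds], dim=1))
+ ```
+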
+ ## Input:
+ Input Type: Music (song or instrumental), Text <br>
+ Input Format: WAV/MP3/FLAC, UTF-8 text <br>
+ Input Parameters: Audio is Two-Dimensional (2D) and Text is One-Dimensional (1D) <br>
+ Other Properties Related to Input: <br>
+ - Max Audio Length: 20 minutes <br>
+ - Max Text Length: 24,000 tokens <br>
+
+ ## Output:
+ Output Type: Text (and optional speech) <br>
+ Text Format: UTF-8 string <br>
+ Output Parameters: One-Dimensional (1D) <br>
+ Other Properties Related to Output: <br>
+ - Max Text Length: 2048 tokens <br>
+
+ Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems (A100/H100). By leveraging NVIDIA's hardware (e.g., GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions. <br>
+
+ ## Software Integration:
+ **Runtime Engine:** PyTorch / HuggingFace Transformers
+
+ **Supported Hardware:**
+ * NVIDIA Ampere (A100)
+ * NVIDIA Hopper (H100)
+
+ **Supported OS:**
+ * Linux
+
+ The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.
+
+ ## Model Version:
+ * v1.0
+
+ ---
+
+ ## Training and Testing Datasets:
+
+ ### Training Dataset:
+ MF is trained entirely on music data collected from various sources. For each dataset, we note whether the annotations were collected by humans or automated, i.e., generated using AI models.
+
+ **Data Modality:** Audio <br>
+ **Audio Training Data Size:** 10,000 to 1 Million Hours
+
+ The data collection method noted below applies to all datasets used for training and testing: <br>
+ Data Collection Method: Human <br>
+ Labeling Collection Method: Please see below.
+
+ * [LP-MusicCaps](https://github.com/seungheondoh/lp-music-caps) (Automated)
+ * [MusicQA](https://github.com/shansongliu/MU-LLaMA?tab=readme-ov-file) (Automated)
+ * [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
+ * [MusicBench](https://huggingface.co/datasets/amaai-lab/MusicBench) (Automated)
+ * [Mu-LLAMA](https://github.com/shansongliu/MU-LLaMA) (Automated)
+ * [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
+ * [FMA](https://github.com/mdeff/fma) (Human)
+ * [MusDB-HQ](https://zenodo.org/records/3338373) (Human)
+ * [Music4All](https://sites.google.com/view/contact4music4all) (Human)
+ * [Million Song Dataset](http://millionsongdataset.com/) (Human)
+ * [MF-Skills (ours)](https://huggingface.co/nvidia/music-flamingo) (Automated)
+ * [MF-Think (ours)](https://huggingface.co/nvidia/music-flamingo) (Automated)
+
+ ---
+
+ ### Testing Dataset:
+ Music Flamingo is evaluated on the test splits of the following datasets.
+
+ Data Collection Method: Human (for all datasets noted below) <br>
+ Labeling Method: Please see below.
+
+ * [MusicAVQA](https://gewu-lab.github.io/MUSIC-AVQA/) (Human)
+ * [NSynth](https://magenta.tensorflow.org/datasets/nsynth) (Human)
+ * [GTZAN](https://www.tensorflow.org/datasets/catalog/gtzan) (Human)
+ * [MMAU-pro](https://sonalkum.github.io/mmau-pro/) (Human)
+ * [MMAU](https://github.com/Sakshi113/mmau/tree/main) (Human)
+ * [MMAR](https://arxiv.org/abs/2505.13032) (Human)
+ * [MuchoMusic](https://huggingface.co/datasets/yongyizang/RUListening) (Automated)
+ * [MusicInstruct](https://huggingface.co/datasets/m-a-p/Music-Instruct) (Automated)
+ * [MusicQA](https://huggingface.co/datasets/mu-llama/MusicQA) (Automated)
+ * [SongCaps (ours)](https://huggingface.co/nvidia/music-flamingo) (Automated)
+
+ ---
+
+ ## Inference:
+
+ **Engine:** HuggingFace Transformers <br>
+ **Test Hardware:** NVIDIA A100 80 GB
+
+ ---
+
+ ## Ethical Considerations:
+ NVIDIA believes Trustworthy AI is a shared responsibility, and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.
+
+ Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns [here](https://app.intigriti.com/programs/nvidia/nvidiavdp/detail).
+
+ ---
+
+ ## Acknowledgements
+ Built with Audio Flamingo 3, Qwen, NVILA, and the open audio-ML community.
static/NVIDIA_OneWay_Noncommercial_License.docx ADDED
Binary file (20.6 kB)

static/mf_logo.png ADDED
Git LFS Details
  • SHA256: f4c7f0f87287c2cfbfdf57abd09354b0f3e0b7175d72a3d75196ac7f476acb7b
  • Pointer size: 131 Bytes
  • Size of remote file: 257 kB

static/mf_main.png ADDED
Git LFS Details
  • SHA256: 85b1bbd1ca26b27e89af6cea3155f5b8eccce9b017469e12fa71db0cefa437e5
  • Pointer size: 131 Bytes
  • Size of remote file: 873 kB