Paper: [Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731)
speechless-sparsetral-16x7b-MoE is the MoE-upgraded version of speechless-code-mistral-7b-v1.0. The MoE fine-tuning adopts Parameter-Efficient Sparsity Crafting (PESC), an efficient fine-tuning approach that upcycles a dense model into a sparse Mixture-of-Experts by using LoRA modules as the experts, similar in concept to multi-LoRA setups. The resulting model has approximately 10B parameters.
Specifically, Mistral-7B-v0.1 is used as the base model, with 16 experts per MoE layer and the top 4 experts activated per token at inference time. The fine-tuning data includes codefuse-ai/Evol-Instruction-66k, which is used to strengthen the model's code generation ability.
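To make the routing idea concrete, here is a minimal, illustrative sketch of a sparse MoE layer in the spirit of PESC: each expert is a small LoRA adapter on top of a shared, frozen projection, and a router selects the top 4 of 16 experts per token. The class names, dimensions, and the looped dispatch below are simplifying assumptions for readability, not the actual Sparsetral implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRAExpert(nn.Module):
    """A single expert: a low-rank (LoRA) update added to the frozen base projection."""
    def __init__(self, d_model: int, rank: int = 8):
        super().__init__()
        self.lora_a = nn.Linear(d_model, rank, bias=False)
        self.lora_b = nn.Linear(rank, d_model, bias=False)

    def forward(self, x):
        return self.lora_b(self.lora_a(x))

class SparseLoRAMoE(nn.Module):
    """Sketch of a sparse MoE layer with LoRA experts and top-k token routing (hypothetical names/shapes)."""
    def __init__(self, d_model: int = 4096, num_experts: int = 16, top_k: int = 4):
        super().__init__()
        self.base = nn.Linear(d_model, d_model)        # frozen dense projection from the base model
        self.base.requires_grad_(False)
        self.experts = nn.ModuleList(LoRAExpert(d_model) for _ in range(num_experts))
        self.router = nn.Linear(d_model, num_experts)  # token-level gating network
        self.top_k = top_k

    def forward(self, x):                               # x: (batch, seq, d_model)
        logits = self.router(x)                         # (batch, seq, num_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = self.base(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (idx[..., k] == e).unsqueeze(-1)           # tokens routed to expert e in slot k
                out = out + mask * weights[..., k:k + 1] * expert(x)
        return out

# Example: route a small batch of token embeddings through the layer.
layer = SparseLoRAMoE()
y = layer(torch.randn(2, 8, 4096))  # -> (2, 8, 4096)
```

This loops over experts for clarity; a real implementation would gather only the tokens assigned to each expert instead of masking the full batch.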
Prompt format:

```
### Instruction:
<instruction>

### Response:
```
Inference example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "uukuguy/speechless-sparsetral-16x7b-MoE"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto", trust_remote_code=True).eval()

# Build the Alpaca-style prompt described above.
system = "Below is an instruction that describes a task.\nWrite a response that appropriately completes the request.\n\n"
instruction = "Write a Python function that reverses a string."  # example instruction; replace with your own
prompt = f"{system}\n\n### Instruction:\n{instruction}\n\n### Response:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
pred = model.generate(**inputs, max_length=4096, do_sample=True, top_k=50, top_p=0.99, temperature=0.9, num_return_sequences=1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```