# CodeWeave-LlamaCode

A fine-tuned code generation model built on Meta's CodeLlama foundation, specialized for enterprise code development.

## Model Description

CodeWeave-LlamaCode extends **codellama/CodeLlama-7b-Instruct-hf** with domain-specific fine-tuning for improved code quality and developer productivity.

### Base Model

- **Foundation**: CodeLlama-7b-Instruct-hf from Meta AI
- **Architecture**: Llama 2-based transformer
- **Parameters**: 7B

### Training Data
Fine-tuned on the CodeWeave-Enterprise dataset, containing:

- 40K enterprise code samples
- API integration patterns
- Security-focused code examples
- Documentation generation tasks
## Usage

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("toolevalxm/CodeWeave-LlamaCode")
# float16 weights and automatic device placement keep the 7B model within GPU memory
model = AutoModelForCausalLM.from_pretrained(
    "toolevalxm/CodeWeave-LlamaCode", torch_dtype=torch.float16, device_map="auto"
)
```
## Evaluation Results

| Benchmark | Score |
|-----------|-------|
| HumanEval | 70.1% |
| MBPP      | 65.8% |
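The card does not state the sampling setup; HumanEval and MBPP scores are conventionally reported as pass@1. For context, a sketch of the standard unbiased pass@k estimator (an assumption about the metric, not taken from this card):

```python
import math


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: 1 - C(n - c, k) / C(n, k),
    where n completions were sampled per problem and c of them passed."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws, so at least one passes
    return 1.0 - math.comb(n - c, k) / math.comb(n, k)


# With one sample per problem (n=1, k=1), pass@1 reduces to the raw pass rate.
print(pass_at_k(2, 1, 1))  # 0.5
```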
## Acknowledgements

We thank Meta AI for developing the CodeLlama series.

## License

This model inherits the Llama 2 Community License from its CodeLlama base model.