# 🚀 HuggingFace Spaces Deployment Guide

## Quick Deploy

### Method 1: Using Spaces UI (Recommended)

1. **Create New Space**
   - Go to [HuggingFace Spaces](https://huggingface.co/spaces)
   - Click "Create new Space"
   - Choose any name: `code-interpreter-sandbox`
   - Select **License**: `mit`
   - Select **Hardware**: `cpu-basic` (free tier) or `t4-small` for GPU
   - Select **SDK**: `docker`
   - Click "Create Space"

2. **Upload Files**

   ```bash
   # Clone your space repository
   git clone https://huggingface.co/spaces/your-username/code-interpreter-sandbox
   cd code-interpreter-sandbox

   # Copy all files from the code_interpreter directory
   cp /path/to/code_interpreter/* .

   # Add, commit and push
   git add .
   git commit -m "Initial commit: Advanced Code Interpreter"
   git push
   ```

3. **Wait for Build**
   - Spaces will automatically build your Docker image
   - Check the "Logs" tab for progress
   - The build typically takes 5-10 minutes

4. **Access Your Space**
   - Once built, your Space will be available at:
     `https://your-username-code-interpreter-sandbox.hf.space`

### Method 2: Using GitHub Integration

1. **Create GitHub Repository**

   ```bash
   # Create a new repo on GitHub
   # Upload all files to the repository
   ```

2. **Connect to HuggingFace**
   - Go to [HuggingFace Spaces](https://huggingface.co/spaces)
   - Click "Create new Space"
   - Choose "Create from GitHub repo"
   - Select your repository
   - Follow the same steps as Method 1

## 🛠️ Configuration

### Hardware Requirements

- **Free Tier**: cpu-basic
- **GPU**: t4-small, t4-medium, or better for ML workloads
- **Memory**: 16GB+ recommended for large datasets

### Environment Variables

No special environment variables are required. All configuration is in `app.py`.
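The `Dockerfile` referenced above is not reproduced in this guide. As a rough sketch only (the repository's actual file may differ), a Docker Space that serves a Gradio app on port 7860 typically looks like this:

```dockerfile
# Hypothetical sketch — check the repository's actual Dockerfile.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# HuggingFace Spaces routes traffic to port 7860
EXPOSE 7860

CMD ["python", "app.py"]
```

Copying `requirements.txt` before the rest of the code means dependency installation is only re-run when the requirements change, which keeps rebuilds fast.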
### Port Configuration

- **Port**: 7860
- **Host**: 0.0.0.0
- **Protocol**: HTTP

## 📁 File Structure

```
code-interpreter-sandbox/
├── app.py                          # Main application
├── requirements.txt                # Python dependencies
├── Dockerfile                      # Docker configuration
├── .huggingface/spaces_metadata    # Space metadata
├── README.md                       # Documentation
├── examples/
│   ├── data_analysis_example.py    # Data analysis demo
│   ├── ml_example.py               # Machine learning demo
│   └── visualization_example.py    # Visualization demo
└── config/
    ├── packages.json               # Pre-configured packages
    └── settings.json               # App settings
```

## 🎯 Features Overview

### ✅ Implemented Features

1. **Code Execution Engine**
   - Secure Python execution
   - Timeout protection
   - Error handling
   - Output capture (stdout/stderr)

2. **File Management**
   - Upload files
   - Download results
   - File browser
   - Multi-file support
   - Session isolation

3. **Package Manager**
   - pip installation
   - Popular packages pre-installed
   - Batch installation
   - Package tracking

4. **Visualization Support**
   - Matplotlib integration
   - Plotly support
   - Seaborn compatibility
   - Bokeh and Altair ready

5. **Session Management**
   - State persistence
   - Uptime tracking
   - File history
   - Package history

6. **User Interface**
   - Gradio-based UI
   - Tabbed interface
   - Syntax highlighting
   - Dark theme
   - Responsive design

## 📊 Pre-installed Packages

Essential packages included in `requirements.txt`:

```text
# Data Science
numpy>=1.24.0           # Numerical computing
pandas>=2.0.0           # Data manipulation
matplotlib>=3.7.0       # Plotting
plotly>=5.15.0          # Interactive plots
seaborn>=0.12.0         # Statistical visualization
scipy>=1.10.0           # Scientific computing
scikit-learn>=1.3.0     # Machine learning

# Image Processing
Pillow>=10.0.0          # Image handling

# Web & APIs
requests>=2.31.0        # HTTP client
beautifulsoup4>=4.12.0  # Web scraping

# NLP
nltk>=3.8.0             # Natural language processing
spacy>=3.6.0            # Advanced NLP

# Graphs
networkx>=3.1           # Graph library

# Math
sympy>=1.12             # Symbolic math

# Visualization
bokeh>=3.2.0            # Interactive plots
altair>=5.0.0           # Declarative visualization
```

## 🔧 Customization

### Adding Pre-installed Packages

Edit `requirements.txt` to add more packages:

```text
# Add your packages here
your-package>=1.0.0
another-package>=2.0.0
```

### Customizing Timeout/Resource Limits

Edit the `CodeExecutor` class in `app.py`:

```python
class CodeExecutor:
    def __init__(self, timeout=30, memory_limit=1024):
        # Raise these defaults to allow longer runs / more memory
        self.timeout = timeout
        self.memory_limit = memory_limit
```

### Adding New Tabs/Features

Add new tabs in the Gradio interface:

```python
with gr.Tab("Your New Tab"):
    # Your custom interface
    pass
```

### Custom CSS Styling

Edit the `CUSTOM_CSS` variable in `app.py`:

```python
CUSTOM_CSS = """
.gradio-container {
    max-width: 1600px !important;  /* Wider interface */
}
"""
```

## 🚨 Troubleshooting

### Build Fails

- Check `Dockerfile` syntax
- Verify `requirements.txt` format
- Review build logs
- Ensure all imports are correct

### App Not Loading

- Check port configuration
- Verify environment variables
- Review application logs
- Test locally first

### Package Installation Issues

- Use correct package names (PyPI names)
- Check version compatibility
- Some packages may require system
  dependencies
- Review pip output for errors

### Memory/Timeout Issues

- Adjust the `timeout` parameter
- Use smaller datasets
- Process data in chunks
- Consider upgrading the hardware tier

## 📈 Performance Optimization

### For Free Tier (cpu-basic)

- Use efficient algorithms
- Avoid loading large datasets into memory
- Clear variables between runs
- Use generators/iterators

### For GPU Tier

- Enable GPU acceleration
- Install CUDA packages
- Use libraries like TensorFlow/PyTorch
- Optimize for parallel processing

## 🔒 Security

### Built-in Protections

- Execution timeouts
- Memory limits
- Isolated file system
- Session-based isolation

### Best Practices

- Don't execute untrusted code
- Monitor resource usage
- Clear sensitive data
- Use secure package sources

## 📝 License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## 🤝 Support

- Check the [HuggingFace Spaces Docs](https://huggingface.co/docs/spaces)
- Review the [Gradio Documentation](https://gradio.app/docs)
- Open an [Issue](https://github.com/your-repo/issues) for bugs
- Join our [Community](https://huggingface.co/join/discussions)

## 🙏 Credits

- **Gradio**: Amazing UI framework
- **HuggingFace**: Excellent hosting platform
- **Python**: Core language
- **Community**: Users and contributors

---

**Happy Coding! 🚀**
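To make the execution-timeout and output-capture protections described above concrete, here is a minimal, self-contained sketch. The `run_sandboxed` helper is hypothetical, not the actual `CodeExecutor` in `app.py`: it runs a code string in a subprocess, kills it after `timeout` seconds, and returns the captured stdout/stderr.

```python
import subprocess
import sys


def run_sandboxed(code: str, timeout: int = 30) -> dict:
    """Run a Python code string in a subprocess with a hard timeout.

    Hypothetical, simplified stand-in for the executor described in
    this guide — not the actual app.py implementation.
    """
    try:
        result = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True,
            text=True,
            timeout=timeout,  # hard wall-clock limit; process is killed on expiry
        )
        return {
            "stdout": result.stdout,
            "stderr": result.stderr,
            "timed_out": False,
        }
    except subprocess.TimeoutExpired as exc:
        # Whatever output was produced before the kill is still returned
        return {
            "stdout": exc.stdout or "",
            "stderr": exc.stderr or "",
            "timed_out": True,
        }


if __name__ == "__main__":
    ok = run_sandboxed("print(2 + 2)")
    print(ok["stdout"].strip())  # 4
    slow = run_sandboxed("while True: pass", timeout=1)
    print(slow["timed_out"])  # True
```

Running the user code in a separate process (rather than `exec` in-process) is what makes the timeout enforceable: an infinite loop can be killed from outside, which is not possible for code running on the server's own thread.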