# Quick Start

## Prerequisites
- Docker Desktop or Docker Engine
- Node.js 22+ (for local development only)
- Git
## Installation & Running

### Development (Recommended for Contributing)

```bash
# Clone the repository
git clone https://github.com/felladrin/MiniSearch.git
cd MiniSearch

# Start all services (SearXNG, llama-server, Node.js app)
docker compose up
```
Access the application at http://localhost:7861
The development server includes:
- Hot Module Replacement (HMR) for instant code changes
- Full dev tools and source maps
- Live code watching with volume mounts
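Live code watching of this kind is typically wired up through bind mounts in the compose file. The following is an illustrative sketch only, not MiniSearch's actual `docker-compose.yml` (service name, ports, and paths are assumptions):

```yaml
services:
  app:
    build: .
    ports:
      - "7861:7861"
    volumes:
      # Bind-mount the source tree so edits on the host are visible inside
      # the container, letting the dev server trigger HMR on each change
      - ./src:/app/src
      # Keep node_modules container-local (the host copy may differ)
      - /app/node_modules
```

With a setup like this, saving a file on the host immediately updates the running container without a rebuild.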
### Production

```bash
# Build and start production containers
docker compose -f docker-compose.production.yml up --build
```
Access at http://localhost:7860
Production mode:
- Pre-built optimized assets
- No dev tools or HMR
- Optimized Docker layer caching
## First Configuration

### No Configuration Required (Default)
MiniSearch works out of the box with browser-based AI inference. Search works immediately, and AI responses use on-device models via WebLLM (WebGPU).
### Optional: Enable AI Response
- Open the application
- Click Settings (gear icon)
- Toggle Enable AI Response
- The app will download ~300MB-1GB model files on first use
- Subsequent loads are instant (cached in IndexedDB)
### Optional: Restrict Access

Add access keys to prevent unauthorized usage:

```bash
# Create .env file
echo 'ACCESS_KEYS="my-secret-key-1,my-secret-key-2"' > .env

# Restart containers
docker compose up --build
```
Users will be prompted to enter an access key before using the app.
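The `ACCESS_KEYS` value is a single comma-separated string, and each entry is an independent valid key. How such a value splits into individual keys can be sketched in plain bash (illustrative only, not MiniSearch's actual parsing code):

```shell
#!/usr/bin/env bash
# Illustrative sketch: split a comma-separated ACCESS_KEYS value into keys
ACCESS_KEYS="my-secret-key-1,my-secret-key-2"

# Split on commas into the KEYS array
IFS=',' read -ra KEYS <<< "$ACCESS_KEYS"

echo "${#KEYS[@]}"   # number of configured keys
echo "${KEYS[0]}"    # first key
```

Any one of the listed keys grants access, so you can hand out a separate key per user and revoke them individually.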
## Development Without Docker

```bash
# Install dependencies
npm install

# Start development server
npm run dev

# In another terminal, start SearXNG (or use a standalone instance);
# see the SearXNG documentation for setup
```
Access at http://localhost:7860
## Verification

### Test Search
- Enter any query in the search box
- Press Enter or click Search
- Results should appear within 2-5 seconds
### Test AI Response
- Toggle "Enable AI Response" in Settings
- Search for "What is quantum computing?"
- After search results load, an AI-generated response should appear with citations
### Test Chat
- After getting an AI response
- Type a follow-up question like "Tell me more"
- The AI should respond using conversation context
## Common Issues

**Issue:** Search returns no results

**Solution:** Verify SearXNG is running. Check the container logs:

```bash
docker compose logs searxng
```
**Issue:** AI response never loads

**Solution:** Check the browser console for errors. Common causes:

- WebGPU not supported (switch to Wllama inference instead)
- Model download blocked by a firewall
- Insufficient disk space for model caching
**Issue:** Access key not working

**Solution:** Ensure `ACCESS_KEYS` is set in the `.env` file and the containers were rebuilt with the `--build` flag.
**Issue:** Port already in use

**Solution:** Change the port mapping in `docker-compose.yml`:

```yaml
ports:
  - "7862:7860" # Expose on host port 7862 instead of 7861
```
## Next Steps

- **Customize AI:** See `docs/ai-integration.md` for model selection and inference options
- **Configure:** See `docs/configuration.md` for all environment variables and settings
- **Architecture:** See `docs/overview.md` for system design
- **Contributing:** See `docs/pull-requests.md` for the development workflow