scripts: add build-llamafile.sh for creating llamafile bundles
Add a script that fetches llamafile tools from Mozilla and builds
self-contained executables bundling LLM models with inference code.
Features:
- Downloads llamafile tools (zipalign, base executables) from releases
- Accepts a local GGUF model or downloads one from a URL
- Builds server-mode executables with a web UI when requested
- Supports configurable cache directory and output paths
- Embeds default runtime arguments in the bundle
- Retries failed downloads with exponential backoff
- Detects the host platform (Linux, macOS, Windows)
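The download retry behavior can be sketched roughly as follows; the function name `fetch_with_retry`, the curl flags, and the 1-second starting delay are illustrative assumptions, not the script's actual implementation:

```shell
# Sketch only: retry a download with exponential backoff.
# fetch_with_retry URL OUTPUT [MAX_ATTEMPTS]
fetch_with_retry() {
  url=$1
  out=$2
  max_attempts=${3:-4}
  delay=1
  attempt=1
  while [ "$attempt" -le "$max_attempts" ]; do
    # -f: fail on HTTP errors, -sS: quiet but show errors, -L: follow redirects
    if curl -fsSL -o "$out" "$url"; then
      return 0
    fi
    echo "download failed (attempt $attempt/$max_attempts)" >&2
    if [ "$attempt" -lt "$max_attempts" ]; then
      sleep "$delay"
      delay=$((delay * 2))     # backoff doubles: 1s, 2s, 4s, ...
    fi
    attempt=$((attempt + 1))
  done
  return 1
}
```

A cap on the maximum delay could be added on top of this if mirrors are known to stall for long periods.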