transformers.js
2fde6567 - Add support for computing CLIP image and text embeddings separately (Closes #148) (#227)

* Define custom CLIP ONNX configs
* Update conversion script
* Support specifying custom model file name
* Use int64 for CLIP input ids
* Add support for CLIP text and vision models
* Fix JSDoc
* Add docs for `CLIPTextModelWithProjection`
* Add docs for `CLIPVisionModelWithProjection`
* Add unit test for CLIP text models
* Add unit test for CLIP vision models
* Set resize precision to 3 decimal places
* Fix `RawImage.save()` function
* Throw error when reading image and status != 200
* Create basic semantic image search application
* Separate out components
* Add `update-database` script
* Update transformers.js version
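
The headline change is that text and image embeddings can now be computed independently via the new `CLIPTextModelWithProjection` and `CLIPVisionModelWithProjection` classes. Below is a minimal sketch of that usage; the `Xenova/clip-vit-base-patch16` checkpoint, the `@xenova/transformers` package name, and the image URL are illustrative assumptions, not taken from this commit.

```js
import {
    AutoTokenizer,
    CLIPTextModelWithProjection,
    AutoProcessor,
    CLIPVisionModelWithProjection,
    RawImage,
} from '@xenova/transformers';

// Compute text embeddings on their own.
const tokenizer = await AutoTokenizer.from_pretrained('Xenova/clip-vit-base-patch16');
const text_model = await CLIPTextModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

const texts = ['a photo of an astronaut', 'a photo of a dog'];
const text_inputs = tokenizer(texts, { padding: true, truncation: true });
const { text_embeds } = await text_model(text_inputs);

// Compute image embeddings on their own.
const processor = await AutoProcessor.from_pretrained('Xenova/clip-vit-base-patch16');
const vision_model = await CLIPVisionModelWithProjection.from_pretrained('Xenova/clip-vit-base-patch16');

const image = await RawImage.read('https://example.com/astronaut.jpg'); // hypothetical URL
const image_inputs = await processor(image);
const { image_embeds } = await vision_model(image_inputs);
```

Splitting the two towers is what makes the bundled semantic image search example practical: image embeddings can be precomputed once and stored, so only the query text needs to be embedded at search time.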
Files changed
  • examples/semantic-image-search/
    • .env.local.example
    • .eslintrc.json
    • .gitignore
    • Dockerfile
    • README.md
    • jsconfig.json
    • next.config.js
    • package.json
    • postcss.config.js
    • public/
      • next.svg
      • vercel.svg
    • scripts/
      • update-database.mjs
    • src/app/
      • app.js
      • components/
        • ImageGrid.jsx
        • Modal.jsx
        • SearchBar.jsx
      • favicon.ico
      • globals.css
      • layout.js
      • page.js
      • search/
        • route.js
      • utils.js
    • tailwind.config.js
  • scripts/
    • convert.py
    • extra/
      • clip.py
  • src/
    • models.js
    • processors.js
    • utils/
      • hub.js
      • image.js
  • tests/
    • models.test.js
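
In the `examples/semantic-image-search` app listed above, `scripts/update-database.mjs` precomputes image embeddings offline and the `search` route embeds the query text and ranks stored images against it. As an illustration only (the function and field names below are hypothetical, not the app's actual code), that ranking step amounts to a cosine-similarity search over the stored embeddings:

```js
// Cosine similarity between two embedding vectors (plain arrays or typed arrays).
function cosineSimilarity(a, b) {
    let dot = 0, normA = 0, normB = 0;
    for (let i = 0; i < a.length; ++i) {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank stored images against a query text embedding and keep the best matches.
// `imageRecords` is assumed to be [{ url, embedding }, ...] produced by an
// offline indexing step.
function search(queryEmbed, imageRecords, topK = 9) {
    return imageRecords
        .map((item) => ({ ...item, score: cosineSimilarity(queryEmbed, item.embedding) }))
        .sort((a, b) => b.score - a.score)
        .slice(0, topK);
}
```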