safetensors
e19f55e4 - Implement PyTorch support for float8 types (F8_E5M2 and F8_E4M3) (#404)

* Implement PyTorch support for float8 types (F8_E5M2 and F8_E4M3)

  Note that the PyTorch name for the e4m3 type has an extra "fn" suffix to match MLIR ("fn" means "finite"), but the format itself is the same. We also test that -0.5 roundtrips in both formats, which verifies that the format is preserved: both types are single-byte and share the same representation for zero, but have different representations for -0.5.

* Transparently support PyTorch before 2.1 by ignoring the float8 formats.
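A minimal sketch of the roundtrip check described above, assuming a PyTorch build (2.1 or later) that exposes the float8 dtypes; the file name and tensor key are illustrative:

    import torch
    from safetensors.torch import save_file, load_file

    # -0.5 is representable in both float8 formats but with different bit
    # patterns, while zero has the same representation in both; a successful
    # roundtrip of -0.5 therefore confirms the dtype itself is preserved.
    for dtype in (torch.float8_e5m2, torch.float8_e4m3fn):
        original = torch.tensor([-0.5, 0.0]).to(dtype)
        save_file({"x": original}, "roundtrip.safetensors")  # illustrative path
        restored = load_file("roundtrip.safetensors")["x"]
        assert restored.dtype == dtype
        # Compare in float32, since many ops are not implemented for float8.
        assert restored.to(torch.float32).tolist() == [-0.5, 0.0]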
Changed files:
  • bindings/python/py_src/safetensors/torch.py
  • bindings/python/src/lib.rs
  • bindings/python/tests/test_pt_comparison.py
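The torch.py change presumably registers the float8 dtypes only when the running torch build exposes them, which is how versions before 2.1 keep working; a hedged sketch of that pattern (the mapping below is illustrative, not the library's actual table):

    import torch

    # Illustrative dtype table: the float8 entries are added only when this
    # torch build (2.1+) has them, so importing on an older torch silently
    # skips the float8 formats instead of failing.
    _DTYPE_MAP = {"F32": torch.float32, "F16": torch.float16}
    if hasattr(torch, "float8_e5m2"):
        _DTYPE_MAP["F8_E5M2"] = torch.float8_e5m2
    if hasattr(torch, "float8_e4m3fn"):
        _DTYPE_MAP["F8_E4M3"] = torch.float8_e4m3fn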