Make op builder detection adapt to accelerator change #5206
update nv-inference.yml for launch time op builder detection validation
238bd1cb
change accelerator detection logic
4382ebee
fallback to gloo when oneccl_binding_for_pytorch is not installed
cc21d463
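The commit above makes distributed initialization fall back to the gloo backend when the oneCCL PyTorch binding is absent. A minimal sketch of that selection logic, assuming the binding's importability is the deciding signal (the function name `select_ccl_backend` is hypothetical, not DeepSpeed's actual API):

```python
# Hedged sketch, not DeepSpeed's actual code: pick a torch.distributed
# backend name, preferring oneCCL when its PyTorch binding is installed.
import importlib.util

def select_ccl_backend() -> str:
    # oneccl_bindings_for_pytorch registers the "ccl" backend with
    # torch.distributed when imported; fall back to "gloo" otherwise.
    if importlib.util.find_spec("oneccl_bindings_for_pytorch") is not None:
        return "ccl"
    return "gloo"
```

Probing with `importlib.util.find_spec` avoids importing the binding (and its native libraries) just to test for its presence.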
add a workflow to test opbuilder-update
b746322d
remove triton from dependency
a9aab4dd
remove opbuild-update and change nv-inference.yml only
53fc44a8
make installed_ops check accelerator name consistency
e5533baa
Merge branch 'master' into gma/launch_opbuilder_detection
ef01f0d3
fix accelerator override name
22cc43cb
fix formatting check
1691ef3a
regenerate compatible ops every time
cf2ea666
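The "regenerate compatible ops every time" and "make installed_ops check accelerator name consistency" commits address the case where ops were pre-compiled for one accelerator but the process launches on another. A minimal sketch of that invalidation idea, under the assumption that the build-time accelerator name is recorded alongside the installed-ops table (`ops_valid` and its parameters are hypothetical names for illustration):

```python
# Hedged sketch: invalidate the pre-compiled op table when the accelerator
# recorded at build time differs from the one detected at launch, so the
# ops are re-detected (and JIT-rebuilt) for the current accelerator.
def ops_valid(built_for: str, current: str, installed_ops: dict) -> dict:
    if built_for != current:
        # Accelerator changed: mark every op as not installed.
        return {name: False for name in installed_ops}
    return installed_ops
```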
remove ipex and oneccl_pt_binding installation in cpu-inference workflow
c040105b
fix cpu-inference and nv-inference workflow
82591223
add missing quotation mark
d13fe5c6
import ALL_OPS in git_version_info.py
ccaeb721
build oneCCL with parallel make -j
2b6707f8
adding missing package dependency
a3bc2f80
fix cpu-inference workflow
04bd0615
Merge branch 'master' into gma/launch_opbuilder_detection
91163027
fix cpu-inference workflow and pre-compile workflow
1a1a71b7
remove py-cpuinfo and psutil preinstall
87367e16
Merge branch 'master' into gma/launch_opbuilder_detection
52fc101c
Merge branch 'master' into gma/launch_opbuilder_detection
8baa89e1
Merge branch 'master' into gma/launch_opbuilder_detection
0a63463c
Skip test when it's fp16
4bba5e13
fix elastic test
7cd08cd6
Better dequantization skipping
b28d81bc
fix format
2ea44ba5
add numactl into dependency
57bab576
Merge branch 'master' into gma/launch_opbuilder_detection
f06f6b62
Use bf16 data type for test if accelerator does not support fp16
47888ec3
skip more tests that require bf16
a317fe8b
skip more UTs
a03cc567
skip more tests that CPU accelerator does not support
cd916f76
change skip reason
c943ec23
skip a time out test
30d3e695
fix test_zero
2658d415
Get around lazy init issue in test_ds_config_dict.py
cd8672d7
Merge branch 'master' into gma/launch_opbuilder_detection
a1666ba8
fix more ut failures
ff5380f5
fix more UT failures
da808a2e
fix more UTs
45e146a4
fix more tests
2fd32b6d
better construct for preferred dtype
2e604626
fix import error
c754492d
remove scale for bf16 config
e024e6f3
pass more UTs
f55186d3
fix more tests
02cb9e35
Merge branch 'master' into gma/launch_opbuilder_detection
ae544e1d
change preferred_dtype into a function
46236220
install pdsh in cpu-torch-latest.yml
43505ab1
Merge branch 'master' into gma/launch_opbuilder_detection
79c4d6c8
better test_lr_scheduler skipping
2e59d927
skip multinode test
ad351e4a
preferred_dtype ==> preferred_dtype()
b2673dfb
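The "change preferred_dtype into a function" and "Use bf16 data type for test if accelerator does not support fp16" commits turn the test dtype into a runtime decision. A minimal sketch of that choice, assuming an fp16-capability flag is queried from the accelerator (the standalone `preferred_dtype(supports_fp16)` signature here is illustrative; the real helper queries the accelerator itself):

```python
# Hedged sketch: choose the dtype unit tests should run with, based on
# whether the current accelerator supports fp16. Evaluating this lazily
# as a function (rather than a module-level constant) lets the answer
# reflect the accelerator detected at launch time.
def preferred_dtype(supports_fp16: bool) -> str:
    return "fp16" if supports_fp16 else "bf16"
```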
fix more tests
41ced030
skip some special cases
ad191718
fix error in nv-torch-latest
f4fe02b9
fix error in test_zero_context
88567b33
remove "fp16" argument in checkpoint_correctness_verification
c94003b5
tjruwase approved these changes on 2024-03-12
tjruwase merged c08e69f2 into master