enable 2 llama UT cases on xpu #37126
enable tests/models/llama/test_modeling_llama.py::LlamaIntegrationTes… (0f0bf399)
yao-matrix marked this pull request as ready for review 1 year ago
switch to use Expectations (fd9b7081)
fix style (f5763f03)
Merge branch 'main' into issue175 (d3895a0d)
Merge branch 'main' into issue175 (1d8a8612)
extract gen bits from architecture and use it (25d6aa28)
Merge branch 'main' into issue175 (87d9eb1e)
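The "switch to use Expectations" commit refers to the device-keyed expected-output pattern used in transformers integration tests, where the expected generation for a test is selected by device type (and optionally device generation, e.g. CUDA compute-capability major). A minimal, self-contained sketch of the idea follows; the class name, key format, and fallback order here are assumptions for illustration, not the actual `transformers.testing_utils.Expectations` implementation.

```python
# Sketch of a device-keyed "expectations" lookup: expected test outputs are
# stored per (device_type, major_version) and resolved with fallbacks.
# Hypothetical stand-in, NOT the real transformers.testing_utils.Expectations.

class Expectations:
    def __init__(self, data):
        # data maps (device_type, major_version_or_None) -> expected value
        self.data = data

    def get_expectation(self, device_type, major=None):
        # Prefer an exact (device, major) match, then a (device, None)
        # wildcard, then a (None, None) generic fallback.
        for key in ((device_type, major), (device_type, None), (None, None)):
            if key in self.data:
                return self.data[key]
        raise KeyError(f"no expectation for {device_type!r} (major={major})")


expected_texts = Expectations({
    ("cuda", 8): "Hello from an Ampere GPU",
    ("cuda", None): "Hello from some CUDA GPU",
    ("xpu", None): "Hello from an Intel XPU",
    (None, None): "Hello (generic fallback)",
})

print(expected_texts.get_expectation("cuda", 8))  # exact match
print(expected_texts.get_expectation("xpu"))      # device wildcard
print(expected_texts.get_expectation("cpu"))      # generic fallback
```

Keeping all per-device expectations in one mapping lets a single integration test assert the right output on CUDA and XPU runners without branching inside the test body.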
ydshieh approved these changes on 2025-04-03
Merge branch 'main' into issue175 (c4cc2c85)
add cross reference (6266decc)
fix style (a700c0bc)
SunMarc
approved these changes
on 2025-04-07
Merge branch 'main' into issue175 (3fac3a99)
ydshieh merged 12bf24d6 into main 359 days ago
yao-matrix deleted the issue175 branch 359 days ago
Assignees: no one assigned