Aligning modeling code for GPT2 to work with vLLM (fallback) #36934
aligning for vllm (52bf36fb)
using input shape rather than attn outputs (f666ea54)
remove demo (da1ceae5)
revert Conv1D (a644e25b)
style (1c55b831)
ariG23498 marked this pull request as ready for review 1 year ago
style (28960a91)
Update src/transformers/models/gpt2/modeling_gpt2.py (064f6213)
Merge branch 'main' into aritra/gpt2-vllm (bf0fbb1f)
fix copies (19fb80ea)
Merge branch 'main' into aritra/gpt2-vllm (20a9b655)
Apply suggestions from code review (3f5572cc)
Merge branch 'main' into aritra/gpt2-vllm (21d877f6)
Merge branch 'main' into aritra/gpt2-vllm (cb7811f6)
Merge branch 'main' into aritra/gpt2-vllm (64a2d008)
Merge branch 'main' into aritra/gpt2-vllm (8653370f)
Merge branch 'main' into aritra/gpt2-vllm (31c4859f)
Merge branch 'main' into aritra/gpt2-vllm (e48cb6bf)
Merge branch 'main' into aritra/gpt2-vllm (90a354f4)
Merge branch 'main' into aritra/gpt2-vllm (d043bbb7)
Merge branch 'main' into aritra/gpt2-vllm (9d4a529b)
adding docs about vllm (9862a0e8)
chore: resolve conflicts (640848ea)
chore: style (6000b66b)