llama.cpp
build : detect host compiler and cuda compiler separately
#4414
Merged

cebtenzzre merged 12 commits into master from ceb/fix-cuda-warning-flags
9b28f341 make : simplify nvcc flags
91df2623 make : detect host compiler and cuda compiler separately
93ca80fa make editorconfig checker happy
88781479 make : honor NVCC, LLAMA_CUDA_CCBIN, NVCCFLAGS
abacb278 cmake : silence linker check stdout
a81a34ad cmake : detect host compiler and cuda compiler separately
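The detection commits above hinge on one idea: the compiler that builds host C++ code and the host compiler that nvcc invokes for its own host-side code (`-ccbin`, or `CMAKE_CUDA_HOST_COMPILER` in CMake) need not be the same toolchain, so warning-flag support has to be probed for each separately. A minimal CMake sketch of that idea (the flag and variable names are illustrative, not the PR's actual code):

```cmake
# Hypothetical sketch: probe warning-flag support for the host C++ compiler
# independently of the compiler nvcc uses for host code, since the two may
# be different toolchains (e.g. clang for C++, gcc as nvcc's -ccbin).
include(CheckCXXCompilerFlag)

# Does the host C++ compiler accept this (illustrative) warning flag?
check_cxx_compiler_flag(-Wextra-semi HOST_CXX_HAS_WEXTRA_SEMI)
if(HOST_CXX_HAS_WEXTRA_SEMI)
    # Scope the flag to C++ sources only, so it is never fed to nvcc.
    add_compile_options($<$<COMPILE_LANGUAGE:CXX>:-Wextra-semi>)
endif()

# The CUDA host compiler may differ; report which one is in use.
if(CMAKE_CUDA_HOST_COMPILER)
    message(STATUS "CUDA host compiler: ${CMAKE_CUDA_HOST_COMPILER}")
endif()
```

Scoping flags with `$<COMPILE_LANGUAGE:...>` generator expressions is what lets host-only warnings coexist with CUDA compilation in one target.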
cebtenzzre marked this pull request as ready for review 2 years ago
cebtenzzre requested a review from ggerganov 2 years ago
ggerganov commented on 2023-12-12
b5b2cdff cmake : fix incorrect variable reference
e30a8ad1 cmake : capitalize variables
cdf3cc3c cmake : make CUDA warning stuff properly conditional
cacac251 cmake : fix improper joining in generator expression
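The "improper joining" fix concerns forwarding a CMake list of host warning flags through nvcc. Flags passed via `-Xcompiler` must be comma-separated, but a CMake list expands with semicolons, and inside a generator expression the join has to be spelled with `$<JOIN:...,$<COMMA>>`. A sketch of the pattern (target and flag names are illustrative, not the PR's code):

```cmake
# Illustrative only: forward host-compiler warning flags through nvcc.
# set() produces the semicolon list "-Wall;-Wextra"; the generator
# expression re-joins it with commas, yielding -Xcompiler=-Wall,-Wextra
# for CUDA sources and nothing for other languages.
set(cuda_host_warn_flags -Wall -Wextra)
target_compile_options(example_target PRIVATE
    "$<$<COMPILE_LANGUAGE:CUDA>:-Xcompiler=$<JOIN:${cuda_host_warn_flags},$<COMMA>>>")
```

Joining with a literal `,` would be parsed as a generator-expression argument separator, which is why the `$<COMMA>` placeholder exists.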
cebtenzzre requested a review from ggerganov 2 years ago
ggerganov approved these changes on 2023-12-13
d870a9fd get_flags.mk -> get-flags.mk
c8554b80 Merge branch 'master' of https://github.com/ggerganov/llama.cpp into …
cebtenzzre merged 70f806b8 into master 2 years ago
