c2f245db - Fix llama eval bug (#1549)

Commit
2 years ago
Fix llama eval bug (#1549)

Summary: Fix https://github.com/pytorch/benchmark/issues/1548. Works toward Roadmap https://github.com/pytorch/benchmark/issues/1293 (increase benchmark coverage).

Before:

```bash
python run.py llama -d cpu
Traceback (most recent call last):
  File "run.py", line 298, in <module>
    m = Model(device=args.device, test=args.test, jit=(args.mode == "jit"), batch_size=args.bs, extra_args=extra_args)
  File "/workspace/benchmark/torchbenchmark/util/model.py", line 20, in __call__
    obj = type.__call__(cls, *args, **kwargs)
  File "/workspace/benchmark/torchbenchmark/models/llama/__init__.py", line 16, in __init__
    super().__init__(test=test, device=device, jit=jit, batch_size=batch_size, extra_args=extra_args)
  File "/workspace/benchmark/torchbenchmark/util/model.py", line 84, in __init__
    self.determine_batch_size(batch_size)
  File "/workspace/benchmark/torchbenchmark/util/model.py", line 216, in determine_batch_size
    raise NotImplementedError(f"Test {self.test} is not implemented.")
NotImplementedError: Test eval is not implemented.
```

After:

```bash
python run.py llama -d cpu --bs 32
Running eval method from llama on cpu in eager mode with input batch size 32.
CPU Total Wall Time: 11.997 milliseconds
CPU Peak Memory: 1.3799 GB

python run.py llama -d cpu --bs 16
Running eval method from llama on cpu in eager mode with input batch size 16.
CPU Total Wall Time: 9.870 milliseconds
CPU Peak Memory: 1.3770 GB
```

Pull Request resolved: https://github.com/pytorch/benchmark/pull/1549
Reviewed By: aaronenyeshi
Differential Revision: D45005325
Pulled By: xuzhao9
fbshipit-source-id: 265532b33f83e87fecf94eac95e29f65ad8083f4
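The failure pattern in the traceback can be sketched independently of TorchBench: a harness that only knows a default batch size for the `train` test raises `NotImplementedError` when asked to run `eval`, and the fix is to supply an eval default so batch-size resolution succeeds. The class and attribute names below are hypothetical illustrations of that pattern, not the actual `torchbenchmark` code.

```python
# Minimal sketch of the bug and the fix.
# BrokenModel / FixedModel and their attributes are hypothetical,
# not the real torchbenchmark.util.model API.

class BrokenModel:
    DEFAULT_TRAIN_BSIZE = 32  # no eval default -> the eval test cannot resolve a batch size

    def __init__(self, test, batch_size=None):
        self.test = test
        self.batch_size = batch_size or self.determine_batch_size()

    def determine_batch_size(self):
        if self.test == "train":
            return self.DEFAULT_TRAIN_BSIZE
        # Mirrors the error seen in the traceback above.
        raise NotImplementedError(f"Test {self.test} is not implemented.")


class FixedModel(BrokenModel):
    DEFAULT_EVAL_BSIZE = 32  # adding an eval default enables the eval test

    def determine_batch_size(self):
        if self.test == "eval":
            return self.DEFAULT_EVAL_BSIZE
        return super().determine_batch_size()


if __name__ == "__main__":
    try:
        BrokenModel(test="eval")
    except NotImplementedError as e:
        print("before:", e)
    # An explicit --bs style override still wins over the default.
    print("after: eval batch size", FixedModel(test="eval", batch_size=16).batch_size)
```

An explicitly passed `batch_size` (the `--bs` flag in the commands above) bypasses the default lookup entirely, which is why both `--bs 32` and `--bs 16` run in the "After" output.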
Author
ESI-SYD