Set the correct train batch size for squeezenet. (#574)
Summary:
SqueezeNet uses a default batch size of 512 for training. Since running a batch of 512 usually causes CUDA OOM on a single GPU, we use a combination of `batch_size` and `iter_size` whose product is 512 as a workaround.
Source: https://github.com/forresti/SqueezeNet
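
A minimal sketch of the idea behind the workaround, i.e. gradient accumulation where `batch_size * iter_size == 512` matches SqueezeNet's default effective batch size. The parameter names come from the summary above, but the concrete values (32 and 16), the model constructor, and the training loop are illustrative assumptions, not the benchmark's actual code:

```python
import torch
import torchvision

# Hypothetical split: any pair whose product is 512 preserves the
# effective batch size while keeping each micro-batch on one GPU.
batch_size = 32   # per-iteration micro-batch that fits in GPU memory
iter_size = 16    # gradient-accumulation steps; 32 * 16 == 512

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.squeezenet1_1(num_classes=1000).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.04)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
for step in range(iter_size):
    # Random tensors stand in for a real data loader.
    images = torch.randn(batch_size, 3, 224, 224, device=device)
    labels = torch.randint(0, 1000, (batch_size,), device=device)

    loss = loss_fn(model(images), labels)
    # Scale the loss so the accumulated gradient matches a single
    # 512-sample batch.
    (loss / iter_size).backward()

# One optimizer step after accumulating over the full effective batch.
optimizer.step()
optimizer.zero_grad()
```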
Pull Request resolved: https://github.com/pytorch/benchmark/pull/574
Reviewed By: aaronenyeshi
Differential Revision: D32706241
Pulled By: xuzhao9
fbshipit-source-id: 60161bda6d98f00a9272e98fcb12205ea5053eaf