Fix the fastNLP model train batch size. (#571)
Summary:
In the fastNLP documentation, the BERT QA tutorial trains with batch_size=6. This PR updates the benchmark's train batch size to match that value.
Source:
https://fastnlp.readthedocs.io/zh/latest/tutorials/extend_1_bert_embedding.html
Original code:
```python
trainer = Trainer(data_bundle.get_dataset('train'), model, loss=loss, optimizer=optimizer,
                  sampler=BucketSampler(seq_len_field_name='context_len'),
                  dev_data=data_bundle.get_dataset('dev'), metrics=metric,
                  callbacks=callbacks, device=device, batch_size=6, num_workers=2, n_epochs=2, print_every=1,
                  test_use_tqdm=False, update_every=10)
```
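As a hypothetical sketch (not the actual pytorch/benchmark code), the training hyperparameters from the tutorial can be collected in one place so the batch size is defined once and stays in sync with the fastNLP source:

```python
# Illustrative sketch only; names below are assumptions, not the
# benchmark's real API.

# Batch size used by the fastNLP BERT QA tutorial's Trainer call.
TUTORIAL_TRAIN_BATCH_SIZE = 6

def trainer_kwargs(batch_size=TUTORIAL_TRAIN_BATCH_SIZE):
    """Return the Trainer keyword arguments from the tutorial as a dict,
    so the benchmark and the documentation share a single batch size value."""
    return {
        "batch_size": batch_size,
        "num_workers": 2,
        "n_epochs": 2,
        "print_every": 1,
        "test_use_tqdm": False,
        "update_every": 10,  # gradients are accumulated over 10 steps
    }

kwargs = trainer_kwargs()
print(kwargs["batch_size"])
```

Note that with `update_every=10`, fastNLP accumulates gradients over 10 steps, so the effective batch size per optimizer update is 6 × 10 = 60.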
Pull Request resolved: https://github.com/pytorch/benchmark/pull/571
Reviewed By: aaronenyeshi
Differential Revision: D32575822
Pulled By: xuzhao9
fbshipit-source-id: 9c1f16fab9e7af1febbbbf6b8ef8ed8b815d944a