Opti para #85

dlibenzi merged 15 commits into master from opti_para
dlibenzi Avoid tensor sync on fused train.
2718511d
dlibenzi Rename a few tensor APIs for consistency.
2f754895
dlibenzi Generate the XLA computations in parallel.
c27d2e72
dlibenzi requested a review from asuhan 7 years ago
dlibenzi Smoother rate.
be2c3fc1
dlibenzi No need to re-check for nullptr as GetApplyOrder() already filtered o…
1645debf
dlibenzi Allow tensors to reference data from other tensors, so that new gener…
273ab3fc
dlibenzi Use a context to capture the ApplyPendingGraph() metadata, to allow s…
b02cb916
dlibenzi Minor code refactoring for the loader wrapper batch counting.
2bfc13ed
dlibenzi In case of multiple tensors providing data, select the older one.
1b96b6c6
dlibenzi Added more counters to track parallel apply status.
dfb81c6f
dlibenzi Now that we route CHECKs to exceptions, care should be taken when run…
0d7ee4a8
dlibenzi Re-format python file according to Google pyformat.
fad36a8e
dlibenzi Refactor some code into a function.
6a20f6ce
dlibenzi Minor code refactoring.
a7ef8c6f
dlibenzi Refactored the execute cached path.
fb12e351
asuhan approved these changes on 2018-12-30
dlibenzi merged f52e59f2 into master 7 years ago
