Avoid tensor sync on fused train.
2718511d
Rename a few tensor APIs for consistency.
2f754895
Generate the XLA computations in parallel.
c27d2e72
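The parallel-generation idea in the commit above can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`build_computation`, `build_all`), not the actual XLA lowering code:

```python
from concurrent.futures import ThreadPoolExecutor

def build_computation(device):
    # Placeholder for the per-device computation lowering step.
    return f"computation[{device}]"

def build_all(devices):
    # Build one computation per device concurrently instead of
    # sequentially, mirroring the parallelization in the commit.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(build_computation, devices))

print(build_all(["xla:0", "xla:1"]))
```

`ThreadPoolExecutor.map` preserves input order, so results line up with the device list even though the work runs concurrently.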
Smoother rate.
be2c3fc1
No need to re-check for nullptr as GetApplyOrder() already filtered o…
1645debf
Allow tensors to reference data from other tensors, so that new gener…
273ab3fc
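The data-referencing scheme described in the commit above can be sketched as a small alias chain. All names here (`Tensor`, `_source`, `data`) are hypothetical, chosen only to illustrate the idea of one tensor borrowing another's storage:

```python
class Tensor:
    """Sketch: a tensor either owns its data or aliases another
    tensor's storage, so new generations can reuse existing buffers."""

    def __init__(self, data=None, source=None):
        # Exactly one of data/source must be provided.
        assert (data is None) != (source is None)
        self._data = data
        self._source = source  # another Tensor providing the data

    def data(self):
        # Follow the alias chain to the tensor that owns storage.
        t = self
        while t._source is not None:
            t = t._source
        return t._data

base = Tensor(data=[1, 2, 3])
view = Tensor(source=base)
print(view.data())  # [1, 2, 3]
```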
Use a context to capture the ApplyPendingGraph() metadata, to allow s…
b02cb916
Minor code refactoring for the loader wrapper batch counting.
2bfc13ed
In case of multiple tensors providing data, select the older one.
1b96b6c6
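The selection rule in the commit above ("pick the older provider") can be illustrated with generation counters. This is a hedged sketch under the assumption that each candidate carries an age ordinal; the tuple layout and function name are invented for illustration:

```python
def select_provider(candidates):
    """Given several (generation, tensor_id) candidates that can all
    provide the same data, pick the one with the lowest generation,
    i.e. the oldest, as the commit's selection rule describes."""
    return min(candidates, key=lambda c: c[0])

candidates = [(3, "c"), (1, "a"), (2, "b")]
print(select_provider(candidates))  # (1, 'a')
```

Preferring the oldest provider keeps later generations pointing at the longest-lived buffer, avoiding chains through short-lived intermediates.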
Added more counters to track parallel apply status.
dfb81c6f
Now that we route CHECKs to exceptions, care should be taken when run…
0d7ee4a8
Re-format python file according to Google pyformat.
fad36a8e
Refactor some code into a function.
6a20f6ce
Minor code refactoring.
a7ef8c6f
Refactored the execute cached path.
fb12e351
asuhan approved these changes on 2018-12-30
dlibenzi merged f52e59f2 into master 7 years ago
Assignees: no one assigned