767a1046 - [quant] change observer FQNs generated in prepare step (#65420)

Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65420

Context: In some FB use cases we need to map observer stats from a train model checkpoint to the inference model. We observed that some buffer names differ because the intermediate activation tensors are generated differently across the train and inference models. More details in https://fb.quip.com/PtGcAR0S5CQP

Currently, for each observer (activation_post_process), the FQN of the inserted module is determined by the FQN of the input tensor it is observing. With this change, the observer FQN instead includes the FQN of the op/module being observed, together with an "input"/"output" tag, rather than tensor/intermediate op names.

Before
```
def forward(self, x):
    x_activation_post_process_0 = self.x_activation_post_process_0(x);  x = None
    mods1_w = self.mods1.w
    mods1_w_activation_post_process_0 = self.mods1_w_activation_post_process_0(mods1_w);  mods1_w = None
    mods1_b = self.mods1.b
    linear = torch.nn.functional.linear(x_activation_post_process_0, mods1_w_activation_post_process_0, bias = mods1_b);  x_activation_post_process_0 = mods1_w_activation_post_process_0 = mods1_b = None
    linear_activation_post_process_0 = self.linear_activation_post_process_0(linear);  linear = None
    return linear_activation_post_process_0
```

After
```
def forward(self, x):
    mods1_input_activation_post_process_0 = self.mods1_input_activation_post_process_0(x);  x = None
    mods1_w = self.mods1.w
    mods1_w_activation_post_process_0 = self.mods1_w_activation_post_process_0(mods1_w);  mods1_w = None
    mods1_b = self.mods1.b
    linear = torch.nn.functional.linear(mods1_input_activation_post_process_0, mods1_w_activation_post_process_0, bias = mods1_b);  mods1_input_activation_post_process_0 = mods1_w_activation_post_process_0 = mods1_b = None
    mods1_output_activation_post_process_0 = self.mods1_output_activation_post_process_0(linear);  linear = None
    return mods1_output_activation_post_process_0
```

Test Plan: python test/test_quantization.py test_observer_fqn

Imported from OSS

Reviewed By: jerryzh168

Differential Revision: D31088652

fbshipit-source-id: 2f1526f578a13000b34cfd30d11f16f402fd3447
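The naming change above can be sketched as a pair of small string-composition helpers. This is an illustrative reconstruction, not PyTorch's actual implementation; the helper names `observer_name_old` and `observer_name_new` are hypothetical, chosen only to contrast the two schemes.

```python
# Hypothetical helpers contrasting the two observer-naming schemes.
# Before: the observer module's name is derived from the intermediate
# tensor it observes; After: from the observed op/module's FQN plus an
# "input"/"output" tag.

def observer_name_old(tensor_name: str, index: int = 0) -> str:
    # e.g. "linear" -> "linear_activation_post_process_0"
    return f"{tensor_name}_activation_post_process_{index}"

def observer_name_new(module_fqn: str, position: str, index: int = 0) -> str:
    # e.g. ("mods1", "output") -> "mods1_output_activation_post_process_0"
    assert position in ("input", "output")
    return f"{module_fqn}_{position}_activation_post_process_{index}"

print(observer_name_old("linear"))           # linear_activation_post_process_0
print(observer_name_new("mods1", "input"))   # mods1_input_activation_post_process_0
print(observer_name_new("mods1", "output"))  # mods1_output_activation_post_process_0
```

Because the new name keys on the observed module's FQN, it stays stable across graph variants that rename intermediate tensors, which is what allows observer stats to be mapped between the train checkpoint and the inference model.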