[Specialized Kernel] Translate Kernel Assignment Logic from function.yaml to native_functions.yaml (#102576)
Update `gen_executorch.translate_native_yaml()` so that it also translates kernel assignments when converting `functions.yaml` to `native_functions.yaml`.
---
`functions.yaml` format:
```
- func: add.out
  type_alias:
    T0: [<Type>, <Type>]
    T1: [<Type>]
  dim_order_alias:
    D0: [0, 1, 2, 3]
    D1: [0, 3, 2, 1]
  kernels:
    - arg_meta: null
      kernel_name: default_impl
    - arg_meta:
        self: [T0, D0]
        other: [T0, D0]
        out: [T0, D0]
      kernel_name: test_impl
```
`native_functions.yaml` format:
```
- func: add.out(Tensor self, Tensor other, *, Scalar alpha=1, Tensor(a!) out) -> Tensor(a!)
  kernel:
    default: default_impl
    v<Version>/<TYPE Enum>;<DIM Order>|<TYPE Enum>;<DIM Order>|<TYPE Enum>;<DIM Order>: test_impl
```
Example: **'v1/6;0,1,2,3|3;0,1,2,3|6;0,1,2,3' : 'test_impl'**
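For reference, below is a minimal sketch of how such a versioned key could be assembled from a `kernels` entry. The helper name, the dtype-to-enum mapping, and the assumption that each type alias binds to one concrete dtype per generated key are illustrative only, not the actual `gen_executorch.py` implementation:

```python
# Illustrative sketch: flatten one functions.yaml entry into {kernel key: kernel name}.
# Names and expansion semantics here are assumptions for explanation purposes.
import itertools
from typing import Any, Dict, List

KERNEL_KEY_VERSION = "v1"  # assumed version prefix, matching the "v1/..." example key

# Illustrative mapping from dtype names to ScalarType enum values.
DTYPE_ENUM = {"Int": 3, "Float": 6, "Double": 7}


def make_kernel_keys(entry: Dict[str, Any]) -> Dict[str, str]:
    """Build a {key: kernel_name} mapping for one functions.yaml entry."""
    type_alias: Dict[str, List[str]] = entry.get("type_alias", {})
    dim_alias: Dict[str, List[int]] = entry.get("dim_order_alias", {})
    result: Dict[str, str] = {}

    for kernel in entry.get("kernels", []):
        arg_meta = kernel["arg_meta"]
        name = kernel["kernel_name"]
        if arg_meta is None:
            # No per-argument constraints: this kernel is the default implementation.
            result["default"] = name
            continue

        # Expand each type alias to its concrete dtypes; one key is emitted per
        # combination (assumed semantics), with every argument contributing a
        # "<type enum>;<dim order>" segment joined by "|".
        alias_names = list(type_alias)
        for combo in itertools.product(*(type_alias[a] for a in alias_names)):
            binding = dict(zip(alias_names, combo))
            segments = []
            for t_alias, d_alias in arg_meta.values():
                dtype = DTYPE_ENUM[binding[t_alias]]
                dims = ",".join(str(d) for d in dim_alias[d_alias])
                segments.append(f"{dtype};{dims}")
            result[f"{KERNEL_KEY_VERSION}/" + "|".join(segments)] = name

    return result
```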
## Note:
- If a `kernels` field is not present in `functions.yaml` (as is currently the case), the output is unaffected.
---
Design Doc: https://docs.google.com/document/d/1gq4Wz2R6verKJ2EFseLyPdAF0wqomnCrVDDJpRkYsRw/edit?kh_source=GDOCS#
Differential Revision: [D45971107](https://our.internmc.facebook.com/intern/diff/D45971107/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102576
Approved by: https://github.com/larryliu0820