Hi, I have a model whose source definition uses some aliased operators, for example “torch.concat” instead of “torch.cat”, or “torch.clip” instead of “torch.clamp”. When exporting to ONNX, PyTorch “replaces” all the aliases in some way and the mapping between ATen and ONNX works.
In contrast, when exporting via JIT trace/script it doesn’t: “aten::concat”, for example, stays that way in the graph. If you then try to convert a ScriptModule (the output of the JIT trace) to ONNX, and the graph contains some of these aliases, the conversion fails, claiming for example that operators like “aten::concat” are not supported.
When calling “torch.onnx.export” with an “nn.Module” instance, the “de-aliasing” transformation is done automatically and the export succeeds; if you pass a ScriptModule, it isn’t.
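A minimal sketch of the behavior I am describing (the ToyModel below is hypothetical, just to illustrate; the exact error message may differ by version):

```python
import torch

class ToyModel(torch.nn.Module):
    def forward(self, x, y):
        # The source uses the aliases, not the canonical names.
        return torch.clip(torch.concat([x, y], dim=0), min=0.0)

model = ToyModel()
args = (torch.randn(2, 3), torch.randn(2, 3))

# Exporting the nn.Module directly works: the exporter maps the
# aliases to the canonical ATen ops.
torch.onnx.export(model, args, "direct.onnx", opset_version=13)

# Exporting the traced ScriptModule fails with something like
# "Exporting the operator concat to ONNX opset version 13 is not supported".
traced = torch.jit.trace(model, args)
torch.onnx.export(traced, args, "traced.onnx", opset_version=13)
```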
Since I want to keep the output of torch.jit.trace(...), I would like to know if there is a way to “de-alias”/transform the model before calling the ONNX export.
I am fine with applying this replacement either at the graph level (after the trace) or at the nn.Module level, since I have access to the instance. I do NOT have access to the network definition (i.e., the sources).
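At the nn.Module level, the only idea I have is temporarily re-pointing the alias names in the torch namespace before tracing, along the lines of the sketch below (ALIASES and trace_dealiased are hypothetical names of mine; this assumes the model looks up the aliases through the torch module at call time, which I can’t verify without the sources):

```python
import torch

ALIASES = {"concat": "cat", "clip": "clamp"}  # extend as needed

def trace_dealiased(model, args):
    # Temporarily re-point the alias names at the canonical functions,
    # so the trace records aten::cat / aten::clamp instead of the aliases.
    originals = {name: getattr(torch, name) for name in ALIASES}
    try:
        for alias, canonical in ALIASES.items():
            setattr(torch, alias, getattr(torch, canonical))
        return torch.jit.trace(model, args)
    finally:
        # Always restore the original aliases.
        for name, fn in originals.items():
            setattr(torch, name, fn)
```

This obviously won’t catch Tensor-method aliases such as x.clip(...) or sources that did “from torch import concat”, so I am not confident it is robust.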
ChatGPT has proposed two approaches: either operating directly on the TorchScript graph, or rewriting the model with torch.fx. Both approaches fail at some point (I guess because the network is more complex than a toy example).
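For reference, the torch.fx attempt was roughly along these lines (my reconstruction, not the exact code):

```python
import torch
import torch.fx

# Alias tables for free functions and Tensor methods (extend as needed).
FN_ALIASES = {torch.concat: torch.cat, torch.clip: torch.clamp}
METHOD_ALIASES = {"clip": "clamp"}

def dealias_fx(module: torch.nn.Module) -> torch.fx.GraphModule:
    gm = torch.fx.symbolic_trace(module)
    for node in gm.graph.nodes:
        # Rewrite call targets from the alias to the canonical op.
        if node.op == "call_function" and node.target in FN_ALIASES:
            node.target = FN_ALIASES[node.target]
        elif node.op == "call_method" and node.target in METHOD_ALIASES:
            node.target = METHOD_ALIASES[node.target]
    gm.graph.lint()
    gm.recompile()
    return gm
```

This works on a toy module, but torch.fx.symbolic_trace fails on the real network.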
Can you recommend a valid approach?