Implementation of torch-to-linalg lowering of AtenOuterOp #4099


Open
wants to merge 6 commits into main from AtenOuterOp-Lowering

Conversation

@amemov commented Mar 19, 2025

An attempt to resolve #4093

Initial implementation:

- Defined the op in Linear.cpp

TODO:
- Testing, and perhaps adding some test(s) inside torch-mlir?
@amemov (Author) commented Mar 19, 2025

Hi, this is my first time contributing to the project - if you have any feedback or suggestions, I would really appreciate them.

@zjgarvey (Collaborator) commented

Thanks for picking this up.

There isn't any reason to include quantization logic for this op since it doesn't have any qdq fusion implemented in FuseQuantizedOps.cpp.

It would also be a bit better to implement this directly as a linalg.generic op, rather than unsqueezes and a matmul with a reduction dim size of 1. If you were to do the unsqueeze/matmul approach, it would be more appropriate to put this logic in DecomposeComplexOps.cpp.
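For reference, both formulations compute the same thing: out[i, j] = lhs[i] * rhs[j], over a 2-D iteration space with two parallel iterators and no reduction. A quick PyTorch sketch (illustrative only, not code from this PR) comparing the two views:

```python
import torch

lhs = torch.randn(3)
rhs = torch.randn(4)

# Unsqueeze/broadcast formulation: promote both vectors to 2-D and multiply
# elementwise. This mirrors the unsqueeze-plus-matmul decomposition above.
via_broadcast = lhs.unsqueeze(1) * rhs.unsqueeze(0)  # shape (3, 4)

# Direct formulation: out[i, j] = lhs[i] * rhs[j], which is exactly the body
# a linalg.generic with two parallel iterators would compute per element.
assert torch.allclose(torch.outer(lhs, rhs), via_broadcast)
```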

Also, please do add e2e tests somewhere in ./projects/pt1/python/torch_mlir_e2e_test/test_suite/.
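A minimal sketch of what such a test could look like, following the conventions of the existing suite (the module and test names here are illustrative, and the imports assume the usual torch_mlir_e2e_test helpers):

```python
import torch

from torch_mlir_e2e_test.framework import TestUtils
from torch_mlir_e2e_test.registry import register_test_case
from torch_mlir_e2e_test.annotations import annotate_args, export


class AtenOuterModule(torch.nn.Module):
    def __init__(self):
        super().__init__()

    @export
    @annotate_args([
        None,
        ([-1], torch.float32, True),  # dynamic 1-D lhs
        ([-1], torch.float32, True),  # dynamic 1-D rhs
    ])
    def forward(self, lhs, rhs):
        return torch.outer(lhs, rhs)


@register_test_case(module_factory=lambda: AtenOuterModule())
def AtenOuterModule_basic(module, tu: TestUtils):
    module.forward(tu.rand(3), tu.rand(4))
```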

@amemov amemov marked this pull request as ready for review March 22, 2025 23:31
@amemov amemov force-pushed the AtenOuterOp-Lowering branch from cda896e to 2348344 on March 24, 2025 13:48
- Rewrote ConvertAtenOuterOp without unsqueezing
- Replaced linalg::MatmulOp with linalg::GenericOp for building the result of the op
- Added error messages
- Added a test case in the e2e tests - placed in matmul.py
@zjgarvey (Collaborator) left a comment

After a change to the init tensor for the generic, I think this looks good!

Thanks for the changes.

@zjgarvey (Collaborator) commented Apr 2, 2025

Also, be sure to run either pre-commit run --all-files (you will need to install it with pip install pre-commit) or git clang-format to auto-format the files.

@amemov (Author) commented Apr 3, 2025

I made the change to the init tensor and ran pre-commit - everything looks good on my end.

@amemov amemov requested a review from zjgarvey April 3, 2025 14:43
@vivekkhandelwal1 (Collaborator) left a comment

Hi @amemov, can you please take a look at the CI failure?

@amemov (Author) commented Apr 8, 2025

> Hi @amemov, can you please take a look at the CI failure?

Hi @vivekkhandelwal1, I skimmed it briefly before - I didn't see any failures specifically related to torch.outer() lowering that I wrote and to my test case.

I will take a better look at it today, but so far I'm not really sure what exactly I need to modify / add here.

@vivekkhandelwal1 (Collaborator) commented

Hi @amemov, one or more tests are crashing for the fx_importer config. Most probably, it will be the one that you added. To find out which test is crashing, you need to run the tests serially. You may use the following command:

python -m projects.pt1.e2e_testing.main --config=fx_importer -s

The above command will run all the tests one by one; the last test run will be the one that's crashing. Then you can figure out the fix for it.

@amemov (Author) commented Apr 12, 2025

@vivekkhandelwal1
The problem arose from the test file that I wrote:

torch-mlir/externals/llvm-project/llvm/include/llvm/Support/Casting.h:566: decltype(auto) llvm::cast(const From&) [with To = mlir::RankedTensorType; From = mlir::Type]: Assertion `isa<To>(Val) && "cast<Ty>() argument of incompatible type!"' failed

I resolved it by changing the casting and the dimensions of the operands. On my machine, the AtenOuter test now passes.

@amemov amemov requested a review from vivekkhandelwal1 April 12, 2025 15:19
@vivekkhandelwal1 (Collaborator) commented

@amemov, a contributor has added the lowering for this same op through decomposition here: #4138.

Your PR is older, so in the case of a conflict you should get a chance to complete it, but their approach (via decomposition) is the better one. Can you and @ivanamitreski work out a solution together?

Successfully merging this pull request may close these issues: Missing torch-to-linalg lowering of AtenOuterOp