[Feature] Add free threading support #1481
Conversation
Hi @kevmo314! Thank you for your pull request and welcome to our community.

Action Required: In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process: In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged accordingly. If you have received this in error or have any questions, please contact us at cla@meta.com. Thanks!
I've signed the CLA.
vmoens left a comment
LGTM in principle, but let's wait until pytorch/pytorch#169033 is resolved.
Bump on this change now that the dependent pytorch/pytorch#169033 is resolved.
@vmoens could you take a look?
Description
This change removes a race condition in value caching that causes segfaults in Python under free threading.
Additionally, it adds a declaration that the library can run without the GIL.
Motivation and Context
tensordict segfaults with this script on py3.14t with PYTHON_GIL=0. I can put this in a test if you'd like; it's more of a regression test than a unit test.
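The reproduction script itself is not shown here, but a regression test of this kind typically looks like the following hedged sketch: many threads start simultaneously and hammer the same cached state, which crashes on a free-threaded build before the fix. All names are illustrative.

```python
import threading


def hammer(obj, n_threads=8, n_iters=1000):
    """Hypothetical stress harness: concurrently exercise obj.get()
    from many threads to surface caching races under free threading."""
    barrier = threading.Barrier(n_threads)

    def worker():
        barrier.wait()  # start all threads together to maximize contention
        for _ in range(n_iters):
            obj.get()

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

On a GIL build this usually passes regardless; the value of the test is on py3.14t with PYTHON_GIL=0, where the pre-fix code segfaults.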
Types of changes
What types of changes does your code introduce? Remove all that do not apply:
Checklist
Go over all the following points, and put an x in all the boxes that apply. If you are unsure about any of these, don't hesitate to ask. We are here to help!