extract_f0_print.py torch.nn.utils.weight_norm deprecation warnings #29


Open
alexlnkp opened this issue Jun 11, 2024 · 6 comments

@alexlnkp
Contributor

Warnings like

torch/nn/utils/weight_norm.py:28: UserWarning: torch.nn.utils.weight_norm is deprecated in favor of torch.nn.utils.parametrizations.weight_norm.

are being spewed out during feature extraction.
The fix should be relatively easy: just replace torch.nn.utils.weight_norm with torch.nn.utils.parametrizations.weight_norm.
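A minimal sketch of the swap (the layer here is just for illustration, not one of this repo's actual modules):

```python
import torch
import torch.nn as nn

# Old API, deprecated since PyTorch 2.1 (emits the UserWarning above):
old_style = torch.nn.utils.weight_norm(nn.Conv1d(256, 256, kernel_size=3))

# New parametrization-based API, available from PyTorch 2.1:
new_style = torch.nn.utils.parametrizations.weight_norm(nn.Conv1d(256, 256, kernel_size=3))
```

Note that any call sites using torch.nn.utils.remove_weight_norm would also need updating; the parametrized counterpart is torch.nn.utils.parametrize.remove_parametrizations(module, "weight").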

@fumiama fumiama added the enhancement New feature or request label Jun 11, 2024
@fumiama
Owner

fumiama commented Jun 11, 2024

I will do this when I refactor training.

@fumiama fumiama self-assigned this Jun 11, 2024
@TheTrustedComputer

Removing the deprecation warning is understandable to stay current, but I have to rely on an older build of PyTorch (2.0.0) with ROCm acceleration (5.2) to enable local training on my AMD RX 5000 series GPU. This is the latest official build without resorting to compilation from source.

It's a breaking change I'm certain nobody wants in their environment, but I have a Docker container that holds my custom build of the newest PyTorch and ROCm 5.4 to keep up with development. This removal requires PyTorch 2.1+, which doesn't have wheels with the ROCm version I need.

@fumiama
Owner

fumiama commented Jun 11, 2024

Thanks for pointing this out. I will add a try-except fallback.
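A minimal sketch of such a fallback (an assumption about the eventual fix, not the repo's actual code):

```python
try:
    # PyTorch >= 2.1: parametrization-based API, no deprecation warning
    from torch.nn.utils.parametrizations import weight_norm
except ImportError:
    # Older PyTorch (e.g. the 2.0 ROCm builds mentioned above): deprecated API
    from torch.nn.utils import weight_norm
```

On torch 2.0 the first import raises ImportError, so the fallback keeps older environments working unchanged.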

@alexlnkp
Contributor Author

alexlnkp commented Jun 11, 2024

> Removing the deprecation warning is understandable to stay current, but I have to rely on an older build of PyTorch (2.0.0) with ROCm acceleration (5.2) to enable local training on my AMD RX 5000 series GPU. This is the latest official build without resorting to compilation from source.
>
> It's a breaking change I'm certain nobody wants in their environment, but I have a Docker container that holds my custom build of the newest PyTorch and ROCm 5.4 to keep up with development. This removal requires PyTorch 2.1+, which doesn't have wheels with the ROCm version I need.

Hmm... I guess we should stick with it for now; the plans to remove torch.nn.utils.weight_norm from torch are not that immediate.
For now, we should use the workaround to keep up with the latest torch, but leave the original torch.nn.utils.weight_norm in place until torch has been updated for ROCm.

Please keep us informed on the matter.

@TheTrustedComputer

TheTrustedComputer commented Jun 11, 2024

> Hmm... I guess we should stick with it for now; the plans to remove torch.nn.utils.weight_norm from torch are not that immediate. For now, we should use the workaround to keep up with the latest torch, but leave the original torch.nn.utils.weight_norm in place until torch has been updated for ROCm.
>
> Please keep us informed on the matter.

I'm not sure what you mean by "until torch has been updated for ROCm". My custom build with the newest PyTorch and ROCm 5.4 runs perfectly fine with my card, even without the HSA_OVERRIDE_GFX_VERSION override. Unfortunately, ROCm 5.2 no longer compiles against the most recent PyTorch and has known bugs I've encountered that were fixed upstream in later releases, so I need to bump it to version 5.4, which I've heard works flawlessly with the 5500 XT; I have two of each for training purposes.

I'd say use torch.nn.utils.parametrizations.weight_norm for PyTorch >= 2.1 and torch.nn.utils.weight_norm for PyTorch <= 2.0, something like the sketch below. That's my two cents for supporting users across the different PyTorch versions tailored to their needs.
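A minimal sketch of that version gate (the packaging dependency and the stripping of local build tags like "+rocm5.2" are assumptions, not this repo's code):

```python
import torch
from packaging import version  # assumption: packaging is installed (it ships with most environments)

# Strip local build suffixes such as "+rocm5.2" before comparing.
TORCH_VERSION = version.parse(torch.__version__.split("+")[0])

if TORCH_VERSION >= version.parse("2.1"):
    from torch.nn.utils.parametrizations import weight_norm
else:
    from torch.nn.utils import weight_norm
```

Compared to the try-except fallback, this makes the 2.1 cutoff explicit, at the cost of an extra dependency.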

@fumiama
Owner

fumiama commented Jun 11, 2024

In this case, the try-except fallback can be a good trade-off.

@github-actions github-actions bot added the stale The topic has been ignored for a long time label Jul 12, 2024
@fumiama fumiama added the help wanted Extra attention is needed label Jul 12, 2024
@fumiama fumiama assigned alexlnkp and unassigned fumiama Jul 12, 2024
@fumiama fumiama removed the stale The topic has been ignored for a long time label Jul 12, 2024