On the benchmark dataset Burgers.npz from DeepXDE, our new architecture matches the performance of a traditional network with 33,665 trainable parameters while using only 737 trainable parameters (a 97.8% reduction). This significantly improves computational efficiency without sacrificing accuracy, making it well suited to resource-constrained scenarios.
Key Highlights:
- Ultra-Lightweight Design: 737 parameters vs. 33,665 in the baseline model
- Dynamic Topology: adaptive focus-driven computation replaces static linear layers
- Hardware-Friendly: 5-8× higher computational density for edge deployment
- Project Repository: FFnormal on Gitee
- Documentation: README (English)

Feedback and technical discussions are welcome! 🚀
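For readers who want to sanity-check the quoted numbers: the post does not give the baseline's exact layer layout, so the `mlp_param_count` helper below is only a generic sketch of how fully connected layers accumulate weights and biases; the reduction figure, however, follows directly from the two counts stated above.

```python
def mlp_param_count(layer_sizes):
    """Total trainable parameters (weights + biases) of a plain MLP.

    Each dense layer mapping fan_in -> fan_out contributes
    fan_in * fan_out weights plus fan_out biases.
    """
    return sum((fan_in + 1) * fan_out
               for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]))


# Example with a hypothetical small layout (not the actual baseline):
# 2 inputs (x, t) -> two hidden layers of 20 -> 1 output.
print(mlp_param_count([2, 20, 20, 1]))  # → 501

# Reduction implied by the counts reported in the post.
baseline_params = 33665  # reported dense baseline
compact_params = 737     # reported FFnormal model
reduction = 100 * (1 - compact_params / baseline_params)
print(f"parameter reduction: {reduction:.1f}%")  # → 97.8%
```

Counting parameters this way makes the headline claim easy to verify against any concrete baseline architecture you train yourself.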