Adam Paszke (University of Warsaw), Sam Gross (Facebook AI Research), Francisco Massa (Facebook AI Research), Adam Lerer (Facebook AI Research), James Bradbury (Google), Gregory Chanan (Facebook AI Research), Trevor Killeen (Self Employed), Zeming Lin (Facebook AI Research), Natalia Gimelshein (NVIDIA), Luca Antiga (Orobix), Alban Desmaison (Oxford University), Andreas Köpf, Edward Yang (Facebook AI Research), Zach DeVito (Facebook AI Research), Martin Raison (Nabla), Alykhan Tejani, Sasank Chilamkurthy (Qure.ai), Benoit Steiner (Facebook AI Research), Lu Fang (Facebook), Junjie Bai (Facebook), Soumith Chintala (Facebook AI Research) (2019)
This paper presents PyTorch, a high-performance deep learning library that combines an imperative, define-by-run programming style with efficient computation. The authors argue that usability and speed need not be at odds: a Pythonic interface makes models easier to write, debug, and extend, while careful systems engineering keeps execution fast. The stated design principles put researchers and usability first and favor simple, pragmatic implementations over incidental complexity. The paper describes the optimizations that offset Python's overheads, in particular asynchronous GPU execution, careful memory management, and an efficient automatic differentiation implementation. The evaluation shows that PyTorch is competitive with other frameworks on common benchmarks. By adopting a user-centric design and leveraging the strengths of Python's ecosystem, PyTorch has seen broad adoption in the deep learning community and aims to further extend its capabilities.
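As a brief illustration of the imperative style described above (ordinary PyTorch usage, not code taken from the paper), the sketch below builds a tiny linear model eagerly and differentiates a loss with PyTorch's reverse-mode autograd. Because each operation executes immediately, intermediate tensors can be inspected with standard Python tools such as print or pdb.

```python
import torch

# Eager (define-by-run) execution: each line runs immediately,
# so intermediate values can be inspected with ordinary Python tools.
x = torch.randn(32, 10)                      # a batch of 32 examples, 10 features
w = torch.randn(10, 1, requires_grad=True)   # weight tracked by autograd
b = torch.zeros(1, requires_grad=True)       # bias tracked by autograd

y_pred = x @ w + b                           # forward pass as plain Python expressions
target = torch.randn(32, 1)
loss = ((y_pred - target) ** 2).mean()       # mean-squared error

loss.backward()                              # reverse-mode automatic differentiation
print(w.grad.shape, b.grad.shape)            # gradients are ordinary tensors
```

The same eager code runs on a GPU by moving tensors with .to("cuda"); kernels are then enqueued asynchronously on CUDA streams while the Python interpreter continues, which is one of the ways PyTorch hides interpreter overhead.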
This paper employs the following methods: an imperative, define-by-run programming model; asynchronous GPU kernel execution; optimized memory management; reverse-mode automatic differentiation; and throughput benchmarks against other deep learning frameworks (a small timing sketch follows below).
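To make the asynchronous GPU execution point concrete, here is a minimal timing sketch (an illustration only, not the paper's benchmark harness): kernel launches return control to Python immediately, so torch.cuda.synchronize() must be called before reading the clock to measure the actual device work.

```python
import time
import torch

# Measuring GPU work under PyTorch's asynchronous execution model.
if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    torch.cuda.synchronize()           # make sure setup work has finished
    start = time.time()
    c = a @ b                          # queued on the current CUDA stream, returns immediately
    torch.cuda.synchronize()           # wait for the kernel to complete
    print(f"matmul took {time.time() - start:.4f}s")
```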
The following datasets were used in this research:
The authors identified the following limitations: