Articulated objects are common in the real world, yet modeling their structure and motion remains a challenging task for 3D reconstruction methods. In this work, we introduce Part²GS, a novel framework for modeling articulated digital twins of multi-part objects with high-fidelity geometry and physically consistent articulation. Part²GS leverages a part-aware 3D Gaussian representation that encodes articulated components with learnable attributes, enabling structured, disentangled transformations that preserve high-fidelity geometry. To ensure physically consistent motion, we propose a motion-aware canonical representation guided by physics-based constraints, including contact enforcement, velocity consistency, and vector-field alignment. Furthermore, we introduce a field of repel points to prevent part collisions and maintain stable articulation paths, significantly improving motion coherence over baselines. Extensive evaluations on both synthetic and real-world datasets show that Part²GS consistently outperforms state-of-the-art methods by up to 10× in Chamfer Distance for movable parts. Project page: https://plan-lab.github.io/part2gs

We introduce Part²GS, a novel framework that tackles three core challenges in articulated object modeling:
(1) Unstructured Part Articulation: Rather than relying solely on unsupervised clustering, dual-quaternion blending, or predefined part ground truth, Part²GS introduces a part parameter into the standard Gaussian parameters and guides part transformations with physics-aware forces and learned part embeddings.
(2) No Physical Constraints: Existing methods lack grounding, collision avoidance, and coherent rigid-body motion, resulting in implausible part behavior [25, 26]. Part²GS integrates a physically motivated construction loss that incorporates contact constraints, velocity consistency, and vector-field alignment to ensure stable, realistic articulation.
(3) Rigid State-Pair Modeling: Prior methods rely heavily on fixed, geometric interpolation between two states [24, 30, 47]. In contrast, Part²GS builds a canonical representation via motion-informed interpolation and is optimized with part-disentangled dynamics, allowing more flexible and physically grounded articulation learning without requiring explicit part supervision.

Through extensive experiments, we demonstrate that Part²GS achieves state-of-the-art performance in reconstructing articulated 3D objects, delivering high-fidelity geometry and physically consistent motion, even in challenging multi-part scenarios. Our contributions are summarized as follows:
(1) We introduce Part²GS, a part-aware 3D Gaussian representation for articulated object reconstruction that encodes object parts with learnable attributes, enabling disentangled part motions and producing high-fidelity geometry with physically consistent articulation, even in complex multi-part settings.
(2) We develop a motion-aware canonical representation that leverages physics-guided learning to model object articulation with physics-based constraints such as contact enforcement, velocity consistency, and vector-field alignment, while a field of repel points pushes parts apart for better articulation learning. Together, these elements yield part-disentangled geometry and physically plausible motion paths.
(3) We extensively evaluate Part²GS on both synthetic and real-world articulated objects, achieving state-of-the-art performance over strong baselines. Comprehensive ablations confirm the effectiveness of each component in delivering high-quality geometry and articulation.
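To make the part-aware representation concrete, the sketch below shows one plausible way to attach a learnable part attribute to the standard 3D Gaussian parameters and drive disentangled, per-part rigid motion. This is an illustrative reconstruction rather than the authors' implementation: the class name `PartAwareGaussians`, the soft part assignment, and the axis-angle parameterization of the per-part transforms are all assumptions.

```python
# Minimal sketch (not the authors' code): a part-aware 3D Gaussian container where each
# Gaussian carries a learnable part-assignment logit in addition to the standard attributes,
# and each part owns a rigid SE(3) transform. All names here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartAwareGaussians(nn.Module):
    def __init__(self, num_gaussians: int, num_parts: int):
        super().__init__()
        # Standard 3D Gaussian attributes.
        self.means = nn.Parameter(torch.randn(num_gaussians, 3) * 0.1)
        self.log_scales = nn.Parameter(torch.zeros(num_gaussians, 3))
        self.quats = nn.Parameter(torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(num_gaussians, 1))
        self.opacity_logits = nn.Parameter(torch.zeros(num_gaussians, 1))
        # Part-aware attribute: soft assignment of each Gaussian to one of K parts.
        self.part_logits = nn.Parameter(torch.zeros(num_gaussians, num_parts))
        # One learnable rigid transform per part: axis-angle rotation + translation.
        self.part_rotvec = nn.Parameter(torch.zeros(num_parts, 3))
        self.part_trans = nn.Parameter(torch.zeros(num_parts, 3))

    def part_weights(self) -> torch.Tensor:
        # Soft (differentiable) part assignment; hardens as training sharpens the logits.
        return F.softmax(self.part_logits, dim=-1)                        # (N, K)

    def articulate(self) -> torch.Tensor:
        """Apply per-part rigid transforms to the canonical means, blended by part weights."""
        R = axis_angle_to_matrix(self.part_rotvec)                        # (K, 3, 3)
        moved = torch.einsum('kij,nj->nki', R, self.means) + self.part_trans  # (N, K, 3)
        w = self.part_weights().unsqueeze(-1)                             # (N, K, 1)
        return (w * moved).sum(dim=1)                                     # (N, 3)


def axis_angle_to_matrix(rotvec: torch.Tensor) -> torch.Tensor:
    """Rodrigues' formula: axis-angle vectors (K, 3) -> rotation matrices (K, 3, 3)."""
    theta = rotvec.norm(dim=-1, keepdim=True).clamp(min=1e-8)             # (K, 1)
    axis = rotvec / theta
    S = torch.zeros(rotvec.shape[0], 3, 3, device=rotvec.device, dtype=rotvec.dtype)
    S[:, 0, 1], S[:, 0, 2] = -axis[:, 2], axis[:, 1]
    S[:, 1, 0], S[:, 1, 2] = axis[:, 2], -axis[:, 0]
    S[:, 2, 0], S[:, 2, 1] = -axis[:, 1], axis[:, 0]
    theta = theta.unsqueeze(-1)                                           # (K, 1, 1)
    eye = torch.eye(3, device=rotvec.device, dtype=rotvec.dtype).expand_as(S)
    return eye + torch.sin(theta) * S + (1 - torch.cos(theta)) * (S @ S)
```

In such a setup, the part logits and per-part transforms would be optimized jointly with the rendering objective; the soft assignment keeps part membership differentiable while still allowing it to harden into disentangled parts.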
This paper presents Part²GS, a novel framework for modeling articulated objects with part-aware 3D Gaussian representations, enabling high-fidelity geometry and physically consistent articulation. The authors address key challenges in reconstructing articulated digital twins, including unstructured part articulation, lack of physical constraints, and rigid state-pair modeling, by introducing a motion-aware canonical representation guided by physics-based constraints. The method demonstrates improved performance over existing state-of-the-art approaches through extensive evaluations on synthetic and real-world datasets, achieving significant reductions in Chamfer Distance for movable parts. The introduction of repel points further enhances motion coherence and physical plausibility in the articulation process. Key contributions include a part-aware representation that enables disentangled part motions and a robust framework for articulated object reconstruction, achieving high accuracy and geometric fidelity. The experiments validate the effectiveness of the proposed method through ablation studies and comparisons with established baselines.
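The physics-based constraints mentioned above (contact enforcement, velocity consistency, vector-field alignment) and the field of repel points can be read as regularization terms over Gaussian positions and velocities along the articulation path. The sketch below is a hedged interpretation of how such terms might look; the function names, tolerances, and exact formulations are assumptions and may differ from the paper's losses.

```python
# Minimal sketch (assumed formulations, not the paper's exact losses) of physics-style
# regularizers over per-Gaussian positions and velocities during articulation.
import torch
import torch.nn.functional as F


def contact_loss(moving_pts, static_pts, max_gap=0.01):
    """Penalize a moving part drifting away from the surface it should stay in contact with."""
    d = torch.cdist(moving_pts, static_pts)            # (N, M) pairwise distances
    nearest = d.min(dim=1).values                       # distance to closest static point
    return torch.relu(nearest - max_gap).mean()         # only gaps beyond the tolerance count


def velocity_consistency_loss(velocities, part_weights):
    """Points on the same part should move coherently: penalize deviation from the
    part-averaged velocity (soft assignment via part_weights of shape (N, K))."""
    w = part_weights / (part_weights.sum(dim=0, keepdim=True) + 1e-8)   # normalize per part
    part_mean_vel = w.t() @ velocities                                  # (K, 3)
    expected = part_weights @ part_mean_vel                             # (N, 3), blended back
    return (velocities - expected).pow(2).sum(dim=-1).mean()


def vector_field_alignment_loss(velocities, field_dirs):
    """Encourage per-point motion to align with a reference motion vector field (unit vectors)."""
    v = F.normalize(velocities, dim=-1)
    cos = (v * field_dirs).sum(dim=-1)
    return (1.0 - cos).mean()


def repel_loss(points_a, points_b, radius=0.02):
    """Field-of-repel-points idea: push two parts apart wherever they come closer than
    `radius`, discouraging interpenetration along the articulation path."""
    d = torch.cdist(points_a, points_b)
    return torch.relu(radius - d).pow(2).mean()
```

A full objective would combine such terms with the photometric reconstruction loss; the repel term is only active where points of two parts come within the chosen radius, so it leaves already-separated parts untouched.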
This paper employs the following methods (Part²GS is the proposed approach; the others are compared baselines):
- Part²GS
- DTA
- ArtGS
- Ditto
- PARIS
The following datasets were used in this research:
- PARIS
- ArtGS-Multi
- DTA-Multi
The following evaluation metrics were used (see the sketch after this list for reference definitions):
- Chamfer Distance
- Angular Error
- Positional Error
- Motion Error
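For reference, the sketch below gives standard definitions of three of these metrics as they are commonly used for articulated-object evaluation: Chamfer Distance between reconstructed and ground-truth part surfaces, angular error between predicted and ground-truth joint axes, and positional error of the estimated pivot. The paper's exact protocol, units, and its definition of Motion Error may differ; the function names here are illustrative.

```python
# Hedged sketch of common metric definitions (the paper's exact evaluation protocol may
# differ, e.g. in units, sampling density, or symmetrization).
import numpy as np


def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer Distance between two point sets of shape (N, 3) and (M, 3)."""
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)   # (N, M) squared distances
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())


def angular_error_deg(pred_axis: np.ndarray, gt_axis: np.ndarray) -> float:
    """Angle (degrees) between predicted and ground-truth joint axes (sign-agnostic)."""
    cos = abs(np.dot(pred_axis, gt_axis) / (np.linalg.norm(pred_axis) * np.linalg.norm(gt_axis)))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))


def positional_error(pred_pivot: np.ndarray, gt_pivot: np.ndarray, gt_axis: np.ndarray) -> float:
    """Distance from the predicted pivot to the ground-truth joint axis (line through gt_pivot)."""
    axis = gt_axis / np.linalg.norm(gt_axis)
    offset = pred_pivot - gt_pivot
    return float(np.linalg.norm(offset - np.dot(offset, axis) * axis))
```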
The key results are:
- Part²GS consistently outperforms state-of-the-art methods by up to 10× in Chamfer Distance for movable parts.
- Achieves state-of-the-art performance in reconstructing articulated 3D objects.
- Delivers high-fidelity geometry and physically consistent motion.
The authors identified the following limitations:
- Relies on paired observations across two articulation states, which may be unavailable in real-world scenarios.
- May fail to disentangle parts when distinct object parts undergo nearly identical transformations.
The reported compute setup is:
- Number of GPUs: 1
- GPU Type: RTX 4090
- Compute Requirements: None specified