Chris Chien

(she/her/hers)
Research Assistant @ Department of Computer Science
National Yang Ming Chiao Tung University (NYCU), Taiwan
Last updated: December 30, 2025

Note: I am applying to PhD programs for Fall 2026. [SOP: here]


I am a Research Assistant in Computer Science at National Yang Ming Chiao Tung University (NYCU), supervised by Prof. Yu-Lun Liu at the Computational Photography Lab.


My current research focuses on 3D/4D scene reconstruction and generation. I recently authored Splannequin (WACV 2026), which reconstructs static scenes from casual monocular videos through self-anchoring. More broadly, I am interested in developing robust, practical applications grounded in fundamental algorithmic analysis. My goal is to build intelligent systems that perceive and reconstruct the visual world as effectively as humans do.


Previously, I completed my M.S. in ECE at UCLA and my B.S. in Electrophysics at NCTU (now NYCU). Prior to my current focus, I worked on privacy-preserving AI and IoT security, leading to publications in MobiCom and IEEE IoT-J.

Research Interests

Computer Vision Applications
3D & 4D Reconstruction
Generative AI
Physically-Informed Optimization
Dataset Curation
Privacy-Preserving & IoT AI

Publications

Splannequin: Freezing Monocular Mannequin-Challenge Footage with Dual-Detection Splatting

Hao-Jen Chien, Yi-Chuan Huang, Chung-Ho Wu, Wei-Lun Chao, Yu-Lun Liu
WACV 2026

Splannequin freezes dynamic Gaussian splats into crisp 3D scenes from monocular videos by anchoring artifacts to more reliable temporal states.

GaMO: Geometry-aware Multi-view Diffusion Outpainting for Sparse-View 3D Reconstruction

Yi-Chuan Huang, Hao-Jen Chien, Chin-Yang Lin, Ying-Huan Chen, Yu-Lun Liu
Under Submission

GaMO reformulates sparse-view 3D reconstruction as multi-view outpainting, expanding the field of view with geometry-aware diffusion to achieve consistent, high-quality reconstructions efficiently from very few input views.

Voxify3D: Pixel Art Meets Volumetric Rendering

Yi-Chuan Huang, Jiewen Chan, Hao-Jen Chien, Yu-Lun Liu
ArXiv 2025

Voxify3D is a differentiable two-stage method that converts 3D meshes into stylized voxel art with discrete palette control. It preserves semantic structure using multi-view pixel-art supervision and CLIP-guided optimization.