About me: I am broadly interested in machine learning, artificial intelligence, and data science. My current research focuses on inference and learning algorithms for large-scale probabilistic graphical models and their combination with deep neural networks. My advisor is Professor Nicholas Ruozzi.

Timeline
1/04/2022 - ?: Full-time software engineer @Google, Mountain View
11/04/2021: Passed my Ph.D. defense and received the Ph.D. degree in Computer Science.
2021 Summer: Internship @YouTube, Google LLC.
2018: Received the M.S. degree in Computer Science from the UT Dallas Computer Science Department.
8/12/2016: Ph.D. candidate at the UT Dallas Computer Science Department, working on AI and machine learning, probabilistic graphical models, and deep learning.

Publications and Projects

Segmentation Based Image and Video Colorization
A multitask deep neural network consisting of a shared backbone network for image feature extraction, followed by a state-of-the-art segmentation head and a colorization head. By minimizing the losses of the two tasks jointly, it not only generates images with more vivid colors, but also makes predictions more consistent across consecutive frames when applied directly to videos. A minimal sketch of this two-head layout is given below.
Hao Xiong, Nicholas Ruozzi
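
As a rough illustration of the layout described in this entry, here is a minimal PyTorch-style sketch: a shared backbone feeds both a segmentation head and a colorization head, and the two task losses are summed. The tiny backbone, layer sizes, and loss weighting are invented for the example and are not the paper's actual architecture.

```python
# Hypothetical sketch only: shared backbone + two task heads, joint loss.
import torch
import torch.nn as nn

class MultiTaskColorizer(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # Shared feature extractor (stand-in for a real backbone such as ResNet).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        )
        # Segmentation head: per-pixel class logits.
        self.seg_head = nn.Conv2d(64, num_classes, 1)
        # Colorization head: predicts two chrominance channels (e.g. ab in Lab space).
        self.color_head = nn.Conv2d(64, 2, 1)

    def forward(self, gray):
        feats = self.backbone(gray)
        return self.seg_head(feats), self.color_head(feats)

model = MultiTaskColorizer()
gray = torch.randn(4, 1, 64, 64)                 # grayscale (luminance) input
seg_target = torch.randint(0, 21, (4, 64, 64))   # toy segmentation labels
ab_target = torch.randn(4, 2, 64, 64)            # toy chrominance targets

seg_logits, ab_pred = model(gray)
# Joint objective: segmentation cross-entropy plus colorization regression loss.
loss = nn.functional.cross_entropy(seg_logits, seg_target) \
     + nn.functional.l1_loss(ab_pred, ab_target)
loss.backward()
```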
End-to-end Stereo Matching with CRF Regularized Convolutional Neural Net
We propose a generic end-to-end framework for structured prediction tasks that combines neural networks and conditional random fields. We explain how our approach improves in key ways over existing combined approaches and demonstrate that the CRF acts as an effective regularizer. We demonstrate the superior performance of our combined approach on the stereo depth estimation task against pure deep neural network solutions: we plug existing deep neural network architectures into our framework and perform a head-to-head comparison on a variety of real and synthetic data sets. A toy illustration of the CRF-as-regularizer idea follows this entry.
Hao Xiong, Nicholas Ruozzi
Under Review
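
The following toy sketch captures only the regularization intuition behind this entry: a CNN predicts a disparity map, and a CRF-style pairwise smoothness term is added to the unary (data) loss. The paper's actual framework performs CRF inference end-to-end; the network, the 0.1 weight, and all tensor sizes here are hypothetical.

```python
# Illustrative sketch: CNN disparity prediction + CRF-like pairwise smoothness penalty.
import torch
import torch.nn as nn

disparity_net = nn.Sequential(            # stand-in for a stereo-matching CNN
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

left = torch.randn(2, 3, 64, 64)          # toy left/right image pair
right = torch.randn(2, 3, 64, 64)
gt_disp = torch.rand(2, 1, 64, 64)        # toy ground-truth disparity

pred = disparity_net(torch.cat([left, right], dim=1))

# Unary (data) term: match the ground-truth disparity.
data_loss = nn.functional.smooth_l1_loss(pred, gt_disp)

# Pairwise (CRF-like) term: penalize disparity jumps between neighboring pixels.
smooth_loss = (pred[:, :, 1:, :] - pred[:, :, :-1, :]).abs().mean() \
            + (pred[:, :, :, 1:] - pred[:, :, :, :-1]).abs().mean()

loss = data_loss + 0.1 * smooth_loss      # pairwise term acts as a regularizer
loss.backward()
```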
General Purpose MRF Learning with Neural Network Potentials
In this work, we propose a generic MLE estimation procedure for MRFs whose potential functions are modeled by neural networks. To make learning effective in practice, we show how to leverage a highly parallelizable variational inference method that fits easily into popular machine learning frameworks such as TensorFlow. We demonstrate experimentally that our approach is capable of effectively modeling the data distributions of a variety of real data sets and that it can compete effectively with other common methods for multilabel classification and generative modeling tasks. A small sketch of the neural-network-potential idea follows this entry.
Hao Xiong, Nicholas Ruozzi
IJCAI-PRICAI 2020
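
A minimal sketch of the modeling idea in this entry: each pairwise potential of the MRF is a small neural network, and the unnormalized log-density of a configuration sums the potential outputs over edges. The paper pairs this with a parallelizable variational estimate of the log-partition function (targeting TensorFlow); that estimate appears only as a placeholder below, and every name is made up.

```python
# Toy sketch: neural-network pairwise potentials for an MRF.
import torch
import torch.nn as nn

class NeuralPairwiseMRF(nn.Module):
    def __init__(self, edges, var_dim=1, hidden=32):
        super().__init__()
        self.edges = edges
        # One shared network scoring a pair of variable values.
        self.potential = nn.Sequential(
            nn.Linear(2 * var_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def unnormalized_log_prob(self, x):
        # x: (batch, num_vars, var_dim); sum potential outputs over all edges.
        scores = [self.potential(torch.cat([x[:, i], x[:, j]], dim=-1))
                  for i, j in self.edges]
        return torch.stack(scores, dim=0).sum(dim=0).squeeze(-1)

edges = [(0, 1), (1, 2), (2, 0)]          # a small 3-variable cycle
mrf = NeuralPairwiseMRF(edges)
data = torch.randn(8, 3, 1)               # toy observed configurations

# MLE needs log Z; the paper estimates it with variational inference.
log_Z_estimate = torch.tensor(0.0)        # placeholder for that estimate
nll = -(mrf.unnormalized_log_prob(data).mean() - log_Z_estimate)
nll.backward()
```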
One-Shot Marginal MAP Inference in Markov Random Fields
We propose a novel variational inference strategy that is flexible enough to handle both continuous and discrete random variables, efficient enough to handle repeated statistical inferences, and scalable enough, via modern GPUs, to be practical on MRFs with hundreds of thousands of random variables. We prove that our approach overcomes weaknesses of current approaches and demonstrate its efficacy on both synthetic models and real-world applications. The marginal MAP objective itself is recalled below.
Hao Xiong, Yuanzhen Guo *, Yibo Yang *, Nicholas Ruozzi
UAI 2019
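
For readers who have not seen the problem before, the marginal MAP objective maximizes over one subset of the variables while marginalizing out the rest (the sum is replaced by an integral for continuous variables):

```latex
\mathbf{x}^{*}_{\mathrm{max}} \;=\; \operatorname*{arg\,max}_{\mathbf{x}_{\mathrm{max}}} \; \sum_{\mathbf{x}_{\mathrm{sum}}} p(\mathbf{x}_{\mathrm{max}}, \mathbf{x}_{\mathrm{sum}})
```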
Marginal Inference in Continuous Markov Random Fields using Mixtures
We present an alternative family of approximations that, instead of approximating the messages, approximates the beliefs in the continuous Bethe free energy using mixture distributions. We show that these approximations can be combined with numerical quadrature to yield algorithms with both theoretical guarantees on approximation quality and significantly better practical performance in a variety of applications that are challenging for current state-of-the-art methods. A toy quadrature example follows this entry.
Yuanzhen Guo, Hao Xiong, Nicholas Ruozzi
AAAI 2019
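
The two ingredients named in this entry, mixture-distribution beliefs and numerical quadrature, can be illustrated with a toy one-dimensional computation: estimating the expectation of a log-potential under a Gaussian-mixture belief via Gauss-Hermite quadrature. The mixture parameters, the potential, and the quadrature order are invented for the example and are not taken from the paper.

```python
# Toy example: E_b[log phi(x)] for a Gaussian-mixture belief b(x), via quadrature.
import numpy as np

weights = np.array([0.6, 0.4])            # mixture weights of the belief b(x)
means = np.array([-1.0, 2.0])
stds = np.array([0.5, 1.0])

def log_phi(x):                           # a smooth, made-up log-potential
    return -0.5 * (x - 1.0) ** 2 + 0.1 * np.sin(x)

# Gauss-Hermite nodes/weights for integrals against exp(-t^2).
nodes, gh_weights = np.polynomial.hermite.hermgauss(32)

expectation = 0.0
for w, m, s in zip(weights, means, stds):
    # Change of variables x = m + sqrt(2) * s * t for each Gaussian component.
    x = m + np.sqrt(2.0) * s * nodes
    expectation += w * np.sum(gh_weights * log_phi(x)) / np.sqrt(np.pi)

print("E_b[log phi(x)] ≈", expectation)
```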