Cone-beam CT (CBCT) is an imaging modality used in cancer radiotherapy to acquire images of
patient anatomy for positioning purposes. When imaging the thorax or upper abdominal regions,
the result is severely degraded by blurring artifacts caused by respiratory motion.
Four-dimensional (4D) CBCT has been proposed to provide phase-resolved CBCT images by grouping projection data
according to respiratory phase. As a result, 4D-CBCT reconstruction requires the respiratory motion
to be periodic, and the fourth dimension is actually the breathing phase rather than time.
However, if respiration is irregular, then phase grouping is inaccurate.
Therefore, it is desirable to reconstruct true 4D-CBCT images, each corresponding to an instantaneous projection,
so that image quality can be improved with fewer artifacts arising from the patient's irregular respiratory
motion. This research could open up new uses of CBCT, including real-time adjustment during scanning and treatment.
One initial research direction is based on low-rank factorizations of a matrix that consists of all the constituent images.
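As a minimal sketch of this low-rank idea (all data here is synthetic and the dimensions are illustrative, not from the proposal): if the constituent images are vectorized and stacked as columns of a matrix, a truncated SVD gives the best rank-r approximation of that matrix in the Frobenius norm.

```python
import numpy as np

# Hypothetical illustration: stack T vectorized images (N voxels each) as
# columns of an N x T matrix and approximate it with a rank-r factorization.
rng = np.random.default_rng(0)
N, T, r = 500, 20, 3

# Synthetic low-rank data: a few spatial modes modulated over time.
U_true = rng.standard_normal((N, r))
V_true = rng.standard_normal((r, T))
M = U_true @ V_true

# Truncated SVD yields the best rank-r approximation in the Frobenius norm.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
M_r = (U[:, :r] * s[:r]) @ Vt[:r, :]

print(np.allclose(M, M_r))  # True: the synthetic data is exactly rank r
```

Real projection data would of course only be approximately low-rank, and the factorization would be computed jointly with the reconstruction rather than on the images themselves.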
We propose to solve this problem using a learning approach. Since these images come from the same patient at successive times, they share the same anatomical structure up to deformations. A patient-specific lung motion model, which gives deformation vector fields for the patient on a pre-treatment day, is available in radiotherapy. While the deformations on the day of treatment may not be identical to those on the pre-treatment day, because the patient may breathe differently on that day, the breathing modes are rather robust. Hence, we model the deformations as linear combinations of a set of basis deformations obtained through principal component analysis (PCA). To extract the motion information hidden in the projection images, each projection image will be partitioned into small patches, and a learning method will automatically select the patches that predict the motion well. A Riemannian metric will be learned in the projection space for each deformation coefficient. Given a new projection image, its PCA coefficients will be interpolated from the training deformation coefficients by kernel regression. The team at UTD will work closely with the collaborators at UTSW. UTSW will provide clinical data, suggest what prior information is available (for example, CT images and breathing signals), and evaluate the results from a physician's point of view.
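The PCA-plus-kernel-regression pipeline above can be sketched as follows. This is a simplified illustration with synthetic arrays standing in for deformation vector fields and projection-patch features; the patch selection and learned Riemannian metric are replaced by a plain Gaussian kernel on a Euclidean distance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_vox, n_feat, n_modes = 50, 300, 40, 3

# Hypothetical training data from the pre-treatment day: flattened
# deformation vector fields D and projection-image feature vectors X.
D = rng.standard_normal((n_train, n_vox))
X = rng.standard_normal((n_train, n_feat))

# PCA of the deformations: mean field plus a few principal modes.
d_mean = D.mean(axis=0)
U, s, Vt = np.linalg.svd(D - d_mean, full_matrices=False)
basis = Vt[:n_modes]                 # principal deformation modes
coeffs = (D - d_mean) @ basis.T      # training PCA coefficients

def predict_deformation(x_new, bandwidth=1.0):
    """Nadaraya-Watson kernel regression of the PCA coefficients,
    then reconstruction as mean field + linear combination of modes."""
    d2 = ((X - x_new) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    c = w @ coeffs                   # interpolated PCA coefficients
    return d_mean + c @ basis

d_hat = predict_deformation(X[0], bandwidth=5.0)
print(d_hat.shape)  # (300,)
```

In the proposed method, the Euclidean distance in `d2` would be replaced by the learned Riemannian metric, computed separately for each deformation coefficient.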
The Multi-Sensor Integration Group in the Air and Missile Defense Sector at The Johns Hopkins University Applied Physics Laboratory
(JHU-APL) develops algorithms to track a moving target using data collected by several
different sensors, including Doppler radar systems and sensors onboard the missile that will intercept the target.
Because Doppler radar measures partial position and velocity information at
a sequence of discrete times, the tracking problem is usually posed in the six-dimensional phase space
of positions and velocities. Because of uncertainties in the recorded data, tracking involves computation of the
probability density function (pdf) of the 6-D state vector. Assuming the pdf is a multivariate normal
distribution, it suffices to compute the time evolution of the mean state vector and the covariance matrix,
which can be computed in real time using Kalman filtering to assimilate uncertain measured data
into a physics-based system of ODEs that models the motion of the target.
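A minimal sketch of one Kalman-filter predict/update cycle for the 6-D state described above. The constant-velocity dynamics, noise covariances, and a position-only measurement model are illustrative choices, not the classified sensor models; a Doppler radar would contribute a different (partial position/velocity) measurement matrix.

```python
import numpy as np

dt = 0.1
I3 = np.eye(3)
# Constant-velocity motion model in the 6-D phase space [position; velocity].
F = np.block([[I3, dt * I3], [np.zeros((3, 3)), I3]])
H = np.hstack([I3, np.zeros((3, 3))])  # illustrative: measure position only
Q = 1e-3 * np.eye(6)                   # process noise covariance
R = 1e-2 * np.eye(3)                   # measurement noise covariance

def kalman_step(x, P, z):
    """One predict/update cycle for the mean state x and covariance P."""
    # Predict through the linear dynamics.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: assimilate the uncertain measurement z.
    S = H @ P_pred @ H.T + R           # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return x_new, P_new

x, P = np.zeros(6), np.eye(6)
x, P = kalman_step(x, P, np.array([1.0, 0.0, 0.0]))
print(x[:3])  # estimated position, pulled toward the measurement
```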
In practice, one can imagine a scenario in which two Doppler radar systems and an interceptor are each separately tracking several closely-spaced potential targets. The problem is then to determine which targets are in common to all three sensors and to match tracks obtained from one sensor with those obtained from the others. The goal of this EDT project is to develop and evaluate methods to solve this matching or association problem in the context of specific unclassified applications of interest to the external partner at JHU-APL.
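For two sensors, the track-to-track association problem can be cast as a linear assignment problem: build a cost matrix of distances between the sensors' state estimates and find the minimum-cost matching. The sketch below uses synthetic tracks and `scipy.optimize.linear_sum_assignment` (the Hungarian algorithm); a realistic cost would use a statistical distance that accounts for the track covariances, and the three-sensor case is a harder multidimensional assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(2)

# Hypothetical 6-D track estimates from two sensors for 4 common targets:
# sensor B sees a permuted, noisy version of sensor A's tracks.
tracks_a = rng.standard_normal((4, 6))
perm = np.array([2, 0, 3, 1])
tracks_b = tracks_a[perm] + 0.05 * rng.standard_normal((4, 6))

# Cost = pairwise distance between state estimates; minimize the total cost.
cost = cdist(tracks_a, tracks_b)
row, col = linear_sum_assignment(cost)

# col[i] is the sensor-B track matched to sensor-A track i.
print(col)
```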
The project will provide EDT student trainees with opportunities to work on problems that combine applied probability, statistical models, Bayesian analysis, combinatorial optimization, scientific computation, and uncertainty quantification, and to learn about applications of the mathematical sciences in the defense sector.
The overall goals of the proposed project are to develop new methods for evaluating the uncertainty in modelling and predicting insurance risks due to natural non-catastrophic hazards; to integrate multi-source information on climate, weather, and insurance claim dynamics; and to develop peril maps visualizing the areas most vulnerable to natural disasters. The core idea is to evaluate the utility of, and develop new algorithms for, uncertainty quantification using a flexible framework of state-of-the-art machine learning techniques such as Deep Belief Nets (DBNs) and Copula Bayesian Networks (CBNs), whose potential is yet untapped in actuarial applications. Due to data restrictions, we will start from agricultural insurance, but the proposed methodology can then be adapted to the analysis of weather-related roof damage and other residential claims. The outcome of the project will allow stakeholders and decision makers to develop more efficient climate adaptation, abatement, and mitigation strategies, while accounting for short- and long-term risks of natural hazards.
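As a toy illustration of copula-based dependence modelling (not the proposed CBN/DBN models): a Gaussian copula can couple two hypothetical marginals, say rainfall and claim severity, so that dependent peril scenarios can be sampled. All distributions and parameters here are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
rho = 0.6
cov = np.array([[1.0, rho], [rho, 1.0]])

# Gaussian copula sampling: correlated normals -> uniforms -> inverse
# CDFs of the (hypothetical) marginal distributions.
z = rng.multivariate_normal(np.zeros(2), cov, size=10_000)
u = stats.norm.cdf(z)
rain = stats.gamma(a=2.0, scale=10.0).ppf(u[:, 0])     # e.g. rainfall
claims = stats.lognorm(s=1.0, scale=5.0).ppf(u[:, 1])  # e.g. claim severity

print(np.corrcoef(rain, claims)[0, 1])  # positive dependence induced by rho
```

A Copula Bayesian Network generalizes this idea by attaching copulas to the edges of a directed graphical model, which is what makes it attractive for multi-source climate and claims data.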