Project Suggestions 2020/21
___________________________

Here are some project suggestions for this year. Have a look at my home page http://www.cs.ucl.ac.uk/staff/d.alexander or my groups' websites http://mig.cs.ucl.ac.uk and http://pond.cs.ucl.ac.uk to get an idea of other work going on; I am happy to discuss projects in any of those areas too.

Image quality transfer for low-field MRI
========================================

This project extends my group's recent work on image quality transfer (Alexander NIMG 2017; Tanno NIMG 2020; Blumberg MICCAI 2018; Lin MLMIR 2019) to enhance images from ultra-low-field MRI systems. Low field is an emerging frontier in MRI, with various systems recently coming on the market that are uniquely compact and portable, addressing a key limitation of traditional MRI systems, which weigh several tons and are fixed in position. However, image quality is substantially lower on low-field systems than on fixed high-field systems. Image quality transfer uses machine learning to estimate, from a low-quality image (e.g. from a standard scanner or rapid acquisition protocol), the image we would have obtained had we used a high-quality scanner or a lengthy and expensive acquisition protocol instead. For this project we have early access to some data from low-field portable scanners, enabling preliminary progress on adapting image quality transfer to this new scenario.

With James Cole from CMIC

Machine-learning powered rapid microstructural imaging
======================================================

A variety of imaging techniques compete for time in modern clinical image-acquisition protocols. In MRI in particular, the maximum available time for a patient exam is around 30 minutes, and that time needs to accommodate several different types of image (standard structural images, diffusion MRI, functional MRI, etc.). Thus shortening acquisition times for individual components can have major benefits.
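The image-quality-transfer idea underpinning the projects above can be sketched as patch-wise regression from low-quality to high-quality data. Below is a minimal illustration on synthetic vectors using plain linear least squares; the published work uses random forests and deep networks, and all sizes, the degradation operator, and the noise level here are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: "high-quality" patches and their low-quality
# counterparts (here: smoothed + noisy). Real IQT uses matched image pairs.
n_patches, patch_len = 2000, 9                     # 9 = flattened 3x3 patch
hq = rng.normal(size=(n_patches, patch_len))
blur = np.eye(patch_len) * 0.5 + 0.5 / patch_len   # crude degradation operator
lq = hq @ blur + 0.05 * rng.normal(size=hq.shape)

# Split into train/test.
hq_tr, hq_te = hq[:1500], hq[1500:]
lq_tr, lq_te = lq[:1500], lq[1500:]

# Learn a linear mapping low-quality -> high-quality (least squares);
# published IQT replaces this with random-forest or CNN regressors.
W, *_ = np.linalg.lstsq(lq_tr, hq_tr, rcond=None)
pred = lq_te @ W

# The learned mapping should recover the high-quality patches better
# than using the low-quality patches directly.
err_pred = np.mean((pred - hq_te) ** 2)
err_lq = np.mean((lq_te - hq_te) ** 2)
print(err_pred < err_lq)   # expect True: enhancement reduces error
```

The same train-on-pairs, apply-to-new-images pattern carries over directly when the regressor is a neural network and the patches come from real scanner data.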
The aim in this project is to build on ideas in (Alexander et al NIMG 2017), using machine learning techniques like "image quality transfer" to estimate NODDI (a microstructure imaging technique using diffusion MRI) from sparse acquisition protocols. The project is in collaboration with the UCL Dementia Research Centre, where MRI data sets are routinely acquired from patients with Alzheimer's disease and other kinds of dementia. Successful realisation of high-quality NODDI maps from shortened acquisition protocols would have a substantial impact on these patients and on general understanding of the disease.

With David Thomas from the UCL DRC

Models of neurodegenerative disease progression
===============================================

This project implements various computational models of how pathology in neurodegenerative diseases, such as Alzheimer's disease, propagates over the brain. By comparison with large imaging data sets from Alzheimer's patients, these models can shed new light on the underlying mechanisms of how such diseases develop and spread. This project will take some early steps by implementing basic spreading models, such as network diffusion models, and evaluating them against MRI and/or PET data sets. For an early taster of the ideas, have a look at this recent publication on the topic: https://elifesciences.org/articles/49298.

Predicting MS Progression using Longitudinal Images
===================================================

Abstract: Current clinical trials in multiple sclerosis (MS) are based on longitudinal monitoring of biomarkers. MS progression modelling aims to provide an interpretable way of modelling the evolution of biomarkers according to an estimated history of the pathology. While there is little debate that longitudinal structural MRI is a critical biomarker for MS clinical trials and estimates of disease development, how to optimally extract measures of change from MRI scans remains an open question.
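As a concrete taster of the spreading models in the progression-modelling project above: a network diffusion model evolves a pathology vector x over a brain connectivity graph via dx/dt = -beta * L x, where L is the graph Laplacian. A minimal sketch on a toy network; the connectivity matrix, diffusivity, and seeding are all illustrative (real models use connectomes reconstructed from diffusion MRI tractography).

```python
import numpy as np

# Toy brain connectivity network: 4 regions, symmetric connection weights.
A = np.array([[0.0, 1.0, 0.5, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [0.5, 1.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
L = np.diag(A.sum(axis=1)) - A          # graph Laplacian

beta, dt, steps = 0.1, 0.01, 500        # diffusivity and step size (illustrative)
x = np.array([1.0, 0.0, 0.0, 0.0])      # pathology seeded in region 0

# Network diffusion: dx/dt = -beta * L x, integrated with forward Euler.
for _ in range(steps):
    x = x - dt * beta * (L @ x)

# Total pathology is conserved (Laplacian rows/columns sum to zero);
# the seed gradually spreads toward the connected regions.
print(np.round(x, 3))
```

Comparing simulated regional trajectories like x(t) against observed MRI/PET atrophy or uptake patterns is the basic evaluation loop such a project would implement.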
The straightforward approach of measuring lesion volume at multiple time points independently and then comparing the measurements longitudinally suffers from a relatively high coefficient of variation. In this project, we hypothesise that deep learning models can be trained to distinguish MRI changes due to biological factors, leading to novel biomarkers that are more sensitive to longitudinal change and can better track disease progression. Experience with deep learning and data-science programming languages is beneficial to the project.

With Le Zhang CMIC/ION.

Subtyping Similarity in Alzheimer's Disease
===========================================

Owing to the heterogeneity of Alzheimer's disease (AD), there has been a recent plethora of computational models that aim to discover subtypes of patients who will have a similar pattern of disease progression (see [Ferreira et al. 2020], [Khatami et al. 2019], and [Myszczynska et al. 2020] for recent reviews). The discovery of such subtypes has the potential to enable more targeted therapy and is key to precision medicine. One of the barriers to the clinical application of these subtypes is their validity, i.e. whether the subtypes identified by the multitude of computational models align. In this project, we investigate the similarity between computationally identified subtypes, determining their invariance across computational models (and thus their plausibility), drawing from the existing literature on comparing clusterings (e.g. [Meilă 2007]).

Key questions:
- Do additional data modalities drastically alter the identified subtypes?
- Do different computational models identify similar subtypes?
- What is the influence on subtypes, if any, of the sample size of the datasets used?
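The clustering-comparison literature cited above provides concrete measures of agreement between two subtype assignments; [Meilă 2007] introduces the variation of information. A minimal self-contained sketch, with hypothetical subtype labels for eight patients:

```python
import math
from collections import Counter

def variation_of_information(a, b):
    """Meila's variation of information between two clusterings.

    a, b: lists assigning each subject a cluster label. Near 0 means the
    clusterings are identical up to relabelling; larger means more disagreement.
    """
    n = len(a)
    pa = Counter(a)                     # cluster sizes in clustering a
    pb = Counter(b)                     # cluster sizes in clustering b
    pab = Counter(zip(a, b))            # joint contingency counts
    h = lambda counts: -sum(c / n * math.log(c / n) for c in counts.values())
    mutual = sum(c / n * math.log((c / n) / (pa[i] / n * pb[j] / n))
                 for (i, j), c in pab.items())
    return h(pa) + h(pb) - 2 * mutual

# Hypothetical subtype assignments for the same 8 patients:
s1 = [0, 0, 0, 0, 1, 1, 1, 1]
s2 = [1, 1, 1, 1, 0, 0, 0, 0]          # same partition, labels swapped
s3 = [0, 0, 1, 1, 0, 0, 1, 1]          # genuinely different partition

print(variation_of_information(s1, s2))  # ~0: identical up to relabelling
print(variation_of_information(s1, s3))  # > 0: the subtypes disagree
```

Because the measure depends only on the label partitions, it can compare subtypes produced by entirely different computational models, which is exactly the invariance question this project asks.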
Useful skills:
- Python: understanding and using existing codebases, with potential to implement non-public approaches
- Machine learning: disease progression models span a range of areas, providing scope to implement/apply various methods

References:
[Ferreira et al. 2020]: https://doi.org/10.1212/WNL.0000000000009058
[Khatami et al. 2019]: https://doi.org/10.3389/fmolb.2019.00158
[Myszczynska et al. 2020]: https://www.nature.com/articles/s41582-020-0377-8
[Meilă 2007]: https://doi.org/10.1016/j.jmva.2006.11.013

With Cameron Shand CMIC/POND

For some other POND-related projects see: http://neiloxtoby.com/work/student-projects/

Prostate lesion segmentation in histological images
===================================================

In clinical practice, multiple imaging modalities are regularly collected for better lesion investigation. Deep learning models have recently shown promising results for lesion identification in medical images, but are typically trained in a modality-specific setting. In this project, we explore the possibilities of either improving performance with additional modalities, or reducing the modalities required while maintaining performance. Specifically, this project will work on prostate cancer lesion segmentation with MRI and histology images.
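Whichever option below is chosen, segmentation performance is typically quantified with the Dice overlap score between predicted and reference lesion masks. A minimal sketch with synthetic 2D masks; the mask shapes and the 2-pixel shift are purely illustrative.

```python
import numpy as np

def dice(pred, ref):
    """Dice overlap between two binary masks (1 = perfect, 0 = disjoint)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Synthetic "lesion" masks: ground truth vs. a slightly shifted prediction.
ref = np.zeros((64, 64), dtype=bool)
ref[20:40, 20:40] = True                # 20x20 reference lesion
pred = np.zeros_like(ref)
pred[22:42, 20:40] = True               # prediction shifted 2 pixels down

print(round(dice(pred, ref), 3))        # 0.9: 18 of 20 rows overlap
```

The same score (averaged over cases) is what would be compared across the single-modality and multi-modality settings described below.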
We provide three potential options for students to choose from based on their interests and experience, all building on existing work:

1) Improve segmentation from unpaired MRI and histology images (see prior work [1]; code available; preferred experience: Python, TensorFlow and segmentation models (encoder-decoder based networks));

2) Segmentation with a reduced set of modalities via GAN-based domain adaptation translating MRI to histology (see prior work [2]; code available; preferred experience: Python, PyTorch and GANs);

3) Improve segmentation performance with a reduced set of modalities through unsupervised learning of correlated MRI-histology features (see prior work [3]; code NOT available, though help will be provided; preferred experience: Python, PyTorch and strong model-reimplementation ability).

With Chen Jin (1,2,3), Vanya Valindria (1), Eleni Chiou (2) and Thomy Mertzanidou (3)

[1] https://ieeexplore.ieee.org/abstract/document/8354170/?casa_token=i6Rjr0ppoucAAAAA:R5XqPBzzw2TPrhfYr0g43spxBvYLn7B-f5uEWGB8mQ6ids4fQA2jXFX66ZAJr__Z_Li5ckQDwKo
[2] https://arxiv.org/pdf/2010.07411
[3] https://arxiv.org/abs/2008.00119

Foveated super-resolution
=========================

This project adapts our recent work on foveation for segmentation to the context of Single Image Super-Resolution (SISR). SISR is a notoriously challenging ill-posed problem, which aims to obtain a high-resolution (HR) output from one of its low-resolution (LR) versions. Typically, this is achieved by training a convolutional neural network to learn a pixel-wise mapping between paired HR and LR patches cropped at a fixed size from full-size images. However, the most appropriate patch size can vary spatially, which impacts the performance of the trained SISR model. In this project we attach a foveation module, a learnable "dataloader" that predicts the appropriate patch size at each location and is trained jointly to optimise the downstream SISR task.
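To make the patch-pairing setup above concrete, here is a minimal sketch of extracting aligned HR/LR training patches whose size varies by location. The `paired_patch` helper, the 2x block-average downsampling, and the listed patch sizes are all hypothetical; in the actual project a learned foveation module predicts the size at each location.

```python
import numpy as np

rng = np.random.default_rng(1)
hr = rng.normal(size=(64, 64))                   # stand-in high-resolution image
lr = hr.reshape(32, 2, 32, 2).mean(axis=(1, 3))  # 2x block-average "low res"

def paired_patch(cy, cx, half):
    """Crop an HR patch and its aligned LR patch around centre (cy, cx).

    `half` is the HR half-width; a foveation module would *predict* this
    per location instead of fixing it (hypothetical interface).
    """
    hr_patch = hr[cy - half:cy + half, cx - half:cx + half]
    lr_patch = lr[(cy - half) // 2:(cy + half) // 2,
                  (cx - half) // 2:(cx + half) // 2]
    return hr_patch, lr_patch

# Fixed-size training crops use one `half` everywhere; foveation lets it vary:
for cy, cx, half in [(16, 16, 8), (32, 32, 4), (48, 48, 12)]:
    hp, lp = paired_patch(cy, cx, half)
    print(hp.shape, lp.shape)           # HR patch is 2x the LR patch each way
```

Training the SISR network on such variable-size pairs, with the size predictor optimised jointly against the downstream reconstruction loss, is the core of the project.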
The effectiveness of the foveation module has been demonstrated in the segmentation context (see our recent work https://arxiv.org/abs/2007.15124), where the task is likewise to learn a pixel-wise mapping from paired patches. Knowledge of deep learning and PyTorch coding experience are beneficial to the project.

With Chen Jin

Characterising Brain Microstructure with Multi-Dimensional MRI and Unsupervised Machine Learning
================================================================================================

Diffusion MRI and MRI relaxometry are powerful methods for understanding the structure and function of the human brain. These techniques provide information on tissue microstructure and composition, such as axon diameter, the direction of major nerve tracts, and myelination. New MRI techniques can measure the properties of diffusion and relaxometry in brain tissue simultaneously. However, the resulting large, multi-dimensional datasets require new analysis techniques. We have recently developed new data-driven unsupervised learning algorithms for this task. In this project, the student will use and further develop these techniques to analyse in-vivo brain MRI scans, and test their effectiveness in a controlled environment using simulations.

With Paddy Slator and Marco Palombo

Image quality transfer for brain connectivity mapping in epilepsy
=================================================================

This project also builds on image quality transfer (see project above), but this time in the context of epilepsy surgery. It will adapt the technique to enhance brain connectivity mapping for use in surgical planning for the removal of brain lesions that cause epilepsy. Brain connectivity mapping is an essential step in the surgery, as it identifies brain regions that surgeons should avoid so as not to leave lasting brain damage. IQT enables better identification of connection pathways and ultimately, we hope, better outcomes for neurosurgery patients.
With Sjoerd Vos, Matteo Figini, John Duncan (UCL ION)

Locating impacted canines on dental scans using machine learning
================================================================

This project aims to develop machine learning approaches to locating impacted maxillary canines using single dental x-ray scans. The majority of clinicians use parallax in scans from different angles to locate impacted maxillary canines. By its very nature, parallax requires that two radiographs are obtained: most frequently a dental panoramic (DPT), taken as part of a standard comprehensive orthodontic assessment, and a supplemental anterior occlusal radiograph. The majority of patients presenting with impacted canines are children, so there would be a clear benefit in reducing exposure to radiation where possible; in the case of impacted maxillary canines, accurately identifying their location from a DPT alone would achieve an important reduction.

There are currently no reports of the use of artificial intelligence to locate impacted canines. However, deep learning techniques have shown remarkable ability to pick up on subtle cues in complex data to make decisions and classifications that exceed human performance, e.g. in object recognition, speech recognition, 3D reconstruction, etc.

Clinical collaborators on this project will collect radiographic images from existing and discharged patients with impacted canines and pseudo-anonymise the data. The location of the impacted canines will be identified from the clinical notes and/or the use of parallax or available 3D scans (CBCT), and documented with the corresponding radiographs. The project will train deep neural networks, starting with standard architectures, to identify the location directly from single radiographs. We will evaluate performance initially by cross-validation and compare with human performance on the same task.
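The cross-validation protocol mentioned above can be sketched as follows. This is purely illustrative: `kfold_indices` is a hypothetical helper, the sample count is made up, and no actual model or radiograph data is involved.

```python
import numpy as np

def kfold_indices(n, k, seed=0):
    """Split n sample indices into k shuffled, roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

# Hypothetical setup: 100 radiographs, 5-fold cross-validation.
folds = kfold_indices(100, 5)
for i, test_fold in enumerate(folds):
    train = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Train the network on `train`, evaluate on `test_fold` (model omitted);
    # every radiograph is used for testing exactly once across the k rounds.
    assert len(train) + len(test_fold) == 100
print([len(f) for f in folds])
```

In practice the splits would be stratified by patient (so radiographs from one patient never appear in both train and test), and the fold-averaged accuracy is what gets compared against clinician performance.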
Subsequent evaluation on entirely unseen data will ultimately provide confirmation.

With Owais Sharif, Safoora Keshtgar and Samantha Hodges from the Eastman Dental School.

Image analysis and AI in Ophthalmology
======================================

Below are links to project suggestions from colleagues at Moorfields: some great projects with large and interesting data sets available, and with very direct clinical application.

https://liveuclac-my.sharepoint.com/:w:/g/personal/smgxadu_ucl_ac_uk/EUYgRsfUDTRJmI_3a8E7_fgB86k0upAEskokNQMDnRACIg?e=XdY7LF&wdLOR=cB62B0093-1FA1-E24D-ADC9-77FDDEEE7F15
Contact Adam Dubis (a dot dubis at ucl.ac.uk) for the list above.

https://docs.google.com/document/d/1NLsdLUOqjlTyd2NmpL7MpkL1a62km9w2YJbtVdPhQYM/edit?usp=sharing
Contact Nikolas Pontikos (n dot pontikos at ucl.ac.uk) for the list above.