{"componentChunkName":"component---src-templates-dev-projects-template-js","path":"/research","result":{"data":{"allMarkdownRemark":{"edges":[{"node":{"fields":{"slug":"/posts/Towards-Virtual-Clinical-Trials-of-Radiology-AI-with-Conditional-Generative-Modeling","categorySlug":"/category/medical-ai/"},"frontmatter":{"title":"Towards Virtual Clinical Trials of Radiology AI with Conditional Generative Modeling","date":"2024-06-01T14:13:40.121Z","category":"Medical AI","description":"<p>We propose a framework for conducting virtual clinical trials of radiology AI systems using conditional generative models to synthesize realistic medical imaging scenarios for comprehensive AI evaluation.</p> <p style=\"font-style: italic;\">B. D. Killeen*, <span style=\"font-weight: bold\">Bohua Wan*</span>, A. V. Kulkarni, N. Drenkow, M. Oberst, P. H. Yi, M. Unberath</p> <p style=\"font-style: italic;\">arXiv preprint arXiv:2502.09688 (2025).</p>","github":"","paper":"https://arxiv.org/abs/2502.09688","socialImage":{"publicURL":"/static/1c953bf94b26f59fc6ee0fc03d2b21f8/architecture.png","childImageSharp":{"fluid":{"base64":"data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAABQAAAALCAIAAADwazoUAAAACXBIWXMAAAsSAAALEgHS3X78AAACqklEQVQozwGfAmD9AFxcXFRUVVVVVmFiYltybVFoZFJpZFpzblp0b114c1RrZ01hXVVqZkxhXVx3c156dGBuaGdgWVxXUWdgWgDHx8axsbC8vLfFxsW/7+as2s+u2s+86+K56d+dwbecxr2r3NKhycI+OjdnZl+dwrzG39XUwrTGuKrby7wAr6/JgYDatLTTqqutrNLFo7Gie5WNosrBl7WtaHZtprCo0erntcTAhoF+YzIma21lp722wrOqsbO3ycC5ALCwyp6e1L28z8PGyMHv4p64sX+mo7nr4Lrr4ajSysXSwq28wK+7tsDn3bHX0q3Y0Mjj2Mq+tLu3s8zDuwAjIyErKyUpKSUkJCQgKCcmMS8pMi8hKighKykkLy0fLSsiMC0iLi0iKyokLSwiLy0hKCYpJSIsKSMpJiIAkJGQhYWFhoaHlpiYjrKrfJ6Yep+ZjrKrmY6Fl4p/lIl/jYF3jYN3hIJxk4x+mYmCppmOnZKJkoZ/n5OLAMXEyaqqusHBwb2/vrnh2rLZ0bjVzr3Tzby3pl9dS5yQg/Li0nOCZikpHVJiQYKic8e+scq/sbW3nc7IswCurdeCgei1tNq5vb+rxL+50MnQ0MPDs6m6taNAUS44li9nxWA9pTI/gjJBOh9MYj3HvrHJwLCixZDHza4Ag4SVgYGVnZyboaOihqehhqagnZ2SpZiNopiNiYp0bJZhX4ZZa5Fai6V7gZhxiJJ2taWcp52SqJmSrZ+WADk5OSsrKAEBAQUFBTE9OiYuLAIBAQcGBjk0MSsmJgoCCA0ICgoFCAcBBgkECAcFBgICAgQDAwMDAgECAQCyscV7e4YAAAAICgqRtKxngHoAAAAJCAiZjIFtZV0AAAABAQEAAAAAAQAAAQAAAAABAQEAAAAAAAABAQGrnjqxDRtQtgAAAABJRU5ErkJggg==","aspectRatio":1.7730496453900708,"src":"/static/1c953bf94b26f59fc6ee0fc03d2b21f8/5e370/architecture.png","srcSet":"/static/1c953bf94b26f59fc6ee0fc03d2b21f8/f26e3/architecture.png 750w,\n/static/1c953bf94b26f59fc6ee0fc03d2b21f8/8d364/architecture.png 1500w,\n/static/1c953bf94b26f59fc6ee0fc03d2b21f8/5e370/architecture.png 3000w,\n/static/1c953bf94b26f59fc6ee0fc03d2b21f8/b7a32/architecture.png 4299w","sizes":"(max-width: 3000px) 100vw, 3000px","maxHeight":1690,"maxWidth":3000}}}}}},{"node":{"fields":{"slug":"/posts/Deep-learning-xerostomia-prediction-model-with-anatomy-normalization-and-high-resolution-class-activation-map","categorySlug":"/category/medical-ai/"},"frontmatter":{"title":"Deep learning xerostomia prediction model with anatomy normalization and high-resolution class activation map","date":"2024-02-01T14:13:40.121Z","category":"Medical AI","description":"<p>We develop an interpretable deep learning model for xerostomia prediction using anatomy normalization and high-resolution class activation maps for improved spatial interpretability.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, T. McNutt, H. Quon, J. Lee</p> <p style=\"font-style: italic;\">Proc. 
SPIE Medical Imaging 2025 (2025).</p>","github":"","paper":"https://doi.org/10.1117/12.3046796","socialImage":{"publicURL":"/static/c4f57f9af7708858fec4ab36916dc0f9/fig1.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAAIABQDASIAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAAAAIF/8QAFAEBAAAAAAAAAAAAAAAAAAAAAP/aAAwDAQACEAMQAAABzwSD/8QAGRABAAIDAAAAAAAAAAAAAAAAAgEDERIz/9oACAEBAAEFAnyUHFg2X//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8BP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQIBAT8BP//EABgQAAMBAQAAAAAAAAAAAAAAAAACEQGB/9oACAEBAAY/AuiYssKsh//EABsQAAICAwEAAAAAAAAAAAAAAAABEUEhMXHB/9oACAEBAAE/IbUr+jmwCxwbCpBWf//aAAwDAQACAAMAAAAQg8//xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAEDAQE/ED//xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAECAQE/ED//xAAcEAEBAAICAwAAAAAAAAAAAAABEQAhMXFBUZH/2gAIAQEAAT8QI6MKrzfI7cVxRtCl8vd+5qQAkG56z//Z","aspectRatio":2.4976076555023923,"src":"/static/c4f57f9af7708858fec4ab36916dc0f9/b417d/fig1.jpg","srcSet":"/static/c4f57f9af7708858fec4ab36916dc0f9/b417d/fig1.jpg 522w","sizes":"(max-width: 522px) 100vw, 522px","maxHeight":209,"maxWidth":522}}}}}},{"node":{"fields":{"slug":"/posts/Deep-learning-prediction-of-radiation-induced-xerostomia-with-supervised-contrastive-pre-training-and-cluster-guided-loss","categorySlug":"/category/medical-ai/"},"frontmatter":{"title":"Deep learning prediction of radiation-induced xerostomia with supervised contrastive pre-training and cluster-guided loss","date":"2024-01-01T14:13:40.121Z","category":"Medical AI","description":"<p>We propose a novel deep learning framework for predicting radiation-induced xerostomia using supervised contrastive pre-training and cluster-guided loss.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, T. McNutt, R. Ger, H. Quon, J. Lee</p> <p style=\"font-style: italic;\">Proc. 
SPIE Medical Imaging 2024 (2024).</p>","github":"","paper":"https://doi.org/10.1117/12.3004498","socialImage":{"publicURL":"/static/d64185c384f0737dd62cd896b484ca93/architecture.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAARABQDASIAAhEBAxEB/8QAFwABAQEBAAAAAAAAAAAAAAAAAAEEBf/EABQBAQAAAAAAAAAAAAAAAAAAAAD/2gAMAwEAAhADEAAAAepogKIAD//EABgQAAMBAQAAAAAAAAAAAAAAAAECEAMh/9oACAEBAAEFAlRhrew3/8QAFBEBAAAAAAAAAAAAAAAAAAAAIP/aAAgBAwEBPwEf/8QAFBEBAAAAAAAAAAAAAAAAAAAAIP/aAAgBAgEBPwEf/8QAGBAAAgMAAAAAAAAAAAAAAAAAAQIhMEH/2gAIAQEABj8CZy8HKf/EAB0QAAIBBAMAAAAAAAAAAAAAAAABIRARMUFRgcH/2gAIAQEAAT8hbxfSRd80kYnZ6GLB/9oADAMBAAIAAwAAABBwDwD/xAAUEQEAAAAAAAAAAAAAAAAAAAAg/9oACAEDAQE/EB//xAAUEQEAAAAAAAAAAAAAAAAAAAAg/9oACAECAQE/EB//xAAcEAEAAgIDAQAAAAAAAAAAAAABABEhMUFRcWH/2gAIAQEAAT8QbxmK4AaA44zENvXUt7iJaSlr1iBQhWFldd/JwjSa/J//2Q==","aspectRatio":1.1645962732919255,"src":"/static/d64185c384f0737dd62cd896b484ca93/9f35d/architecture.jpg","srcSet":"/static/d64185c384f0737dd62cd896b484ca93/faa31/architecture.jpg 750w,\n/static/d64185c384f0737dd62cd896b484ca93/9f35d/architecture.jpg 1347w","sizes":"(max-width: 1347px) 100vw, 1347px","maxHeight":1157,"maxWidth":1347}}}}}},{"node":{"fields":{"slug":"/posts/Spatial-temporal-attention-for-video-based-assessment-of-intraoperative-surgical-skill","categorySlug":"/category/surgical-ai/"},"frontmatter":{"title":"Spatial-temporal attention for video-based assessment of intraoperative surgical skill","date":"2023-06-01T14:13:40.121Z","category":"Surgical AI","description":"<p>We propose a spatial-temporal attention mechanism for automated surgical skill assessment from intraoperative videos, enabling objective evaluation of surgical performance.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, M. Peven, G. Hager, S. Sikder, S. S. 
Vedula</p> <p style=\"font-style: italic;\">Scientific Reports (2024).</p>","github":"","paper":"https://doi.org/10.1038/s41598-024-77176-1","socialImage":{"publicURL":"/static/3b5003decd88871595c9c6fa4f2d2e75/fig1.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAAHABQDASIAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAAAAED/8QAFgEBAQEAAAAAAAAAAAAAAAAAAQAD/9oADAMBAAIQAxAAAAHWmiEf/8QAFxABAAMAAAAAAAAAAAAAAAAAAAERQf/aAAgBAQABBQK0t//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8BP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQIBAT8BP//EABcQAQADAAAAAAAAAAAAAAAAAAEAEDH/2gAIAQEABj8CdjX/xAAZEAEAAwEBAAAAAAAAAAAAAAABABExQSH/2gAIAQEAAT8hRXrBEdPIpbdn/9oADAMBAAIAAwAAABCAP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8QP//EABURAQEAAAAAAAAAAAAAAAAAAAAR/9oACAECAQE/EIj/xAAbEAABBAMAAAAAAAAAAAAAAAABABFRgSEx8P/aAAgBAQABPxCuUG50MZIYYUERtSX/2Q==","aspectRatio":3.0737704918032787,"src":"/static/3b5003decd88871595c9c6fa4f2d2e75/6d402/fig1.jpg","srcSet":"/static/3b5003decd88871595c9c6fa4f2d2e75/faa31/fig1.jpg 750w,\n/static/3b5003decd88871595c9c6fa4f2d2e75/d79bd/fig1.jpg 1500w,\n/static/3b5003decd88871595c9c6fa4f2d2e75/6d402/fig1.jpg 1770w","sizes":"(max-width: 1770px) 100vw, 1770px","maxHeight":576,"maxWidth":1770}}}}}},{"node":{"fields":{"slug":"/posts/combining-adda-with-deep-coral-unsupervised-domain-adaptation-for-image-classification","categorySlug":"/category/domain-adaptation/"},"frontmatter":{"title":"Combining ADDA with Deep CORAL: Unsupervised Domain Adaptation for Image Classification","date":"2021-05-23T14:13:40.121Z","category":"Domain Adaptation","description":"<p>We combine Adversarial Discriminative Domain Adaptation (ADDA) with Deep CORAL to help ADDA better utilize the pretrained initialization. Vanilla ADDA diverges drastically from the initialization, yielding much poorer results in early epochs compared to the initialization, and requires sophisticated fine-tuning to give satisfying results. With our modifications, ADDA-CORAL trains much faster and yields better results.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, Cong Mu, Ruzhang Zhao, Zhuoying Li (ordered alphabetically)</p>","github":null,"paper":null,"socialImage":{"publicURL":"/static/d0e03a59c26b7452974a9f8fabd18d99/demo.gif","childImageSharp":null}}}},{"node":{"fields":{"slug":"/posts/dyadic-relational-graph-convolutional-networks-for-skeleton-based-human-interaction-recognition","categorySlug":"/category/action-recognition/"},"frontmatter":{"title":"Dyadic Relational Graph Convolutional Networks for Skeleton-based Human Interaction Recognition","date":"2021-02-19T14:13:40.121Z","category":"Action Recognition","description":"<p>We apply Graph Convolutional Networks to skeleton-based human-human interaction recognition. 
We design a Relational Adjacency Matrix (RAM) to represent dynamic relational graphs over the two actors' skeletons.</p> <p style=\"font-style: italic;\">Liping Zhu*, <span style=\"font-weight: bold\">Bohua Wan*</span>, Chengyang Li, Gangyi Tian, Yi Hou, Kun Yuan</p> <p style=\"font-style: italic;\">Pattern Recognition 115 (2021): 107920.</p>","github":"https://github.com/GlenGGG/DR-GCN","paper":"https://www.sciencedirect.com/science/article/pii/S0031320321001072","socialImage":{"publicURL":"/static/99d2a147d44057e4dd6664a84a9d8f20/structure.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAAFABQDASIAAhEBAxEB/8QAFwABAAMAAAAAAAAAAAAAAAAAAAECBf/EABUBAQEAAAAAAAAAAAAAAAAAAAAB/9oADAMBAAIQAxAAAAHasECP/8QAFhAAAwAAAAAAAAAAAAAAAAAAABAR/9oACAEBAAEFAlT/xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAEDAQE/AT//xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAECAQE/AT//xAAUEAEAAAAAAAAAAAAAAAAAAAAQ/9oACAEBAAY/An//xAAXEAADAQAAAAAAAAAAAAAAAAAAAREh/9oACAEBAAE/IbppabR//9oADAMBAAIAAwAAABBwD//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8QP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQIBAT8QP//EABoQAQADAAMAAAAAAAAAAAAAAAEAESExQVH/2gAIAQEAAT8QVo5e2FrFEPSNCGM//9k=","aspectRatio":3.8461538461538463,"src":"/static/99d2a147d44057e4dd6664a84a9d8f20/3696c/structure.jpg","srcSet":"/static/99d2a147d44057e4dd6664a84a9d8f20/faa31/structure.jpg 750w,\n/static/99d2a147d44057e4dd6664a84a9d8f20/d79bd/structure.jpg 1500w,\n/static/99d2a147d44057e4dd6664a84a9d8f20/3696c/structure.jpg 2244w","sizes":"(max-width: 2244px) 100vw, 2244px","maxHeight":582,"maxWidth":2244}}}}}}]}},"pageContext":{"tag":"Research","currentPage":0,"postsLimit":8,"postsOffset":0,"prevPagePath":"/research","nextPagePath":"/research/page/1","hasPrevPage":false,"hasNextPage":false}},"staticQueryHashes":["251939775","401334301","41472230"]}