{"componentChunkName":"component---src-templates-page-template-js","path":"/pages/publication/","result":{"data":{"markdownRemark":{"id":"4ac83c79-f7a7-51c5-8740-9b89fc562a17","html":"<h2 id=\"publications\" style=\"position:relative;\"><a href=\"#publications\" aria-label=\"publications permalink\" class=\"anchor before\"><svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>Publications</h2>\n<h3 id=\"2025\" style=\"position:relative;\"><a href=\"#2025\" aria-label=\"2025 permalink\" class=\"anchor before\"><svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>2025</h3>\n<ul>\n<li>Killeen, B. D.*, <strong>Wan, B.*</strong>, Kulkarni, A. V., Drenkow, N., Oberst, M., Yi, P. H., &#x26; Unberath, M. (2025). <a href=\"https://arxiv.org/abs/2502.09688\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Towards Virtual Clinical Trials of Radiology AI with Conditional Generative Modeling</a>. arXiv preprint arXiv:2502.09688. <a href=\"/posts/Towards-Virtual-Clinical-Trials-of-Radiology-AI-with-Conditional-Generative-Modeling/\">[post]</a></li>\n<li><strong>Wan, B.</strong>, McNutt, T., Quon, H., &#x26; Lee, J. (2025, April). <a href=\"https://doi.org/10.1117/12.3046796\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Deep learning xerostomia prediction model with anatomy normalization and high-resolution class activation map</a>. In Medical Imaging 2025: Image-Guided Procedures, Robotic Interventions, and Modeling (Vol. 13408, pp. 275-279). SPIE. <a href=\"/posts/Deep-learning-xerostomia-prediction-model-with-anatomy-normalization-and-high-resolution-class-activation-map/\">[post]</a></li>\n<li>Gong, Z., <strong>Wan, B.*</strong>, Paranjape, J. N., Sikder, S., Patel, V. M., &#x26; Vedula, S. S. (2025). <a href=\"https://doi.org/10.1007/s11548-025-03406-0\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Evaluating the generalizability of video-based assessment of intraoperative surgical skill in capsulorhexis</a>. International Journal of Computer Assisted Radiology and Surgery, 1-9.</li>\n<li>Wan, B., McNutt, T. R., Quon, H., &#x26; Lee, J. (2025, July). <a href=\"https://aapm.confex.com/aapm/2025am/meetingapp.cgi/Paper/16769\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Explainable Xerostomia Prediction with Decoupled High Resolution Class Activation Map</a>. In AAPM 67th Annual Meeting &#x26; Exhibition. 
AAPM.</li>\n</ul>\n<h3 id=\"2024\" style=\"position:relative;\"><a href=\"#2024\" aria-label=\"2024 permalink\" class=\"anchor before\"><svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>2024</h3>\n<ul>\n<li><strong>Wan, B.</strong>, Peven, M., Hager, G., Sikder, S., &#x26; Vedula, S. S. (2024). <a href=\"https://www.nature.com/articles/s41598-024-77176-1\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Spatial-temporal attention for video-based assessment of intraoperative surgical skill. Scientific reports</a>, 14(1), 26912. <a href=\"/posts/Spatial-temporal-attention-for-video-based-assessment-of-intraoperative-surgical-skill/\">[post]</a></li>\n<li><strong>Wan, B.</strong>, McNutt, T., Ger, R., Quon, H., &#x26; Lee, J. (2024, April). <a href=\"doi.org/10.1117/12.3004498\">Deep learning prediction of radiation-induced xerostomia with supervised contrastive pre-training and cluster-guided loss</a>. In Medical Imaging 2024: Computer-Aided Diagnosis (Vol. 12927, pp. 286-291). SPIE. <a href=\"/posts/Deep-learning-prediction-of-radiation-induced-xerostomia-with-supervised-contrastive-pre-training-and-cluster-guided-loss/\">[post]</a></li>\n</ul>\n<h3 id=\"2022\" style=\"position:relative;\"><a href=\"#2022\" aria-label=\"2022 permalink\" class=\"anchor before\"><svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>2022</h3>\n<ul>\n<li><strong>Wan, B.</strong>, Caffo, B., &#x26; Vedula, S. S. (2022). <a href=\"https://doi.org/10.3389/frai.2022.872720\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">A unified framework on generalizability of clinical prediction models</a>. Frontiers in Artificial Intelligence, 5, 872720.</li>\n</ul>\n<h3 id=\"2021\" style=\"position:relative;\"><a href=\"#2021\" aria-label=\"2021 permalink\" class=\"anchor before\"><svg aria-hidden=\"true\" focusable=\"false\" height=\"16\" version=\"1.1\" viewBox=\"0 0 16 16\" width=\"16\"><path fill-rule=\"evenodd\" d=\"M4 9h1v1H4c-1.5 0-3-1.69-3-3.5S2.55 3 4 3h4c1.45 0 3 1.69 3 3.5 0 1.41-.91 2.72-2 3.25V8.59c.58-.45 1-1.27 1-2.09C10 5.22 8.98 4 8 4H4c-.98 0-2 1.22-2 2.5S3 9 4 9zm9-3h-1v1h1c1 0 2 1.22 2 2.5S13.98 12 13 12H9c-.98 0-2-1.22-2-2.5 0-.83.42-1.64 1-2.09V6.25c-1.09.53-2 1.84-2 3.25C6 11.31 7.55 13 9 13h4c1.45 0 3-1.69 3-3.5S14.5 6 13 6z\"></path></svg></a>2021</h3>\n<ul>\n<li>Zhu, L.*, <strong>Wan, B.*</strong>, Li, C., Tian, G., Hou, Y., &#x26; Yuan, K. <a href=\"https://doi.org/10.1016/j.patcog.2021.107920\" target=\"_blank\" rel=\"nofollow noopener noreferrer\">Dyadic relational graph convolutional networks for skeleton-based human interaction recognition</a> (Pattern Recognition, 2021). 
<a href=\"/posts/dyadic-relational-graph-convolutional-networks-for-skeleton-based-human-interaction-recognition/\">[post]</a></li>\n</ul>","frontmatter":{"title":"Publication","date":null,"description":null,"socialImage":null}}},"pageContext":{"slug":"/pages/publication/"}},"staticQueryHashes":["251939775","401334301","41472230"]}