{"componentChunkName":"component---src-templates-tag-template-js","path":"/tag/computer-vision","result":{"data":{"site":{"siteMetadata":{"title":"Bohua Wan's personal site","subtitle":"PhD student in computer science at Johns Hopkins University."}},"allMarkdownRemark":{"edges":[{"node":{"fields":{"slug":"/posts/Spatial-temporal-attention-for-video-based-assessment-of-intraoperative-surgical-skill","categorySlug":"/category/surgical-ai/"},"frontmatter":{"title":"Spatial-temporal attention for video-based assessment of intraoperative surgical skill","date":"2023-06-01T14:13:40.121Z","category":"Surgical AI","description":"<p>We propose a spatial-temporal attention mechanism for automated surgical skill assessment from intraoperative videos, enabling objective evaluation of surgical performance.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, M. Peven, G. Hager, S. Sikder, S. S. Vedula</p> <p style=\"font-style: italic;\">Scientific Reports 
(2024).</p>","github":"","paper":"https://doi.org/10.1038/s41598-024-77176-1","socialImage":{"publicURL":"/static/3b5003decd88871595c9c6fa4f2d2e75/fig1.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAAHABQDASIAAhEBAxEB/8QAFgABAQEAAAAAAAAAAAAAAAAAAAED/8QAFgEBAQEAAAAAAAAAAAAAAAAAAQAD/9oADAMBAAIQAxAAAAHWmiEf/8QAFxABAAMAAAAAAAAAAAAAAAAAAAERQf/aAAgBAQABBQK0t//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8BP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQIBAT8BP//EABcQAQADAAAAAAAAAAAAAAAAAAEAEDH/2gAIAQEABj8CdjX/xAAZEAEAAwEBAAAAAAAAAAAAAAABABExQSH/2gAIAQEAAT8hRXrBEdPIpbdn/9oADAMBAAIAAwAAABCAP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8QP//EABURAQEAAAAAAAAAAAAAAAAAAAAR/9oACAECAQE/EIj/xAAbEAABBAMAAAAAAAAAAAAAAAABABFRgSEx8P/aAAgBAQABPxCuUG50MZIYYUERtSX/2Q==","aspectRatio":3.0737704918032787,"src":"/static/3b5003decd88871595c9c6fa4f2d2e75/6d402/fig1.jpg","srcSet":"/static/3b5003decd88871595c9c6fa4f2d2e75/faa31/fig1.jpg 750w,\n/static/3b5003decd88871595c9c6fa4f2d2e75/d79bd/fig1.jpg 1500w,\n/static/3b5003decd88871595c9c6fa4f2d2e75/6d402/fig1.jpg 1770w","sizes":"(max-width: 1770px) 100vw, 1770px","maxHeight":576,"maxWidth":1770}}}}}},{"node":{"fields":{"slug":"/posts/combining-adda-with-deep-coral-unsupervised-domain-adaptation-for-image-classification","categorySlug":"/category/domain-adaptation/"},"frontmatter":{"title":"Combining ADDA with Deep CORAL: Unsupervised Domain Adaptation for Image Classification","date":"2021-05-23T14:13:40.121Z","category":"Domain Adaptation","description":"<p>We combine Adversarial Discriminative Domain Adaptation (ADDA) with Deep CORAL so that ADDA can better utilize its pretrained initialization. Vanilla ADDA diverges drastically from the initialization, yielding much poorer results in early epochs than the initialization itself. 
Sophisticated fine-tuning is required for ADDA to give satisfactory results. With our modifications, ADDA-CORAL trains much faster and yields better results.</p> <p style=\"font-style: italic;\"><span style=\"font-weight: bold\">Bohua Wan</span>, Cong Mu, Ruzhang Zhao, Zhuoying Li (ordered alphabetically)</p>","github":null,"paper":null,"socialImage":{"publicURL":"/static/d0e03a59c26b7452974a9f8fabd18d99/demo.gif","childImageSharp":null}}}},{"node":{"fields":{"slug":"/posts/dyadic-relational-graph-convolutional-networks-for-skeleton-based-human-interaction-recognition","categorySlug":"/category/action-recognition/"},"frontmatter":{"title":"Dyadic Relational Graph Convolutional Networks for Skeleton-based Human Interaction Recognition","date":"2021-02-19T14:13:40.121Z","category":"Action Recognition","description":"<p>We apply Graph Convolutional Networks to skeleton-based human-human interaction recognition. We design a Relational Adjacency Matrix (RAM) to represent dynamic relational graphs over the two actors' skeletons.</p> <p style=\"font-style: italic;\">Liping Zhu*, <span style=\"font-weight: bold\">Bohua Wan*</span>, Chengyang Li, Gangyi Tian, Yi Hou, Kun Yuan</p> <p style=\"font-style: italic;\">Pattern Recognition 115 (2021): 
107920.</p>","github":"https://github.com/GlenGGG/DR-GCN","paper":"https://www.sciencedirect.com/science/article/pii/S0031320321001072","socialImage":{"publicURL":"/static/99d2a147d44057e4dd6664a84a9d8f20/structure.jpg","childImageSharp":{"fluid":{"base64":"data:image/jpeg;base64,/9j/2wBDABALDA4MChAODQ4SERATGCgaGBYWGDEjJR0oOjM9PDkzODdASFxOQERXRTc4UG1RV19iZ2hnPk1xeXBkeFxlZ2P/2wBDARESEhgVGC8aGi9jQjhCY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2NjY2P/wgARCAAFABQDASIAAhEBAxEB/8QAFwABAAMAAAAAAAAAAAAAAAAAAAECBf/EABUBAQEAAAAAAAAAAAAAAAAAAAAB/9oADAMBAAIQAxAAAAHasECP/8QAFhAAAwAAAAAAAAAAAAAAAAAAABAR/9oACAEBAAEFAlT/xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAEDAQE/AT//xAAUEQEAAAAAAAAAAAAAAAAAAAAQ/9oACAECAQE/AT//xAAUEAEAAAAAAAAAAAAAAAAAAAAQ/9oACAEBAAY/An//xAAXEAADAQAAAAAAAAAAAAAAAAAAAREh/9oACAEBAAE/IbppabR//9oADAMBAAIAAwAAABBwD//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQMBAT8QP//EABQRAQAAAAAAAAAAAAAAAAAAABD/2gAIAQIBAT8QP//EABoQAQADAAMAAAAAAAAAAAAAAAEAESExQVH/2gAIAQEAAT8QVo5e2FrFEPSNCGM//9k=","aspectRatio":3.8461538461538463,"src":"/static/99d2a147d44057e4dd6664a84a9d8f20/3696c/structure.jpg","srcSet":"/static/99d2a147d44057e4dd6664a84a9d8f20/faa31/structure.jpg 750w,\n/static/99d2a147d44057e4dd6664a84a9d8f20/d79bd/structure.jpg 1500w,\n/static/99d2a147d44057e4dd6664a84a9d8f20/3696c/structure.jpg 2244w","sizes":"(max-width: 2244px) 100vw, 2244px","maxHeight":582,"maxWidth":2244}}}}}}]}},"pageContext":{"tag":"Computer Vision","currentPage":0,"postsLimit":8,"postsOffset":0,"prevPagePath":"/tag/computer-vision","nextPagePath":"/tag/computer-vision/page/1","hasPrevPage":false,"hasNextPage":false}},"staticQueryHashes":["251939775","401334301","41472230"]}