<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>MacsLAB</title><link>https://jbnu.macs.or.kr/en/</link><atom:link href="https://jbnu.macs.or.kr/en/index.xml" rel="self" type="application/rss+xml"/><description>MacsLAB</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 25 Mar 2024 00:00:00 +0000</lastBuildDate><image><url>https://jbnu.macs.or.kr/media/icon_hu006eacee63deb1d8999057a3cbfdb748_78083_512x512_fill_lanczos_center_3.png</url><title>MacsLAB</title><link>https://jbnu.macs.or.kr/en/</link></image><item><title>Graduate Student Recruitment (Fall 2026)</title><link>https://jbnu.macs.or.kr/en/notification/26-04-25-%EC%84%9D%EB%B0%95%EC%82%AC%ED%95%99%EC%83%9D%EB%AA%A8%EC%A7%91-2%ED%95%99%EA%B8%B0/</link><pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/notification/26-04-25-%EC%84%9D%EB%B0%95%EC%82%AC%ED%95%99%EC%83%9D%EB%AA%A8%EC%A7%91-2%ED%95%99%EA%B8%B0/</guid><description>&lt;p>The MACS Lab at Jeonbuk National University is looking for &lt;strong>Master&amp;rsquo;s and PhD (Integrated) students&lt;/strong> for the Fall 2026 semester.&lt;/p>
&lt;p>We welcome applications from those who wish to conduct research ranging from AI theory to practical applications and development.&lt;/p>
&lt;h2 id="research-areas">Research Areas&lt;/h2>
&lt;ul>
&lt;li>Medical AI (Medical Image Analysis, Diagnosis Support, Multi-modal Medical Data)&lt;/li>
&lt;li>Vision &amp;amp; Language based Multi-modal AI&lt;/li>
&lt;li>Generalizable AI, including Domain Adaptation, Test-time Adaptation, and Federated Learning&lt;/li>
&lt;li>Applied AI in Remote Sensing, Aerospace, and Contents&lt;/li>
&lt;li>AI-based Full-Stack System and Service Development&lt;/li>
&lt;li>AI + Mathematics/Optimization/Statistics-based Modeling&lt;/li>
&lt;/ul>
&lt;h2 id="qualifications">Qualifications&lt;/h2>
&lt;ul>
&lt;li>Those who want to conduct research consistently throughout a Master&amp;rsquo;s/PhD program&lt;/li>
&lt;li>Those who want to work systematically through paper reading, implementation, experimentation, and analysis&lt;/li>
&lt;li>Those who enjoy technically defining and solving real-world problems&lt;/li>
&lt;li>Those who can collaborate and communicate responsibly&lt;/li>
&lt;/ul>
&lt;h2 id="opportunities">Opportunities&lt;/h2>
&lt;ul>
&lt;li>Experience the entire research process from topic discovery to experimental design and paper writing&lt;/li>
&lt;li>Mentoring aimed at submission to international conferences/journals&lt;/li>
&lt;li>Project-based practical development and deployment experience&lt;/li>
&lt;li>Customized research guidance connecting individual interests with lab directions&lt;/li>
&lt;/ul>
&lt;h2 id="how-to-apply">How to Apply&lt;/h2>
&lt;ul>
&lt;li>Email: &lt;strong>&lt;a href="mailto:ksl@jbnu.ac.kr">ksl@jbnu.ac.kr&lt;/a>&lt;/strong>&lt;/li>
&lt;li>Subject line example: &lt;code>[Graduate Application] Name / Program (Master's, PhD, Integrated)&lt;/code>&lt;/li>
&lt;li>Please attach/include the following in your email:
&lt;ul>
&lt;li>CV (Free format)&lt;/li>
&lt;li>Brief self-introduction and research interests&lt;/li>
&lt;li>(Optional) Portfolio, GitHub, papers written/participated in, project links&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>If you are interested, please feel free to email us or visit the lab.&lt;/p></description></item><item><title>PRIME: Ultra-Low-Rank Principal-Residual Model Merging</title><link>https://jbnu.macs.or.kr/en/publication/0040-prime-ultra-low-rank-principal-residual-model-merging/</link><pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0040-prime-ultra-low-rank-principal-residual-model-merging/</guid><description/></item><item><title>Undergraduate Research Assistant Recruitment (Fall 2026)</title><link>https://jbnu.macs.or.kr/en/notification/26-04-25-%ED%95%99%EB%B6%80%EC%97%B0%EA%B5%AC%EC%83%9D%EB%AA%A8%EC%A7%91-2%ED%95%99%EA%B8%B0/</link><pubDate>Sat, 25 Apr 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/notification/26-04-25-%ED%95%99%EB%B6%80%EC%97%B0%EA%B5%AC%EC%83%9D%EB%AA%A8%EC%A7%91-2%ED%95%99%EA%B8%B0/</guid><description>&lt;p>Our lab is recruiting undergraduate research assistants for the Fall 2026 semester. If you are interested in any of the following, please contact us by email or visit us.&lt;/p>
&lt;ul>
&lt;li>Students interested in Contents (Webtoons, Video, etc.)&lt;/li>
&lt;li>Students interested in AI utilizing Contents&lt;/li>
&lt;li>Students interested in Generative AI&lt;/li>
&lt;li>Students interested in OCR and AI&lt;/li>
&lt;li>Students interested in Medical AI&lt;/li>
&lt;li>Students interested in AI &amp;amp; Brain/Neural Networks&lt;/li>
&lt;li>Students who want to develop applications using AI&lt;/li>
&lt;li>Students who want to learn AI theory and mathematics systematically&lt;/li>
&lt;li>Students who want to build practical development skills alongside research&lt;/li>
&lt;li>Students who want to learn Full-stack development using the latest frameworks&lt;/li>
&lt;li>Students who want to systematically grow their coding skills&lt;/li>
&lt;/ul>
&lt;p>If you are interested, please feel free to send an email or visit the lab.&lt;/p>
&lt;ul>
&lt;li>Email: &lt;strong>&lt;a href="mailto:ksl@jbnu.ac.kr">ksl@jbnu.ac.kr&lt;/a>&lt;/strong>&lt;/li>
&lt;/ul></description></item><item><title>Human-Intervention Segmentation via Federated Intent Embedding and Multi-Mask Recommendation</title><link>https://jbnu.macs.or.kr/en/publication/0041-human-intervention-segmentation-via-federated-intent-embedding-and-multi-mask-recommendation/</link><pubDate>Wed, 01 Apr 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0041-human-intervention-segmentation-via-federated-intent-embedding-and-multi-mask-recommendation/</guid><description/></item><item><title>Bayesian Multiclass Segmentation for Remote Sensing: Integrating User Priors and Uncertainty</title><link>https://jbnu.macs.or.kr/en/publication/0039-bayesian-multi-class-segmentation-for-remote-sensing-integrating-user-priors-and-uncertainty/</link><pubDate>Mon, 23 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0039-bayesian-multi-class-segmentation-for-remote-sensing-integrating-user-priors-and-uncertainty/</guid><description/></item><item><title>Congratulations on CVPR 2026 Acceptance!</title><link>https://jbnu.macs.or.kr/en/post/25-12-09-cvpr2026-accepted/</link><pubDate>Sat, 21 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-12-09-cvpr2026-accepted/</guid><description>&lt;p>We are excited to announce that our paper has been accepted to &lt;strong>CVPR 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Yeongsu Kim&lt;/strong>, &lt;strong>Seo-Yeon Choi&lt;/strong>, and &lt;strong>Kyungsu Lee&lt;/strong>,
&amp;ldquo;&lt;strong>Human-Intervention Segmentation via Federated Intent Embedding and Multi-Mask Recommendation&lt;/strong>.&amp;rdquo;&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>CVPR 2026 (Conference)&lt;/strong>&lt;/li>
&lt;li>Subject Area: &lt;strong>Vision applications and systems&lt;/strong>&lt;/li>
&lt;li>Keywords: &lt;strong>Computer Vision&lt;/strong>, &lt;strong>Machine Learning&lt;/strong>, &lt;strong>User Experience Design&lt;/strong>&lt;/li>
&lt;li>Student Paper: &lt;strong>Yes&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Abstract:&lt;/p>
&lt;p>Artificial intelligence (AI) has advanced radiology, yet variability across hospitals and devices undermines reliability and trust. We present a federated learning framework that combines frequency-domain harmonization and instruction-conditioned personalization to deliver consistent and interpretable diagnostic outcomes. Using FFT-based reconstructions informed by radiomics descriptors, the system reduces equipment dependency, while CLIP-based text conditioning enables clinicians to guide reconstructions to local practices and patient needs. We evaluated the framework across four hospitals with fifteen radiologists and fifty patients, spanning polyp detection, rotator cuff tear diagnosis, pneumothorax classification, and breast cancer classification/segmentation. Results show significant gains in accuracy, calibration, and robustness under cross-site transfer, without introducing prohibitive latency. Radiologists reported improved interpretability and preserved professional agency, while patients expressed greater trust, reduced anxiety, and stronger acceptance of AI involvement. This work advances a human-centered design for medical AI, aligning federated learning with transparency, equity, and trustworthy deployment.&lt;/p>
&lt;p>Congratulations to the authors on this excellent result.&lt;/p></description></item><item><title>Graduate Student Recruitment (Spring 2026)</title><link>https://jbnu.macs.or.kr/en/notification/26-02-20-%EC%84%9D%EB%B0%95%EC%82%AC%ED%95%99%EC%83%9D%EB%AA%A8%EC%A7%91/</link><pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/notification/26-02-20-%EC%84%9D%EB%B0%95%EC%82%AC%ED%95%99%EC%83%9D%EB%AA%A8%EC%A7%91/</guid><description>&lt;p>The MACS Lab at Jeonbuk National University is looking for &lt;strong>Master&amp;rsquo;s and PhD (Integrated) students&lt;/strong> for the Spring 2026 semester.&lt;/p>
&lt;p>We welcome applications from those who wish to conduct research ranging from AI theory to practical applications and development.&lt;/p>
&lt;h2 id="research-areas">Research Areas&lt;/h2>
&lt;ul>
&lt;li>Medical AI (Medical Image Analysis, Diagnosis Support, Multi-modal Medical Data)&lt;/li>
&lt;li>Vision &amp;amp; Language based Multi-modal AI&lt;/li>
&lt;li>Generalizable AI, including Domain Adaptation, Test-time Adaptation, and Federated Learning&lt;/li>
&lt;li>Applied AI in Remote Sensing, Aerospace, and Contents&lt;/li>
&lt;li>AI-based Full-Stack System and Service Development&lt;/li>
&lt;li>AI + Mathematics/Optimization/Statistics-based Modeling&lt;/li>
&lt;/ul>
&lt;h2 id="qualifications">Qualifications&lt;/h2>
&lt;ul>
&lt;li>Those who want to conduct research consistently throughout a Master&amp;rsquo;s/PhD program&lt;/li>
&lt;li>Those who want to work systematically through paper reading, implementation, experimentation, and analysis&lt;/li>
&lt;li>Those who enjoy technically defining and solving real-world problems&lt;/li>
&lt;li>Those who can collaborate and communicate responsibly&lt;/li>
&lt;/ul>
&lt;h2 id="opportunities">Opportunities&lt;/h2>
&lt;ul>
&lt;li>Experience the entire research process from topic discovery to experimental design and paper writing&lt;/li>
&lt;li>Mentoring aimed at submission to international conferences/journals&lt;/li>
&lt;li>Project-based practical development and deployment experience&lt;/li>
&lt;li>Customized research guidance connecting individual interests with lab directions&lt;/li>
&lt;/ul>
&lt;h2 id="how-to-apply">How to Apply&lt;/h2>
&lt;ul>
&lt;li>Email: &lt;strong>&lt;a href="mailto:ksl@jbnu.ac.kr">ksl@jbnu.ac.kr&lt;/a>&lt;/strong>&lt;/li>
&lt;li>Subject line example: &lt;code>[Graduate Application] Name / Program (Master's, PhD, Integrated)&lt;/code>&lt;/li>
&lt;li>Please attach/include the following in your email:
&lt;ul>
&lt;li>CV (Free format)&lt;/li>
&lt;li>Brief self-introduction and research interests&lt;/li>
&lt;li>(Optional) Portfolio, GitHub, papers written/participated in, project links&lt;/li>
&lt;/ul>
&lt;/li>
&lt;/ul>
&lt;p>If you are interested, please feel free to email us or visit the lab.&lt;/p></description></item><item><title>Undergraduate Research Assistant Recruitment (Spring 2026)</title><link>https://jbnu.macs.or.kr/en/notification/26-02-20-%ED%95%99%EB%B6%80%EC%97%B0%EA%B5%AC%EC%83%9D%EB%AA%A8%EC%A7%91/</link><pubDate>Fri, 20 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/notification/26-02-20-%ED%95%99%EB%B6%80%EC%97%B0%EA%B5%AC%EC%83%9D%EB%AA%A8%EC%A7%91/</guid><description>&lt;p>Our lab is recruiting undergraduate research assistants. If you are interested in at least one of the following topics, please contact us by email or visit us in person.&lt;/p>
&lt;ul>
&lt;li>Students interested in Contents (Webtoons, Video, etc.)&lt;/li>
&lt;li>Students interested in AI utilizing Contents&lt;/li>
&lt;li>Students interested in Generative AI&lt;/li>
&lt;li>Students interested in OCR and AI&lt;/li>
&lt;li>Students interested in Medical AI&lt;/li>
&lt;li>Students interested in AI &amp;amp; Brain/Neural Networks&lt;/li>
&lt;li>Students who want to develop applications using AI&lt;/li>
&lt;li>Students who want to learn AI theory and mathematics systematically&lt;/li>
&lt;li>Students who simply want to learn software development systematically, even without pursuing the research topics above&lt;/li>
&lt;li>Students who want to learn Full-stack development using the latest frameworks&lt;/li>
&lt;li>Students who want to become proficient at coding&lt;/li>
&lt;li>Students who want to gain a wide range of experience&lt;/li>
&lt;/ul>
&lt;p>Our lab focuses not only on AI-related research but also on practical development. We look forward to hearing from interested students.&lt;/p></description></item><item><title>Congratulations on AISTATS 2026 Acceptance!</title><link>https://jbnu.macs.or.kr/en/post/26-02-22-aistats2026-accepted/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/26-02-22-aistats2026-accepted/</guid><description>&lt;p>We are excited to share that our paper has been accepted to &lt;strong>AISTATS 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Seo-Yeon Choi&lt;/strong> and &lt;strong>Kyungsu Lee&lt;/strong>*, &amp;ldquo;&lt;strong>TCP: Context-Aware Pooling via Top-k% Activation Selection&lt;/strong>,&amp;rdquo; &lt;em>International Conference on Artificial Intelligence and Statistics (AISTATS 2026)&lt;/em>.&lt;/p>
&lt;p>This is a strong result at a &lt;strong>Top BK/CS venue&lt;/strong>, and we warmly congratulate &lt;strong>Seo-Yeon Choi&lt;/strong> (first author) and &lt;strong>Kyungsu Lee&lt;/strong> (corresponding author).&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>AISTATS 2026&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>Top&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>BK/CS&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Congratulations again to the authors on this excellent achievement.&lt;/p></description></item><item><title>TCP: Context-Aware Pooling via Top-k% Activation Selection</title><link>https://jbnu.macs.or.kr/en/publication/0028-context-aware-pooling-via-top-k-activation-selection/</link><pubDate>Sat, 31 Jan 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0028-context-aware-pooling-via-top-k-activation-selection/</guid><description/></item><item><title>Human-Centered Personalization in Radiology AI: Evaluating Trust, Usability, and Cross-Hospital Robustness</title><link>https://jbnu.macs.or.kr/en/publication/0027-human-centered-personalization-in-radiology-ai/</link><pubDate>Fri, 30 Jan 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0027-human-centered-personalization-in-radiology-ai/</guid><description/></item><item><title>Congratulations on CHI 2026 Acceptance</title><link>https://jbnu.macs.or.kr/en/post/26-02-22-chi2026-accepted/</link><pubDate>Thu, 22 Jan 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/26-02-22-chi2026-accepted/</guid><description>&lt;p>We are delighted to announce that our paper has been accepted to &lt;strong>CHI 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Seo-Yeon Choi&lt;/strong> and &lt;strong>Kyungsu Lee&lt;/strong>*, &amp;ldquo;&lt;strong>Human-Centered Personalization in Radiology AI: Evaluating Trust, Usability, and Cross-Hospital Robustness&lt;/strong>,&amp;rdquo; &lt;em>ACM CHI Conference on Human Factors in Computing Systems (CHI 2026)&lt;/em>.&lt;/p>
&lt;p>This paper was accepted to a &lt;strong>Top BK/CS conference&lt;/strong>, and we sincerely congratulate &lt;strong>Seo-Yeon Choi&lt;/strong> (first author) and &lt;strong>Kyungsu Lee&lt;/strong> (corresponding author).&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>CHI 2026&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>Top&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>BK/CS&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Congratulations to the authors for this outstanding milestone.&lt;/p></description></item><item><title>Anatomy-Aware Distillation with Memory-Augmented SAM2 for Fracture Detection</title><link>https://jbnu.macs.or.kr/en/publication/0038-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection-ksc2025/</link><pubDate>Tue, 16 Dec 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0038-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection-ksc2025/</guid><description/></item><item><title>KSC 2025: Participation and Paper Presentation</title><link>https://jbnu.macs.or.kr/en/event/25-12-16-ksc2025/</link><pubDate>Tue, 16 Dec 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/event/25-12-16-ksc2025/</guid><description>&lt;p>MACS Lab attended &lt;strong>KSC 2025 (Korea Software Congress)&lt;/strong> to present our work and discuss recent trends in AI and software systems with researchers across academia and industry.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="KSC 2025 - Dongjun Kang" srcset="
/en/event/25-12-16-ksc2025/ksc2025-kang_hu0e157a0bb5d8fe83de734d15ee307e39_3070786_114b167a8774d1bbf85e0231a0d0aad5.webp 400w,
/en/event/25-12-16-ksc2025/ksc2025-kang_hu0e157a0bb5d8fe83de734d15ee307e39_3070786_e55047cbd5497cbd1aad0baf1b74a726.webp 760w,
/en/event/25-12-16-ksc2025/ksc2025-kang_hu0e157a0bb5d8fe83de734d15ee307e39_3070786_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-ksc2025/ksc2025-kang_hu0e157a0bb5d8fe83de734d15ee307e39_3070786_114b167a8774d1bbf85e0231a0d0aad5.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>KSC 2025 on-site photo (Dongjun Kang)&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="KSC 2025 - Kyungsu Lee" srcset="
/en/event/25-12-16-ksc2025/ksc2025-kyungsu_hu22e4b3dad178c734d9558371d5b65e60_3299955_1be5cbefe782ae94d2c9bf87a6264a9f.webp 400w,
/en/event/25-12-16-ksc2025/ksc2025-kyungsu_hu22e4b3dad178c734d9558371d5b65e60_3299955_e3e2c89a154a56f3593f4a16bfedd104.webp 760w,
/en/event/25-12-16-ksc2025/ksc2025-kyungsu_hu22e4b3dad178c734d9558371d5b65e60_3299955_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-ksc2025/ksc2025-kyungsu_hu22e4b3dad178c734d9558371d5b65e60_3299955_1be5cbefe782ae94d2c9bf87a6264a9f.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>KSC 2025 on-site photo (Kyungsu Lee)&lt;/em>&lt;/p>
&lt;h3 id="presented-work">Presented Work&lt;/h3>
&lt;p>&lt;strong>Dongjun Kang (first author), Kyungsu Lee (corresponding author)&lt;/strong>&lt;br>
&lt;strong>Anatomy-Aware Distillation with Memory-Augmented SAM2 for Fracture Detection&lt;/strong>&lt;/p>
&lt;p>The proposed approach combines a SAM2-based teacher-student structure with memory-augmented distillation to improve fracture detection accuracy and robustness under limited-data conditions.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Method summary" srcset="
/en/event/25-12-16-ksc2025/ksc2025-method_hu4f92f9f3d69e1ae90e3ab7d5f326fb26_48024_8e5ef9344c1bb2db503f4976c0aeece4.webp 400w,
/en/event/25-12-16-ksc2025/ksc2025-method_hu4f92f9f3d69e1ae90e3ab7d5f326fb26_48024_1ef31757a18cd84a92f267ea6a8b1ff8.webp 760w,
/en/event/25-12-16-ksc2025/ksc2025-method_hu4f92f9f3d69e1ae90e3ab7d5f326fb26_48024_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-ksc2025/ksc2025-method_hu4f92f9f3d69e1ae90e3ab7d5f326fb26_48024_8e5ef9344c1bb2db503f4976c0aeece4.webp"
width="760"
height="178"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Method overview and key results&lt;/em>&lt;/p>
&lt;ul>
&lt;li>Related Publication: &lt;a href="https://jbnu.macs.or.kr/publication/0038-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection-ksc2025/">/publication/0038-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection-ksc2025/&lt;/a>&lt;/li>
&lt;li>Conference Website: &lt;a href="https://www.kiise.or.kr/conference/KSC/2025/" target="_blank" rel="noopener">https://www.kiise.or.kr/conference/KSC/2025/&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>RTSS 2025 (Boston): Conference Participation</title><link>https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/</link><pubDate>Tue, 16 Dec 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/</guid><description>&lt;p>MACS Lab attended &lt;strong>RTSS 2025 (IEEE Real-Time Systems Symposium)&lt;/strong> in Boston and explored recent advances in real-time systems and AI system design through keynote talks, technical sessions, and poster presentations.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="RTSS 2025 session" srcset="
/en/event/25-12-16-rtss2025/rtss-2025-session_hu98a84f095cb4546b7d79fa04d3f7d5b0_2709202_977aa5cb1ecb3097cb80de15c960adcc.webp 400w,
/en/event/25-12-16-rtss2025/rtss-2025-session_hu98a84f095cb4546b7d79fa04d3f7d5b0_2709202_361a67bbb2a515ee35d0958eab0d8c71.webp 760w,
/en/event/25-12-16-rtss2025/rtss-2025-session_hu98a84f095cb4546b7d79fa04d3f7d5b0_2709202_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/rtss-2025-session_hu98a84f095cb4546b7d79fa04d3f7d5b0_2709202_977aa5cb1ecb3097cb80de15c960adcc.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Main conference session at RTSS 2025&lt;/em>&lt;/p>
&lt;p>During the conference, we reviewed active topics in real-time intelligence and system-level reliability, and discussed potential directions for future cross-disciplinary research.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="RTSS signboard - Seoyeon Choi" srcset="
/en/event/25-12-16-rtss2025/rtss-2025-seoyeon_hub8686b8f6bd1ad67a4ce33e78aa30024_4495550_2434b1cdd2d4f49076086d1a0d492f76.webp 400w,
/en/event/25-12-16-rtss2025/rtss-2025-seoyeon_hub8686b8f6bd1ad67a4ce33e78aa30024_4495550_90f2b4f3d1caa3e247654a8918a5aa32.webp 760w,
/en/event/25-12-16-rtss2025/rtss-2025-seoyeon_hub8686b8f6bd1ad67a4ce33e78aa30024_4495550_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/rtss-2025-seoyeon_hub8686b8f6bd1ad67a4ce33e78aa30024_4495550_2434b1cdd2d4f49076086d1a0d492f76.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>At the RTSS 2025 venue (Seoyeon Choi)&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="RTSS signboard - Dongjun Kang" srcset="
/en/event/25-12-16-rtss2025/rtss-2025-kang_hu15c96c2f7212804332e290be5e91c0c7_3440644_a1a80764a9f1d0b960206c019712768f.webp 400w,
/en/event/25-12-16-rtss2025/rtss-2025-kang_hu15c96c2f7212804332e290be5e91c0c7_3440644_2d6d6aaf5ec5d0bef0fd983a622ae289.webp 760w,
/en/event/25-12-16-rtss2025/rtss-2025-kang_hu15c96c2f7212804332e290be5e91c0c7_3440644_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/rtss-2025-kang_hu15c96c2f7212804332e290be5e91c0c7_3440644_a1a80764a9f1d0b960206c019712768f.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>At the RTSS 2025 venue (Dongjun Kang)&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="RTSS signboard - Kyungsu Lee" srcset="
/en/event/25-12-16-rtss2025/rtss-2025-kyungsu_hu435797633041f54f7c769d287113ac0b_3465618_d93715768983de07ea439a7035ddd03f.webp 400w,
/en/event/25-12-16-rtss2025/rtss-2025-kyungsu_hu435797633041f54f7c769d287113ac0b_3465618_0977a187556e0d26b8211b4addce7d00.webp 760w,
/en/event/25-12-16-rtss2025/rtss-2025-kyungsu_hu435797633041f54f7c769d287113ac0b_3465618_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/rtss-2025-kyungsu_hu435797633041f54f7c769d287113ac0b_3465618_d93715768983de07ea439a7035ddd03f.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>At the RTSS 2025 venue (Kyungsu Lee)&lt;/em>&lt;/p>
&lt;p>The trip also provided meaningful exposure to the broader international research community and collaboration landscape.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Boston view during RTSS 2025" srcset="
/en/event/25-12-16-rtss2025/rtss-2025-boston-river_hu8af1b211478f0376965ed40adb097758_2554485_59fb0cb3356381208793451e79941ffc.webp 400w,
/en/event/25-12-16-rtss2025/rtss-2025-boston-river_hu8af1b211478f0376965ed40adb097758_2554485_505d968496f54f73ecd060971fa7f8d3.webp 760w,
/en/event/25-12-16-rtss2025/rtss-2025-boston-river_hu8af1b211478f0376965ed40adb097758_2554485_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-12-16-rtss2025/rtss-2025-boston-river_hu8af1b211478f0376965ed40adb097758_2554485_59fb0cb3356381208793451e79941ffc.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Boston during RTSS 2025&lt;/em>&lt;/p>
&lt;ul>
&lt;li>Conference Website: &lt;a href="https://2025.rtss.org/" target="_blank" rel="noopener">https://2025.rtss.org/&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Field Validated Hybrid ESP-NOW and Long Range IoT Monitoring System for Energy Autonomous Precision Agriculture</title><link>https://jbnu.macs.or.kr/en/publication/0030-field-validated-hybrid-esp-now-and-long-range-iot-monitoring-system-for-energy-autonomous-precision-agriculture/</link><pubDate>Mon, 08 Dec 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0030-field-validated-hybrid-esp-now-and-long-range-iot-monitoring-system-for-energy-autonomous-precision-agriculture/</guid><description/></item><item><title>(2025 KOSOMBE) Sagang Hong Won Best Poster Award</title><link>https://jbnu.macs.or.kr/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/</link><pubDate>Sat, 08 Nov 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>&lt;strong>Sagang Hong&lt;/strong>, a Master&amp;rsquo;s student at MacsLAB, won the &lt;strong>Best Poster Award&lt;/strong> at the &lt;strong>2025 Fall Conference of the Korean Society of Medical and Biological Engineering (KOSOMBE)&lt;/strong>.&lt;/p>
&lt;p>This achievement recognizes his outstanding research presented at the conference held from November 6 to 8, 2025, at Inje University (Gimhae).&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Sagang Hong Winning the Best Poster Award" srcset="
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_d5ea32f0dcef711ab43ee4bdd10b98da.webp 400w,
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_a8a68efbf626c2454f44c5b83686cc1e.webp 760w,
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_d5ea32f0dcef711ab43ee4bdd10b98da.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>KOSOMBE 2025 Fall Conference Award Ceremony&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Best Poster Award Certificate" srcset="
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_0f8171e174df474e2792120d3de6ab23.webp 400w,
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_a99aebc77cd68d0f0056ca3038aa8463.webp 760w,
/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_0f8171e174df474e2792120d3de6ab23.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Best Poster Award Certificate for Sagang Hong&lt;/em>&lt;/p>
&lt;p>Award-winning Paper Information:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Sagang Hong (first author), Junyoung Kim, Kyungsu Lee (corresponding author)&lt;/strong>&lt;/li>
&lt;li>&lt;strong>SAM2-based Bayesian Prompt Adaptation for Cross-Modality Medical Segmentation&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>We sincerely congratulate Sagang Hong on his award and look forward to more excellent research achievements from MacsLAB.&lt;/p>
&lt;p>Related Links:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://jbnu.macs.or.kr/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/">/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://www.kosombe.or.kr/register/2025_fall/program/sub07.html" target="_blank" rel="noopener">KOSOMBE 2025 Fall Conference Program&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Anatomy-Aware Distillation with Memory-Augmented SAM2 for Fracture Detection</title><link>https://jbnu.macs.or.kr/en/publication/0033-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection/</link><pubDate>Fri, 07 Nov 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0033-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection/</guid><description/></item><item><title>KOSOMBE 2025 Fall: Participation and Presentations</title><link>https://jbnu.macs.or.kr/en/event/25-11-07-kosombe2025-fall/</link><pubDate>Fri, 07 Nov 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/event/25-11-07-kosombe2025-fall/</guid><description>&lt;p>MACS Lab participated in the &lt;strong>Korean Society of Medical and Biological Engineering (KOSOMBE) 2025 Fall Conference&lt;/strong> and presented our recent work in medical AI and medical image analysis.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="KOSOMBE 2025 Fall group photo" srcset="
/en/event/25-11-07-kosombe2025-fall/kosombe2025-team_hu4e45b5b53b651ee880f8ac1a2cd25559_158443_36dc4525bac033eb7472691ed8569d6d.webp 400w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-team_hu4e45b5b53b651ee880f8ac1a2cd25559_158443_06662e3bcb2d23cd65c4742e415d62b0.webp 760w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-team_hu4e45b5b53b651ee880f8ac1a2cd25559_158443_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-11-07-kosombe2025-fall/kosombe2025-team_hu4e45b5b53b651ee880f8ac1a2cd25559_158443_36dc4525bac033eb7472691ed8569d6d.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>MACS Lab members at KOSOMBE 2025 Fall&lt;/em>&lt;/p>
&lt;h2 id="presentation-1">Presentation 1&lt;/h2>
&lt;p>&lt;strong>Dongjun Kang (first author), Kyungsu Lee (corresponding author)&lt;/strong>&lt;br>
&lt;strong>Anatomy-Aware Distillation with Memory-Augmented SAM2 for Fracture Detection&lt;/strong>&lt;br>
KOSOMBE 2025 Fall (Poster 244)&lt;/p>
&lt;p>This work proposes a memory-augmented knowledge distillation framework to transfer anatomy-aware knowledge from a large SAM2 model to a lightweight fracture detector.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Dongjun Kang presenting poster" srcset="
/en/event/25-11-07-kosombe2025-fall/kosombe2025-kang_hua7e91e92d7a735366513af105f6631b9_196334_9fec02093651dfb2f0e47ec4e88c92d2.webp 400w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-kang_hua7e91e92d7a735366513af105f6631b9_196334_c920294560aa06b460ec264c55aa49c2.webp 760w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-kang_hua7e91e92d7a735366513af105f6631b9_196334_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-11-07-kosombe2025-fall/kosombe2025-kang_hua7e91e92d7a735366513af105f6631b9_196334_9fec02093651dfb2f0e47ec4e88c92d2.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Poster 244 presentation&lt;/em>&lt;/p>
&lt;ul>
&lt;li>Related Publication: &lt;a href="https://jbnu.macs.or.kr/publication/0033-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection/">/publication/0033-anatomy-aware-distillation-with-memory-augmented-sam2-for-fracture-detection/&lt;/a>&lt;/li>
&lt;/ul>
&lt;h2 id="presentation-2">Presentation 2&lt;/h2>
&lt;p>&lt;strong>Sakang Hong (first author), Jun-Yung Kim, Kyungsu Lee (corresponding author)&lt;/strong>&lt;br>
&lt;strong>SAM2-based Bayesian Prompt Adaptation for Cross-Modality Medical Segmentation&lt;/strong>&lt;br>
KOSOMBE 2025 Fall (Poster 247)&lt;/p>
&lt;p>This work introduces a SAM2-based Bayesian prompt adaptation method (BayesPrompt) for cross-modality medical image segmentation under few-shot settings.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Sakang Hong presenting poster" srcset="
/en/event/25-11-07-kosombe2025-fall/kosombe2025-hong_hu8413a532f873609d528fb5c9d69f0ccd_181679_800d354e98e5b1308059f09431302e27.webp 400w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-hong_hu8413a532f873609d528fb5c9d69f0ccd_181679_9ea76af7e12942fb802875afc25e0ec3.webp 760w,
/en/event/25-11-07-kosombe2025-fall/kosombe2025-hong_hu8413a532f873609d528fb5c9d69f0ccd_181679_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-11-07-kosombe2025-fall/kosombe2025-hong_hu8413a532f873609d528fb5c9d69f0ccd_181679_800d354e98e5b1308059f09431302e27.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Poster 247 presentation&lt;/em>&lt;/p>
&lt;ul>
&lt;li>Related Publication: &lt;a href="https://jbnu.macs.or.kr/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/">/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/&lt;/a>&lt;/li>
&lt;/ul>
&lt;p>The conference offered productive exchanges with researchers in biomedical engineering, especially on practical deployment paths for medical AI.&lt;/p>
&lt;ul>
&lt;li>Program: &lt;a href="https://www.kosombe.or.kr/register/2025_fall/program/sub07.html" target="_blank" rel="noopener">https://www.kosombe.or.kr/register/2025_fall/program/sub07.html&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>SAM2-based Bayesian Prompt Adaptation for Cross-Modality Medical Segmentation</title><link>https://jbnu.macs.or.kr/en/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/</link><pubDate>Fri, 07 Nov 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/</guid><description/></item><item><title>Memory-Guided Personalization for Physician-Specific Diagnostic Inference</title><link>https://jbnu.macs.or.kr/en/publication/0032-memory-guided-personalization-for-physician-specific-diagnostic-inference/</link><pubDate>Sun, 19 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0032-memory-guided-personalization-for-physician-specific-diagnostic-inference/</guid><description/></item><item><title>Patient-Centric Statistical Multi-Modal Fusion for Medical Diagnosis: Integrating DICOM, Radiomics, and Patient Attributes</title><link>https://jbnu.macs.or.kr/en/publication/0031-patient-centric-statistical-multi-modal-fusion-for-medical-diagnosis-integrating-dicom-radiomics-and-patient-attributes/</link><pubDate>Sun, 19 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0031-patient-centric-statistical-multi-modal-fusion-for-medical-diagnosis-integrating-dicom-radiomics-and-patient-attributes/</guid><description/></item><item><title>(AIxMHC 2025) Best Poster Award</title><link>https://jbnu.macs.or.kr/en/post/25-10-15-aixmhc2025-best-poster-award/</link><pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-10-15-aixmhc2025-best-poster-award/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>The team of &lt;strong>Seo-Yeon Choi, Haeyun Lee, and Kyungsu Lee&lt;/strong> from MacsLAB won the &lt;strong>Best Poster Award&lt;/strong> at &lt;strong>AIxMHC 2025&lt;/strong>.&lt;/p>
&lt;p>The award-winning paper is:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Statistical Multi-Modal Fusion for Patient-Centric Medical Diagnosis Using DICOM&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Best Poster Award Certificate" srcset="
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1ab0c8323a5516935bb253f89bd93dfd.webp 400w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_e9cb3708b2e9db178fccfa9ec8279297.webp 760w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1ab0c8323a5516935bb253f89bd93dfd.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>AIxMHC 2025 Best Poster Award Certificate&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="AIxMHC 2025 Scene" srcset="
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp 400w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_f8613804cec2e4591f337b40e6c0d879.webp 760w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp"
width="760"
height="570"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>AIxMHC 2025 Team Photo&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Poster Presentation" srcset="
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp 400w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_e010e70fae6dba5f4ae53abc41e85823.webp 760w,
/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Poster Presentation Scene&lt;/em>&lt;/p>
&lt;p>MacsLAB will continue to strive for clinically meaningful research achievements in the field of medical AI.&lt;/p>
&lt;p>Related Links:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://jbnu.macs.or.kr/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/">/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://chaoneng.github.io/aixmhc2025.github.io/" target="_blank" rel="noopener">AIxMHC 2025&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Statistical Latent Manifold-Guided Framework for Generative Super-Resolution</title><link>https://jbnu.macs.or.kr/en/publication/0037-statistical-latent-manifold-guided-framework-for-generative-super-resolution-aixmhc2025/</link><pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0037-statistical-latent-manifold-guided-framework-for-generative-super-resolution-aixmhc2025/</guid><description/></item><item><title>Statistical Multi-Modal Fusion for Patient-Centric Medical Diagnosis Using DICOM</title><link>https://jbnu.macs.or.kr/en/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/</link><pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/</guid><description/></item><item><title>AIxMHC 2025 (Taiwan): Conference Participation and Paper Presentations</title><link>https://jbnu.macs.or.kr/en/event/25-10-13-aixmhc2025/</link><pubDate>Mon, 13 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/event/25-10-13-aixmhc2025/</guid><description>&lt;p>MACS Lab participated in &lt;strong>AIxMHC 2025 (The 2nd International Conference on Artificial Intelligence for Medicine, Health, and Care)&lt;/strong> in Taichung, Taiwan. We presented two papers and discussed recent research trends in medical AI with international researchers.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="AIxMHC 2025 group photo" srcset="
/en/event/25-10-13-aixmhc2025/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp 400w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_f8613804cec2e4591f337b40e6c0d879.webp 760w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-10-13-aixmhc2025/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp"
width="760"
height="570"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>AIxMHC 2025, Taichung, Taiwan&lt;/em>&lt;/p>
&lt;h3 id="presented-papers">Presented Papers&lt;/h3>
&lt;ol>
&lt;li>&lt;strong>Statistical Multi-Modal Fusion for Patient-Centric Medical Diagnosis Using DICOM&lt;/strong> (Poster)&lt;/li>
&lt;li>&lt;strong>Statistical Latent Manifold-Guided Framework for Generative Super-Resolution&lt;/strong> (Regular)&lt;/li>
&lt;/ol>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="On-site discussion" srcset="
/en/event/25-10-13-aixmhc2025/aixmhc2025-discussion_hu7496d3ec3483f081b7560b213cbb3875_2572288_02ed85364770424c83f71a82df124dc6.webp 400w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-discussion_hu7496d3ec3483f081b7560b213cbb3875_2572288_8dcc53f298754fc8468738091a455216.webp 760w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-discussion_hu7496d3ec3483f081b7560b213cbb3875_2572288_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-10-13-aixmhc2025/aixmhc2025-discussion_hu7496d3ec3483f081b7560b213cbb3875_2572288_02ed85364770424c83f71a82df124dc6.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Q&amp;amp;A and technical discussion after the presentation&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Poster session" srcset="
/en/event/25-10-13-aixmhc2025/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp 400w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_e010e70fae6dba5f4ae53abc41e85823.webp 760w,
/en/event/25-10-13-aixmhc2025/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-10-13-aixmhc2025/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Poster presentation session&lt;/em>&lt;/p>
&lt;p>The conference provided a valuable opportunity to exchange ideas on practical medical AI applications and to expand possibilities for international collaboration.&lt;/p>
&lt;ul>
&lt;li>Conference Website: &lt;a href="https://chaoneng.github.io/aixmhc2025.github.io/" target="_blank" rel="noopener">https://chaoneng.github.io/aixmhc2025.github.io/&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Expert-level differentiation of incomplete Kawasaki disease and pneumonia from echocardiography via multiple large receptive attention mechanisms</title><link>https://jbnu.macs.or.kr/en/publication/0029-expert-level-differentiation-of-incomplete-kawasaki-disease-and-pneumonia-from-echocardiography-via-multiple-large-receptive-attention-mechanisms/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0029-expert-level-differentiation-of-incomplete-kawasaki-disease-and-pneumonia-from-echocardiography-via-multiple-large-receptive-attention-mechanisms/</guid><description/></item><item><title>Congratulations to Seo-Yeon Choi (Student Researcher, Undergraduate Researcher) on Two Papers Accepted to ICCV 2025 Workshops!</title><link>https://jbnu.macs.or.kr/en/post/25-07-15-iccv2025-workshop-accepted/</link><pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-07-15-iccv2025-workshop-accepted/</guid><description>&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;p>We are thrilled to announce that our undergraduate researcher, &lt;strong>Seo-Yeon Choi&lt;/strong>, has achieved a remarkable accomplishment: &lt;strong>two papers have been accepted to the ICCV 2025 Workshops (CVAMD / VADH25)&lt;/strong>!&lt;/p>
&lt;p>Even more exciting, one paper was selected for an &lt;strong>oral presentation&lt;/strong> and the other for a &lt;strong>poster presentation&lt;/strong>. Having two papers accepted at such a prestigious venue as ICCV is a truly outstanding feat, especially for an undergraduate researcher. This is a testament to Seo-Yeon’s dedication, hard work, and innovative research.&lt;/p>
&lt;p>Congratulations once again, Seo-Yeon! We look forward to seeing both the oral and poster presentations at ICCV 2025 in Hawaii! 🌺🌴&lt;/p>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;hr>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;h2 id="patient-centric-statistical-multi-modal-fusion-for-medical-diagnosis-integrating-dicom-radiomics-and-patient-attributes">Patient-Centric Statistical Multi-Modal Fusion for Medical Diagnosis: Integrating DICOM, Radiomics, and Patient Attributes&lt;/h2>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Info1" srcset="
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_dbc3182c80acc67fbf2fe26e94091705.webp 400w,
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_d6d84911d7285fedee838b6a4e15187c.webp 760w,
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_dbc3182c80acc67fbf2fe26e94091705.webp"
width="760"
height="344"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="abstract">Abstract&lt;/h3>
&lt;p>Deep learning (DL) has led to substantial progress in medical image analysis, particularly for disease classification. However, the integration of patient-specific attributes, such as age, body mass index (BMI), and lifestyle factors, with radiomics and raw imaging data remains a key challenge in the development of personalized diagnostic models. To alleviate this, we propose a novel multi-modal framework, denoted as Statistically Coherent Network (SCN), which jointly models imaging, radiomic, and patient metadata through a structured multi-space latent representation. SCN facilitates distributional coherence across subpopulations by leveraging a newly devised statistics-based loss in conjunction with a triplet loss, thereby aligning feature distributions among clinically similar cohorts. This statistical alignment using a T-test facilitates more interpretable and robust representation learning across heterogeneous patient groups. We evaluate SCN on four clinically diverse tasks, including breast cancer (mammography), obstructive sleep apnea (CT), rotator cuff tear (MRI), and Cormack-Lehane grading (X-ray), and demonstrate consistent improvements over conventional single-space and multi-modal baselines. The experimental results highlight the importance of explicitly incorporating patient metadata, in terms of multimodal learning, to enhance model generalizability and clinical relevance.&lt;/p>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;hr>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;h2 id="memory-guided-personalization-for-physician-specific-diagnostic-inference">Memory-Guided Personalization for Physician-Specific Diagnostic Inference&lt;/h2>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Info1" srcset="
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_f410bd5bc1b00259ce8a46d2fa0e8c80.webp 400w,
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_137ea6e216745a85720efe3c219c4721.webp 760w,
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_f410bd5bc1b00259ce8a46d2fa0e8c80.webp"
width="760"
height="369"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="abstract-1">Abstract&lt;/h3>
&lt;p>Recent advances in deep learning have improved diagnostic precision across medical imaging tasks. However, clinical adoption remains limited due to a mismatch between model outputs and the diverse reasoning styles of physicians. Prior personalization efforts have primarily focused on patient-specific adaptation, overlooking clinician-specific variability. We propose a physician-centric diagnostic framework that supports real-time, adaptive inference tailored to individual clinicians. The system consists of three stages: supervised learning, Human-in-the-Loop guidance, and personalized deployment. Physician feedback is encoded as memory-based priors and reused at inference without retraining, enabling lightweight, end-to-end personalization. We validate our method on detection and segmentation tasks including parathyroid localization, breast cancer segmentation, and rotator cuff tear analysis. Results demonstrate that our model adapts effectively to individual diagnostic styles while maintaining high accuracy in clinical workflows.&lt;/p>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;hr>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;p>Once again, congratulations to Seo-Yeon Choi for this outstanding achievement. Let’s look forward to an inspiring and impactful presentation at ICCV 2025! 🚀🎉&lt;/p>
&lt;p>&lt;br>&lt;br>&lt;/p>
&lt;hr>
&lt;p>&lt;br>&lt;br>&lt;/p></description></item><item><title>Participation and Presentations at KCC 2025</title><link>https://jbnu.macs.or.kr/en/event/25-07-02-kcc/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/event/25-07-02-kcc/</guid><description>&lt;p>In July 2025, MACS Lab attended the Korea Computer Congress 2025 (KCC 2025) held at ICC Jeju. The team joined technical sessions, poster presentations, and workshops to share research outcomes and gain broader academic perspectives.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="MACS Lab at KCC 2025" srcset="
/en/event/25-07-02-kcc/featured_hu703cba79da00bea64015c6fae6c5697d_3584757_210b8aa07ca4d900954343f60eb77aae.webp 400w,
/en/event/25-07-02-kcc/featured_hu703cba79da00bea64015c6fae6c5697d_3584757_fb77d69578d372372f6f565908c1d6fe.webp 760w,
/en/event/25-07-02-kcc/featured_hu703cba79da00bea64015c6fae6c5697d_3584757_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/en/event/25-07-02-kcc/featured_hu703cba79da00bea64015c6fae6c5697d_3584757_210b8aa07ca4d900954343f60eb77aae.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>MACS Lab at KCC 2025&lt;/em>&lt;/p>
&lt;p>Professor Kyungsu Lee presented in the early-career researcher session under the topic &lt;strong>&amp;ldquo;Applications of Trustworthy AI&amp;rdquo;&lt;/strong>, followed by active discussions with domestic and international participants.&lt;/p>
&lt;p>Students in our integrated and graduate tracks also participated in poster and session programs, receiving valuable feedback and building new research connections.&lt;/p>
&lt;ul>
&lt;li>Conference Website: &lt;a href="https://www.kiise.or.kr/conference/kcc/2025/" target="_blank" rel="noopener">https://www.kiise.or.kr/conference/kcc/2025/&lt;/a>&lt;/li>
&lt;li>Early-Career Session Program: &lt;a href="https://kcc2025.kiise.or.kr/Proceedings/chart.asp" target="_blank" rel="noopener">https://kcc2025.kiise.or.kr/Proceedings/chart.asp&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Statistical Latent Manifold-Guided Framework for Generative Super-Resolution</title><link>https://jbnu.macs.or.kr/en/publication/0035-statistical-latent-manifold-guided-framework-for-generative-super-resolution/</link><pubDate>Wed, 02 Jul 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0035-statistical-latent-manifold-guided-framework-for-generative-super-resolution/</guid><description/></item><item><title>Real-time Self-supervised Ultrasound Image Enhancement Using Test-Time Adaptation for Sophisticated Rotator Cuff Tear Diagnosis</title><link>https://jbnu.macs.or.kr/en/publication/0026-real-time-self-supervised-ultrasound-image-enhancement-using-test-time-adaptation-for-sophisticated-rotator-cuff-tear-diagnosis/</link><pubDate>Mon, 31 Mar 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0026-real-time-self-supervised-ultrasound-image-enhancement-using-test-time-adaptation-for-sophisticated-rotator-cuff-tear-diagnosis/</guid><description/></item><item><title>Congratulations to Yeongsu Kim (Student Researcher, Undergraduate Researcher) on ICLR 2025 Workshop Acceptance!</title><link>https://jbnu.macs.or.kr/en/post/25-03-03-iclr2025-workshop-accepted/</link><pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-03-03-iclr2025-workshop-accepted/</guid><description>&lt;p>Exciting news! Our undergraduate researcher, Yeongsu Kim, has achieved an outstanding milestone—his paper has been accepted to the ML4RS Workshop at ICLR 2025!&lt;/p>
&lt;p>As an undergraduate student, getting a paper into a prestigious venue like ICLR is no small feat, and this accomplishment is a testament to his dedication and hard work. Congratulations once again, Yeongsu!&lt;/p>
&lt;p>Looking forward to seeing the research presented in Singapore this April! 🚀🎉&lt;/p>
&lt;h4 id="abstract">Abstract&lt;/h4>
&lt;p>Over the past few decades, geospatial objects have been extensively recognized as significant components in remote sensing applications, including environmental monitoring, urban planning, and defense. In particular, accurate segmentation of objects is essential for meaningful observations from aerial imagery, motivating deep learning-based methodologies. However, conventional deep learning-based segmentation methodologies exhibit limited generalization capabilities across diverse geographical domains due to inherent variations in regional characteristics and data distribution shifts. Furthermore, most existing approaches strongly rely on static, pre-trained models lacking the adaptability to handle previously unseen data. To alleviate these limitations, we propose a novel Few-shot Semi-Online Adaptation framework incorporating interactive user feedback to iteratively refine segmentation outputs. By leveraging online learning and test-time adaptation, our approach enables models to continuously improve their accuracy based on minimal user corrections, ensuring flexibility and adaptability to new environments. Experimental results demonstrate that our method effectively enhances segmentation accuracy with minimal user intervention, bridging the gap between automated segmentation and domain-specific expertise.
Our research contributes to the development of interactive, user-adaptive segmentation models to facilitate geospatial analysis more efficiently and reliably.&lt;/p></description></item><item><title>Interactive Few-shot Online Adaptation for User-Guided Segmentation in Aerial Images</title><link>https://jbnu.macs.or.kr/en/publication/0025-interactive-few-shot-online-adaptation-for-user-guided-segmentation-in-aerial-images/</link><pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0025-interactive-few-shot-online-adaptation-for-user-guided-segmentation-in-aerial-images/</guid><description/></item><item><title>Connectome Mapping: Shape-Memory Network via Interpretation of Contextual Semantic Information</title><link>https://jbnu.macs.or.kr/en/publication/0023-connectome-mapping-shape-memory-network-via-interpretation-of-contextual-semantic-information/</link><pubDate>Thu, 23 Jan 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0023-connectome-mapping-shape-memory-network-via-interpretation-of-contextual-semantic-information/</guid><description/></item><item><title>One Paper Accepted to ICLR 2025!</title><link>https://jbnu.macs.or.kr/en/post/25-01-23-iclr2025-accepted/</link><pubDate>Thu, 23 Jan 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/25-01-23-iclr2025-accepted/</guid><description>&lt;p>Thrilled to announce that our paper, &amp;ldquo;Connectome Mapping: Shape-Memory Network via Interpretation of Contextual Semantic Information,&amp;rdquo; has been accepted to ICLR 2025! See you in Singapore in April!&lt;/p>
&lt;h4 id="abstract">Abstract&lt;/h4>
&lt;p>Contextual semantic information plays a pivotal role in the brain&amp;rsquo;s visual interpretation of the surrounding environment. When processing visual information, electrical signals within synapses facilitate the dynamic activation and deactivation of synaptic connections, guided by the contextual semantic information associated with different objects. In the realm of Artificial Intelligence (AI), neural networks have emerged as powerful tools to emulate complex signaling systems, enabling tasks such as classification and segmentation by understanding visual information. However, conventional neural networks have limitations in simulating the conditional activation and deactivation of synapses, collectively known as the connectome, a comprehensive map of neural connections in the brain. Additionally, the pixel-wise inference mechanism of conventional neural networks fails to account for the explicit utilization of contextual semantic information in the prediction process. To overcome these limitations, we developed a novel neural network, dubbed the Shape Memory Network (SMN), which excels in two key areas: (1) faithfully emulating the intricate mechanism of the brain&amp;rsquo;s connectome, and (2) explicitly incorporating contextual semantic information during the inference process. The SMN memorizes the structure suitable for contextual semantic information and leverages this structure at the inference phase. The structural transformation emulates the conditional activation and deactivation of synaptic connections within the connectome. Rigorous experimentation carried out across a range of semantic segmentation benchmarks demonstrated the outstanding performance of the SMN, highlighting its superiority and effectiveness. 
Furthermore, our pioneering network on connectome emulation reveals the immense potential of the SMN for next-generation neural networks.&lt;/p></description></item><item><title>SoN: Selective Optimal Network for Smartphone-based Indoor Localization in Real-time</title><link>https://jbnu.macs.or.kr/en/publication/0024-selective-optimal-network-for-smartphone-based-indoor-localization-in-real-time/</link><pubDate>Mon, 20 Jan 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0024-selective-optimal-network-for-smartphone-based-indoor-localization-in-real-time/</guid><description/></item><item><title>Machine Learning-Enhanced Skull-Universal Acoustic Hologram for Efficient Transcranial Ultrasound Neuromodulation Across Varied Rodent Skulls</title><link>https://jbnu.macs.or.kr/en/publication/0022-machine-learning-enhanced-skull-universal-acoustic-hologram-for-efficient-transcranial-ultrasound-neuromodulation-across-varied-rodent-skulls/</link><pubDate>Sun, 01 Dec 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0022-machine-learning-enhanced-skull-universal-acoustic-hologram-for-efficient-transcranial-ultrasound-neuromodulation-across-varied-rodent-skulls/</guid><description/></item><item><title>(Fall 2024) Awarded at K-Health Medical AI Hackathon</title><link>https://jbnu.macs.or.kr/en/post/24-11-20-k-health-%EC%88%98%EC%83%81/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/24-11-20-k-health-%EC%88%98%EC%83%81/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>Undergraduate researchers Dayoung Kang, Sumin Kim, Sehyun Park, and Seo-Yeon Choi from MacsLAB won the Grand Prize at the 2024 K-Health Medical AI Hackathon.&lt;/p>
&lt;p>The competition focused on developing an AI model for breast mass segmentation using mammography image data. The students developed, trained, and tuned their own segmentation model, achieving high validation performance by utilizing public datasets in addition to the provided dataset, leading to their victory.&lt;/p></description></item><item><title>People</title><link>https://jbnu.macs.or.kr/en/people/</link><pubDate>Tue, 11 Jun 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/people/</guid><description/></item><item><title>Computational Science</title><link>https://jbnu.macs.or.kr/en/field/07-computational-science/</link><pubDate>Mon, 01 Apr 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/07-computational-science/</guid><description>&lt;p>Computational science research at MacsLAB focuses on numerically modeling complex natural and engineering systems and combining them with data-driven methods to provide predictive and explainable interpretations. We expand problems that are difficult to address through experiments alone into simulations and support practical decision-making by increasing consistency with real-world data.&lt;/p></description></item><item><title>AI</title><link>https://jbnu.macs.or.kr/en/field/00-ai/</link><pubDate>Sun, 31 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/00-ai/</guid><description>&lt;p>At MacsLAB, we develop specialized AI technologies applicable to diverse domains. Our research spans various fields, including medicine, aerospace, and content creation, with a primary focus on providing customized solutions for solving real-world problems. 
The core objective of our research is to realize a better future by effectively addressing challenges in these sectors through advanced AI techniques.&lt;/p></description></item><item><title>Medical Imaging</title><link>https://jbnu.macs.or.kr/en/field/01-medical-imaging/</link><pubDate>Fri, 29 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/01-medical-imaging/</guid><description>&lt;p>Medical AI research at MacsLAB focuses on disease diagnosis, environment improvement, and enhancing the efficiency of medical services. We develop cutting-edge technologies that enable early detection and accurate diagnosis of diseases by creating smartphone-based diagnostic tools, 3D ultrasound imaging platforms, and automated deep learning-based analysis systems. These technologies aim to maximize the potential of AI in the medical field and improve patient treatment outcomes.&lt;/p></description></item><item><title>Healthcare</title><link>https://jbnu.macs.or.kr/en/field/02-healthcare/</link><pubDate>Thu, 28 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/02-healthcare/</guid><description>&lt;p>MacsLAB focuses on the development of AI technologies applicable to the medical and healthcare sectors. We develop innovative solutions aimed at providing personalized medical services, including mobile-based diagnostic systems, federated learning for privacy protection, and automated deep learning platforms based on multi-task and few-shot learning for the mobile diagnosis of skin diseases. 
Our research is centered on providing high-quality diagnostic services while ensuring the protection of patient privacy.&lt;/p></description></item><item><title>Contents</title><link>https://jbnu.macs.or.kr/en/field/03-contents/</link><pubDate>Wed, 27 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/03-contents/</guid><description>&lt;p>AI research in Contents and Webtoons focuses on developing AI technologies for creative content production. Research in this field explores new visual storytelling methods using AI and seeks innovative ways to enhance user experience. Furthermore, we aim to open new horizons in webtoon production by leveraging Generative AI and natural language processing techniques. Through the combination of art and technology, we simultaneously pursue creativity and efficiency in the content creation process.&lt;/p></description></item><item><title>Aerospace</title><link>https://jbnu.macs.or.kr/en/field/04-aerospace/</link><pubDate>Tue, 26 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/04-aerospace/</guid><description>&lt;p>The development of AI technology in the aerospace field focuses on contributing to the advancement of aerospace science. This research includes the analysis of remote sensing imagery, building extraction from aerial images, and the interpretation of ground observation data, developing advanced AI algorithms for aerospace exploration and Earth observation. This enables more efficient data analysis and interpretation, opening new horizons for aerospace research.&lt;/p></description></item><item><title>Medical Mathematics</title><link>https://jbnu.macs.or.kr/en/field/05-mathematics/</link><pubDate>Mon, 25 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/05-mathematics/</guid><description>&lt;p>Research on mathematics and optimization theory related to AI serves to build the foundation for technological development. 
In our laboratory, we explore mathematical methodologies to improve the efficiency and accuracy of AI models through stochastic adaptive activation functions and optimization algorithms. This contributes to establishing the fundamental theoretical basis for the advancement of AI technology. Furthermore, we research techniques for treating diseases and statistically analyzing data based on mathematical modeling.&lt;/p></description></item><item><title>Development</title><link>https://jbnu.macs.or.kr/en/field/06-development/</link><pubDate>Sun, 24 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/field/06-development/</guid><description>&lt;p>In the field of development, we focus on Full-Stack application development, which plays a crucial role in the practical application and deployment of AI technologies developed in our laboratory. We provide customized solutions to meet real-user needs and solve problems, aiming to create social value by applying research results to everyday life.&lt;/p></description></item><item><title>Fine-Grained Binary Object Segmentation in Remote Sensing Imagery via Path-Selective Test-Time Adaptation</title><link>https://jbnu.macs.or.kr/en/publication/0020-fine-grained-binary-object-segmentation-in-remote-sensing-imagery-via-path-selective-test-time-adaptation/</link><pubDate>Wed, 20 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0020-fine-grained-binary-object-segmentation-in-remote-sensing-imagery-via-path-selective-test-time-adaptation/</guid><description/></item><item><title>Predicting Obstructive Sleep Apnea Based on Computed Tomography Scan Using Deep Learning Models</title><link>https://jbnu.macs.or.kr/en/publication/0021-predicting-obstructive-sleep-apnea-based-on-computed-tomography-scan-using-deep-learning-models/</link><pubDate>Tue, 12 Mar 2024 00:00:00 
+0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0021-predicting-obstructive-sleep-apnea-based-on-computed-tomography-scan-using-deep-learning-models/</guid><description/></item><item><title>(Spring 2024) Joined Department of Computer Science &amp; Artificial Intelligence, Jeonbuk National University</title><link>https://jbnu.macs.or.kr/en/post/24-03-02-%EC%8B%A0%EC%9E%84%EA%B5%90%EC%9B%90%EC%9E%84%EC%9A%A9/</link><pubDate>Fri, 08 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/post/24-03-02-%EC%8B%A0%EC%9E%84%EA%B5%90%EC%9B%90%EC%9E%84%EC%9A%A9/</guid><description>&lt;p>Professor Kyungsu Lee appointed to the Department of Computer Science &amp;amp; Artificial Intelligence, Jeonbuk National University.&lt;/p></description></item><item><title>Intelligent Bladder Volume Monitoring for Wearable Ultrasound Devices: Enhancing Accuracy through Deep Learning-based Coarse-to-Fine Shape Estimation</title><link>https://jbnu.macs.or.kr/en/publication/0019-bladder/</link><pubDate>Fri, 05 Jan 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0019-bladder/</guid><description/></item><item><title>Fine-Tuning Network in Federated Learning for Personalized Skin Diagnosis</title><link>https://jbnu.macs.or.kr/en/publication/0018-fine-tuning-network-in-federated-learning-for-personalized-skin-diagnosis/</link><pubDate>Sun, 01 Oct 2023 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0018-fine-tuning-network-in-federated-learning-for-personalized-skin-diagnosis/</guid><description/></item><item><title>Self-supervised Domain Adaptive Segmentation of Breast Cancer Via Test-time Fine-Tuning</title><link>https://jbnu.macs.or.kr/en/publication/0017-self-supervised-domain-adaptive-segmentation-of-breast-cancer-via-test-time-fine-tuning/</link><pubDate>Sun, 01 Oct 2023 00:00:00 
+0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0017-self-supervised-domain-adaptive-segmentation-of-breast-cancer-via-test-time-fine-tuning/</guid><description/></item><item><title>USIM Gate: Upsampling Module for Segmenting Precise Boundaries Concerning Entropy</title><link>https://jbnu.macs.or.kr/en/publication/0016-usim-gate--upsampling-module-for-segmenting-precise-boundaries-concerning-entropy/</link><pubDate>Thu, 27 Apr 2023 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0016-usim-gate--upsampling-module-for-segmenting-precise-boundaries-concerning-entropy/</guid><description/></item><item><title>CSS-Net: Classification and Substitution for Segmentation of Rotator Cuff Tear</title><link>https://jbnu.macs.or.kr/en/publication/0015-css-net--classification-and-substitution-for-segmentation-of-rotator-cuff-tear/</link><pubDate>Thu, 08 Dec 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0015-css-net--classification-and-substitution-for-segmentation-of-rotator-cuff-tear/</guid><description/></item><item><title>Stochastic Adaptive Activation Function</title><link>https://jbnu.macs.or.kr/en/publication/0014-stochastic-adaptive-activation-function/</link><pubDate>Sun, 04 Dec 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0014-stochastic-adaptive-activation-function/</guid><description/></item><item><title>Contact</title><link>https://jbnu.macs.or.kr/en/contact/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/contact/</guid><description/></item><item><title>Tour</title><link>https://jbnu.macs.or.kr/en/tour/</link><pubDate>Mon, 24 Oct 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/tour/</guid><description/></item><item><title>Deep-Learning Model Associating Lateral Cervical Radiographic Features with Cormack--Lehane Grade 3 or 4 Glottic
View</title><link>https://jbnu.macs.or.kr/en/publication/0013-deep-learning-model-associating-lateral-cervical-radiographic-features-with-cormack--lehane-grade-3-or-4-glottic-view/</link><pubDate>Wed, 05 Oct 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0013-deep-learning-model-associating-lateral-cervical-radiographic-features-with-cormack--lehane-grade-3-or-4-glottic-view/</guid><description/></item><item><title>USG-Net: Deep Learning-based Ultrasound Scanning-Guide for an Orthopedic Sonographer</title><link>https://jbnu.macs.or.kr/en/publication/0012-usg-net--deep-learning-based-ultrasound-scanning-guide-for-an-orthopedic-sonographer/</link><pubDate>Sat, 17 Sep 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0012-usg-net--deep-learning-based-ultrasound-scanning-guide-for-an-orthopedic-sonographer/</guid><description/></item><item><title>Multi-task and Few-shot Learning-based Fully Automatic Deep Learning Platform for Mobile Diagnosis of Skin Diseases</title><link>https://jbnu.macs.or.kr/en/publication/0010-multi-task-and-few-shot-learning-based-fully-automatic-deep-learning-platform-for-mobile-diagnosis-of-skin-diseases/</link><pubDate>Mon, 25 Jul 2022 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0010-multi-task-and-few-shot-learning-based-fully-automatic-deep-learning-platform-for-mobile-diagnosis-of-skin-diseases/</guid><description/></item><item><title>Speckle Reduction Via Deep Content-aware Image Prior for Precise Breast Tumor Segmentation in an Ultrasound Image</title><link>https://jbnu.macs.or.kr/en/publication/0011-speckle-reduction-via-deep-content-aware-image-prior-for-precise-breast-tumor-segmentation-in-an-ultrasound-image/</link><pubDate>Mon, 25 Jul 2022 00:00:00 
+0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0011-speckle-reduction-via-deep-content-aware-image-prior-for-precise-breast-tumor-segmentation-in-an-ultrasound-image/</guid><description/></item><item><title>Intelligent Smartphone-based Multimode Imaging Otoscope for the Mobile Diagnosis of Otitis Media</title><link>https://jbnu.macs.or.kr/en/publication/0009-intelligent-smartphone-based-multimode-imaging-otoscope-for-the-mobile-diagnosis-of-otitis-media/</link><pubDate>Tue, 23 Nov 2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0009-intelligent-smartphone-based-multimode-imaging-otoscope-for-the-mobile-diagnosis-of-otitis-media/</guid><description/></item><item><title>Self-Mutating Network for Domain Adaptive Segmentation in Aerial Images</title><link>https://jbnu.macs.or.kr/en/publication/0008-self-mutating-network-for-domain-adaptive-segmentation-in-aerial-images/</link><pubDate>Sun, 17 Oct 2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0008-self-mutating-network-for-domain-adaptive-segmentation-in-aerial-images/</guid><description/></item><item><title>Boundary-oriented Binary Building Segmentation Model with Two Scheme Learning for Aerial Images</title><link>https://jbnu.macs.or.kr/en/publication/0007-boundary-oriented-binary-building-segmentation-model-with-two-scheme-learning-for-aerial-images/</link><pubDate>Mon, 26 Jul 2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0007-boundary-oriented-binary-building-segmentation-model-with-two-scheme-learning-for-aerial-images/</guid><description/></item><item><title>Federated Learning for Thyroid Ultrasound Image Analysis to Protect Personal Information: Validation Study in a Real Health Care Environment</title><link>https://jbnu.macs.or.kr/en/publication/0006-federated-learning-for-thyroid-ultrasound-image-analysis-to-protect-personal-information--validation-study-in-a-real-health-care-environment/</link><pubDate>Tue, 18 May 
2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0006-federated-learning-for-thyroid-ultrasound-image-analysis-to-protect-personal-information--validation-study-in-a-real-health-care-environment/</guid><description/></item><item><title>Local Similarity Siamese Network for Urban Land Change Detection on Remote Sensing Images</title><link>https://jbnu.macs.or.kr/en/publication/0005-local-similarity-siamese-network-for-urban-land-change-detection-on-remote-sensing-images/</link><pubDate>Fri, 26 Mar 2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0005-local-similarity-siamese-network-for-urban-land-change-detection-on-remote-sensing-images/</guid><description/></item><item><title>Imbalanced Loss-Integrated Deep-Learning-based Ultrasound Image Analysis for Diagnosis of Rotator-Cuff Tear</title><link>https://jbnu.macs.or.kr/en/publication/0004-imbalanced-loss-integrated-deep-learning-based-ultrasound-image-analysis-for-diagnosis-of-rotator-cuff-tear/</link><pubDate>Mon, 22 Mar 2021 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0004-imbalanced-loss-integrated-deep-learning-based-ultrasound-image-analysis-for-diagnosis-of-rotator-cuff-tear/</guid><description/></item><item><title>Domain Adaptive Transfer Attack-based Segmentation Networks for Building Extraction from Aerial Images</title><link>https://jbnu.macs.or.kr/en/publication/0003-domain-adaptive-transfer-attack-based-segmentation-networks-for-building-extraction-from-aerial-images/</link><pubDate>Fri, 31 Jul 2020 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0003-domain-adaptive-transfer-attack-based-segmentation-networks-for-building-extraction-from-aerial-images/</guid><description/></item><item><title>Fully-automatic Deep Learning-based Analysis for Determination of the Invasiveness of Breast Cancer Cells in an Acoustic 
Trap</title><link>https://jbnu.macs.or.kr/en/publication/0002-fully-automatic-deep-learning-based-analysis-for-determination-of-the-invasiveness-of-breast-cancer-cells-in-an-acoustic-trap/</link><pubDate>Mon, 11 May 2020 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0002-fully-automatic-deep-learning-based-analysis-for-determination-of-the-invasiveness-of-breast-cancer-cells-in-an-acoustic-trap/</guid><description/></item><item><title>Wide-field 3D Ultrasound Imaging Platform with a Semi-Automatic 3D Segmentation Algorithm for Quantitative Analysis of Rotator Cuff Tears</title><link>https://jbnu.macs.or.kr/en/publication/0001-wide-field-3d-ultrasound-imaging-platform-with-a-semi-automatic-3d-segmentation-algorithm-for-quantitative-analysis-of-rotator-cuff-tears/</link><pubDate>Mon, 06 Apr 2020 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0001-wide-field-3d-ultrasound-imaging-platform-with-a-semi-automatic-3d-segmentation-algorithm-for-quantitative-analysis-of-rotator-cuff-tears/</guid><description/></item><item><title>Smartphone-based Spectral Imaging Otoscope: System Development and Preliminary Study for Evaluation of Its Potential As a Mobile Diagnostic Tool</title><link>https://jbnu.macs.or.kr/en/publication/0000-smartphone-based-spectral-imaging-otoscope--system-development-and-preliminary-study-for-evaluation-of-its-potential-as-a-mobile-diagnostic-tool/</link><pubDate>Thu, 05 Mar 2020 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/en/publication/0000-smartphone-based-spectral-imaging-otoscope--system-development-and-preliminary-study-for-evaluation-of-its-potential-as-a-mobile-diagnostic-tool/</guid><description/></item></channel></rss>