<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>News | MacsLAB</title><link>https://jbnu.macs.or.kr/ko/post/</link><atom:link href="https://jbnu.macs.or.kr/ko/post/index.xml" rel="self" type="application/rss+xml"/><description>News</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>ko</language><lastBuildDate>Sat, 21 Feb 2026 00:00:00 +0000</lastBuildDate><image><url>https://jbnu.macs.or.kr/media/icon_hu006eacee63deb1d8999057a3cbfdb748_78083_512x512_fill_lanczos_center_3.png</url><title>News</title><link>https://jbnu.macs.or.kr/ko/post/</link></image><item><title>Congratulations on CVPR 2026 Acceptance!</title><link>https://jbnu.macs.or.kr/ko/post/25-12-09-cvpr2026-accepted/</link><pubDate>Sat, 21 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-12-09-cvpr2026-accepted/</guid><description>&lt;p>We are excited to announce that our paper has been accepted to &lt;strong>CVPR 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Yeongsu Kim&lt;/strong>, &lt;strong>Seo-Yeon Choi&lt;/strong>, and &lt;strong>Kyungsu Lee&lt;/strong>,
&amp;ldquo;&lt;strong>Human-Intervention Segmentation via Federated Intent Embedding and Multi-Mask Recommendation&lt;/strong>.&amp;rdquo;&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>CVPR 2026 (Conference)&lt;/strong>&lt;/li>
&lt;li>Subject Area: &lt;strong>Vision applications and systems&lt;/strong>&lt;/li>
&lt;li>Keywords: &lt;strong>Computer Vision&lt;/strong>, &lt;strong>Machine Learning&lt;/strong>, &lt;strong>User Experience Design&lt;/strong>&lt;/li>
&lt;li>Student Paper: &lt;strong>Yes&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Abstract:&lt;/p>
&lt;p>Artificial intelligence (AI) has advanced radiology, yet variability across hospitals and devices undermines reliability and trust. We present a federated learning framework that combines frequency-domain harmonization and instruction-conditioned personalization to deliver consistent and interpretable diagnostic outcomes. Using FFT-based reconstructions informed by radiomics descriptors, the system reduces equipment dependency, while CLIP-based text conditioning enables clinicians to tailor reconstructions to local practices and patient needs. We evaluated the framework across four hospitals with fifteen radiologists and fifty patients, spanning polyp detection, rotator cuff tear diagnosis, pneumothorax classification, and breast cancer classification/segmentation. Results show significant gains in accuracy, calibration, and robustness under cross-site transfer, without introducing prohibitive latency. Radiologists reported improved interpretability and preserved professional agency, while patients expressed greater trust, reduced anxiety, and stronger acceptance of AI involvement. This work advances a human-centered design for medical AI, aligning federated learning with transparency, equity, and trustworthy deployment.&lt;/p>
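&lt;p>The implementation is not public as of this post, but the frequency-domain harmonization idea can be illustrated with a short sketch. Below is a minimal, hypothetical example (assuming NumPy; the function name and the &lt;code>beta&lt;/code> band size are illustrative choices, not from the paper): the low-frequency amplitude band, which tends to carry scanner- and site-specific intensity characteristics, is swapped toward a reference spectrum, while the phase, which carries anatomical structure, is kept intact.&lt;/p>
&lt;pre>&lt;code class="language-python">import numpy as np

def harmonize_low_freq(image, ref_amplitude, beta=0.1):
    # Swap the centered low-frequency amplitude band of the image FFT
    # with a reference amplitude spectrum (site/scanner statistics),
    # keeping the phase so anatomical structure is preserved.
    fft = np.fft.fft2(image)
    amp, phase = np.abs(fft), np.angle(fft)
    amp = np.fft.fftshift(amp)
    ref = np.fft.fftshift(ref_amplitude)
    h, w = image.shape
    ch, cw, bh, bw = h // 2, w // 2, int(h * beta), int(w * beta)
    amp[ch - bh:ch + bh, cw - bw:cw + bw] = ref[ch - bh:ch + bh, cw - bw:cw + bw]
    harmonized = np.fft.ifft2(np.fft.ifftshift(amp) * np.exp(1j * phase))
    return np.real(harmonized)
&lt;/code>&lt;/pre>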
&lt;p>Congratulations to the authors on this excellent result.&lt;/p></description></item><item><title>Congratulations on AISTATS 2026 Acceptance!</title><link>https://jbnu.macs.or.kr/ko/post/26-02-22-aistats2026-accepted/</link><pubDate>Sun, 01 Feb 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/26-02-22-aistats2026-accepted/</guid><description>&lt;p>We are excited to share that our paper has been accepted to &lt;strong>AISTATS 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Seo-Yeon Choi&lt;/strong> and &lt;strong>Kyungsu Lee&lt;/strong>*, &amp;ldquo;&lt;strong>TCP: Context-Aware Pooling via Top-k% Activation Selection&lt;/strong>,&amp;rdquo; &lt;em>International Conference on Artificial Intelligence and Statistics (AISTATS 2026)&lt;/em>.&lt;/p>
&lt;p>This is a strong result at a &lt;strong>Top BK/CS venue&lt;/strong>, and we warmly congratulate &lt;strong>Seo-Yeon Choi&lt;/strong> (first author) and &lt;strong>Kyungsu Lee&lt;/strong> (corresponding author).&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>AISTATS 2026&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>Top&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>BK/CS&lt;/strong>&lt;/li>
&lt;/ul>
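&lt;p>The paper is not yet available, but the pooling idea named in the title is straightforward to sketch: rather than global average pooling (all activations) or max pooling (a single activation), each channel is pooled by averaging only its top-k% spatial activations. A minimal, hypothetical PyTorch sketch (the function name and default are ours, not from the paper):&lt;/p>
&lt;pre>&lt;code class="language-python">import torch

def topk_percent_pool(x, k_percent=10.0):
    # x: feature maps of shape (N, C, H, W).
    # Average only the largest k% of spatial activations per channel,
    # interpolating between max pooling (k near 0) and
    # global average pooling (k = 100).
    n, c, h, w = x.shape
    flat = x.reshape(n, c, h * w)
    k = max(1, int(round(h * w * k_percent / 100.0)))
    top_vals, _ = flat.topk(k, dim=-1)
    return top_vals.mean(dim=-1)  # pooled descriptor of shape (N, C)
&lt;/code>&lt;/pre>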
&lt;p>Congratulations again to the authors on this excellent achievement.&lt;/p></description></item><item><title>Congratulations on CHI 2026 Acceptance!</title><link>https://jbnu.macs.or.kr/ko/post/26-02-22-chi2026-accepted/</link><pubDate>Thu, 22 Jan 2026 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/26-02-22-chi2026-accepted/</guid><description>&lt;p>We are delighted to announce that our paper has been accepted to &lt;strong>CHI 2026&lt;/strong>:&lt;/p>
&lt;p>&lt;strong>Seo-Yeon Choi&lt;/strong> and &lt;strong>Kyungsu Lee&lt;/strong>*, &amp;ldquo;&lt;strong>Human-Centered Personalization in Radiology AI: Evaluating Trust, Usability, and Cross-Hospital Robustness&lt;/strong>,&amp;rdquo; &lt;em>ACM CHI Conference on Human Factors in Computing Systems (CHI 2026)&lt;/em>.&lt;/p>
&lt;p>This paper was accepted to a &lt;strong>Top BK/CS conference&lt;/strong>, and we sincerely congratulate &lt;strong>Seo-Yeon Choi&lt;/strong> (first author) and &lt;strong>Kyungsu Lee&lt;/strong> (corresponding author).&lt;/p>
&lt;ul>
&lt;li>Venue: &lt;strong>CHI 2026&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>Top&lt;/strong>&lt;/li>
&lt;li>Badge: &lt;strong>BK/CS&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>Congratulations to the authors on this outstanding milestone.&lt;/p></description></item><item><title>(2025 KOSOMBE) 홍사강 Wins the Outstanding Poster Award</title><link>https://jbnu.macs.or.kr/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/</link><pubDate>Sat, 08 Nov 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>&lt;strong>홍사강&lt;/strong>, a master&amp;rsquo;s student at MacsLAB, received the &lt;strong>Outstanding Poster Award&lt;/strong>
at the &lt;strong>2025 Fall Conference of the Korean Society of Medical and Biological Engineering (KOSOMBE)&lt;/strong>.&lt;/p>
&lt;p>The award recognizes outstanding research presented at the conference,
held at Inje University (Gimhae) from November 6 to 8, 2025.&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="홍사강 학생 우수포스터상 수상 현장" srcset="
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_d5ea32f0dcef711ab43ee4bdd10b98da.webp 400w,
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_a8a68efbf626c2454f44c5b83686cc1e.webp 760w,
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-hong_hu0a24969032ad521939112188b0bc9088_2540243_d5ea32f0dcef711ab43ee4bdd10b98da.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Award ceremony at the 2025 KOSOMBE Fall Conference&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="우수포스터상 상장" srcset="
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_0f8171e174df474e2792120d3de6ab23.webp 400w,
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_a99aebc77cd68d0f0056ca3038aa8463.webp 760w,
/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/ko/post/25-11-08-kosombe-%EC%9A%B0%EC%88%98%ED%8F%AC%EC%8A%A4%ED%84%B0%EC%83%81/kosombe2025-award-certificate_hu01482258f0aeaddc3f97c8ad09dc40ed_2589951_0f8171e174df474e2792120d3de6ab23.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Outstanding Poster Award certificate&lt;/em>&lt;/p>
&lt;p>Award-winning paper:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>홍사강 (first author), 김준영, and Kyungsu Lee (corresponding author)&lt;/strong>&lt;/li>
&lt;li>&lt;strong>SAM2-based Bayesian Prompt Adaptation for Cross-Modality Medical Segmentation&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>We sincerely congratulate 홍사강 on this award and look forward to more outstanding research from MacsLAB.&lt;/p>
&lt;p>Related links:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://jbnu.macs.or.kr/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/">/publication/0034-sam2-based-bayesian-prompt-adaptation-for-cross-modality-medical-segmentation/&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://www.kosombe.or.kr/register/2025_fall/program/sub07.html" target="_blank" rel="noopener">대한의용생체공학회 2025 추계학술대회 프로그램&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>(AIxMHC 2025) Best Poster Award</title><link>https://jbnu.macs.or.kr/ko/post/25-10-15-aixmhc2025-best-poster-award/</link><pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-10-15-aixmhc2025-best-poster-award/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>The MacsLAB team of &lt;strong>Seo-Yeon Choi, Haeyun Lee, and Kyungsu Lee&lt;/strong> won the
&lt;strong>Best Poster Award&lt;/strong> at &lt;strong>AIxMHC 2025&lt;/strong>.&lt;/p>
&lt;p>The award-winning paper is:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Statistical Multi-Modal Fusion for Patient-Centric Medical Diagnosis Using DICOM&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Best Poster Award 상장" srcset="
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1ab0c8323a5516935bb253f89bd93dfd.webp 400w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_e9cb3708b2e9db178fccfa9ec8279297.webp 760w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-award-certificate_hu420fe1e716ed07059b84a96f8010be5d_2442018_1ab0c8323a5516935bb253f89bd93dfd.webp"
width="760"
height="428"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>AIxMHC 2025 Best Poster Award Certificate&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="AIxMHC 2025 현장" srcset="
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp 400w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_f8613804cec2e4591f337b40e6c0d879.webp 760w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-team_hue6024be751a019995de8a04ac33dd409_441204_e01dd8ed4f415c7b15fe0068462460b5.webp"
width="760"
height="570"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Group photo at AIxMHC 2025&lt;/em>&lt;/p>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="포스터 발표 현장" srcset="
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp 400w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_e010e70fae6dba5f4ae53abc41e85823.webp 760w,
/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_1200x1200_fit_q75_h2_lanczos.webp 1200w"
src="https://jbnu.macs.or.kr/ko/post/25-10-15-aixmhc2025-best-poster-award/aixmhc2025-poster_hu4708d36416e7036bee5557970d51c3a1_3489223_def59102ea64156a981431559c18ac02.webp"
width="428"
height="760"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;em>Poster presentation of the award-winning paper&lt;/em>&lt;/p>
&lt;p>MacsLAB will continue to pursue clinically meaningful research in medical AI.&lt;/p>
&lt;p>Related links:&lt;/p>
&lt;ul>
&lt;li>&lt;a href="https://jbnu.macs.or.kr/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/">/publication/0036-statistical-multi-modal-fusion-for-patient-centric-medical-diagnosis-using-dicom/&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://chaoneng.github.io/aixmhc2025.github.io/" target="_blank" rel="noopener">AIxMHC 2025&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Congratulations to Seo-Yeon Choi (Undergraduate Researcher, 학부연구생) on Two Papers Accepted to ICCV 2025 Workshops!</title><link>https://jbnu.macs.or.kr/ko/post/25-07-15-iccv2025-workshop-accepted/</link><pubDate>Sat, 19 Jul 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-07-15-iccv2025-workshop-accepted/</guid><description>
&lt;p>We are thrilled to announce that our undergraduate researcher, &lt;strong>Seo-Yeon Choi (최서연)&lt;/strong>, has achieved a remarkable accomplishment: &lt;strong>two papers have been accepted to ICCV 2025 workshops (CVAMD / VADH25)&lt;/strong>!&lt;/p>
&lt;p>Even more exciting, one paper was selected for an &lt;strong>oral presentation&lt;/strong> and the other for a &lt;strong>poster presentation&lt;/strong>. Having two papers accepted at such a prestigious venue as ICCV is a truly outstanding feat, especially for an undergraduate researcher. This is a testament to Seo-Yeon’s dedication, hard work, and innovative research.&lt;/p>
&lt;p>Congratulations once again, Seo-Yeon! We look forward to seeing both the oral and poster presentations at ICCV 2025 in Hawaii! 🌺🌴&lt;/p>
&lt;hr>
&lt;h2 id="patient-centric-statistical-multi-modal-fusion-for-medical-diagnosis-integrating-dicom-radiomics-and-patient-attributes">Patient-Centric Statistical Multi-Modal Fusion for Medical Diagnosis: Integrating DICOM, Radiomics, and Patient Attributes&lt;/h2>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Info1" srcset="
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_dbc3182c80acc67fbf2fe26e94091705.webp 400w,
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_d6d84911d7285fedee838b6a4e15187c.webp 760w,
/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/media/ICCVW2025/VADH25_hud6f6290f3a18db0f534527358b362b21_95045_dbc3182c80acc67fbf2fe26e94091705.webp"
width="760"
height="344"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="abstract">Abstract&lt;/h3>
&lt;p>Deep learning (DL) has led to substantial progress in medical image analysis, particularly for disease classification. However, the integration of patient-specific attributes, such as age, body mass index (BMI), and lifestyle factors, with radiomics and raw imaging data remains a key challenge in the development of personalized diagnostic models. To address this, we propose a novel multi-modal framework, denoted as Statistically Coherent Network (SCN), which jointly models imaging data, radiomic features, and patient metadata through a structured multi-space latent representation. SCN facilitates distributional coherence across subpopulations by leveraging a newly devised statistics-based loss in conjunction with a triplet loss, thereby aligning feature distributions among clinically similar cohorts. This t-test-based statistical alignment facilitates more interpretable and robust representation learning across heterogeneous patient groups. We evaluate SCN on four clinically diverse tasks, including breast cancer (mammography), obstructive sleep apnea (CT), rotator cuff tear (MRI), and Cormack-Lehane grading (X-ray), and demonstrate consistent improvements over conventional single-space and multi-modal baselines. The experimental results highlight the importance of explicitly incorporating patient metadata in multimodal learning to enhance model generalizability and clinical relevance.&lt;/p>
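&lt;p>The exact formulation of the statistics-based loss is given in the paper; as a rough, hypothetical illustration of how a t-test can become a differentiable alignment objective (assuming PyTorch; the Welch-style form and all names below are illustrative guesses, not the published loss), one can penalize the per-dimension t-statistic between the embeddings of two clinically similar cohorts:&lt;/p>
&lt;pre>&lt;code class="language-python">import torch

def welch_t_alignment_loss(feat_a, feat_b, eps=1e-6):
    # feat_a, feat_b: embeddings of two cohorts, shapes (Na, D) and (Nb, D).
    # Per-dimension Welch t-statistic; a small magnitude means the two
    # cohorts share similar feature statistics.
    mean_a, mean_b = feat_a.mean(0), feat_b.mean(0)
    var_a, var_b = feat_a.var(0), feat_b.var(0)
    na, nb = feat_a.shape[0], feat_b.shape[0]
    t = (mean_a - mean_b) / torch.sqrt(var_a / na + var_b / nb + eps)
    return t.abs().mean()
&lt;/code>&lt;/pre>
&lt;p>Driving this statistic toward zero pushes the two cohorts to share feature means relative to their variances, which matches the stated goal of distributional coherence across subpopulations.&lt;/p>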
&lt;hr>
&lt;h2 id="memory-guided-personalization-for-physician-specific-diagnostic-inference">Memory-Guided Personalization for Physician-Specific Diagnostic Inference&lt;/h2>
&lt;p>
&lt;figure >
&lt;div class="d-flex justify-content-center">
&lt;div class="w-100" >&lt;img alt="Info1" srcset="
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_f410bd5bc1b00259ce8a46d2fa0e8c80.webp 400w,
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_137ea6e216745a85720efe3c219c4721.webp 760w,
/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_1200x1200_fit_q75_h2_lanczos_3.webp 1200w"
src="https://jbnu.macs.or.kr/media/ICCVW2025/CVAMD25_hu8df5408106eb8ca5ee757ac685ce145c_335676_f410bd5bc1b00259ce8a46d2fa0e8c80.webp"
width="760"
height="369"
loading="lazy" data-zoomable />&lt;/div>
&lt;/div>&lt;/figure>
&lt;/p>
&lt;h3 id="abstract-1">Abstract&lt;/h3>
&lt;p>Recent advances in deep learning have improved diagnostic precision across medical imaging tasks. However, clinical adoption remains limited due to a mismatch between model outputs and the diverse reasoning styles of physicians. Prior personalization efforts have primarily focused on patient-specific adaptation, overlooking clinician-specific variability. We propose a physician-centric diagnostic framework that supports real-time, adaptive inference tailored to individual clinicians. The system consists of three stages: supervised learning, Human-in-the-Loop guidance, and personalized deployment. Physician feedback is encoded as memory-based priors and reused at inference without retraining, enabling lightweight, end-to-end personalization. We validate our method on detection and segmentation tasks including parathyroid localization, breast cancer segmentation, and rotator cuff tear analysis. Results demonstrate that our model adapts effectively to individual diagnostic styles while maintaining high accuracy in clinical workflows.&lt;/p>
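&lt;p>The memory mechanism is detailed in the paper; the following is only a minimal, hypothetical sketch of the general idea of reusing physician feedback as memory-based priors at inference time (assuming PyTorch; the class, its methods, and the similarity-weighted read are illustrative choices, not the published design):&lt;/p>
&lt;pre>&lt;code class="language-python">import torch

class PhysicianMemory:
    # Tiny key-value store: keys are case embeddings, values are encoded
    # physician feedback. At inference, feedback from similar past cases is
    # retrieved as a prior, with no gradient updates or retraining.
    def __init__(self):
        self.keys, self.values = [], []

    def write(self, case_emb, feedback_emb):
        # Record one (case, feedback) pair collected during HITL guidance.
        self.keys.append(case_emb)
        self.values.append(feedback_emb)

    def read(self, query, temperature=0.1):
        # Similarity-weighted average of stored feedback embeddings;
        # assumes at least one pair has been written.
        keys = torch.stack(self.keys)  # (M, D)
        sims = torch.nn.functional.cosine_similarity(keys, query.unsqueeze(0), dim=-1)
        weights = torch.softmax(sims / temperature, dim=0)
        return (weights.unsqueeze(-1) * torch.stack(self.values)).sum(0)
&lt;/code>&lt;/pre>
&lt;p>How the retrieved prior conditions the prediction head is model-specific; the point of the sketch is that personalization reduces to a memory lookup rather than retraining.&lt;/p>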
&lt;hr>
&lt;p>Once again, congratulations to Seo-Yeon Choi for this outstanding achievement. Let’s look forward to an inspiring and impactful presentation at ICCV 2025! 🚀🎉&lt;/p>
</description></item><item><title>Congratulations to Yeongsu Kim (Undergraduate Researcher, 학부연구생) on ICLR 2025 Workshop Acceptance!</title><link>https://jbnu.macs.or.kr/ko/post/25-03-03-iclr2025-workshop-accepted/</link><pubDate>Mon, 03 Mar 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-03-03-iclr2025-workshop-accepted/</guid><description>&lt;p>Exciting news! Our undergraduate researcher, Yeongsu Kim, has achieved an outstanding milestone: his paper has been accepted to the ML4RS Workshop at ICLR 2025!&lt;/p>
&lt;p>As an undergraduate student, getting a paper into a prestigious venue like ICLR is no small feat, and this accomplishment is a testament to his dedication and hard work. Congratulations once again, Yeongsu!&lt;/p>
&lt;p>Looking forward to seeing the research presented in Singapore this April! 🚀🎉&lt;/p>
&lt;h4 id="abstract">Abstract&lt;/h4>
&lt;p>Over the past few decades, geospatial objects have been extensively recognized as significant components in remote sensing applications, including environmental monitoring, urban planning, and defense. In particular, accurate object segmentation is essential for extracting meaningful observations from aerial imagery, motivating deep learning-based methodologies. However, conventional deep learning-based segmentation methodologies exhibit limited generalization capabilities across diverse geographical domains due to inherent variations in regional characteristics and data distribution shifts. Furthermore, most existing approaches strongly rely on static, pre-trained models that lack the adaptability to handle previously unseen data. To alleviate these limitations, we propose a novel Few-shot Semi-Online Adaptation framework incorporating interactive user feedback to iteratively refine segmentation outputs. By leveraging online learning and test-time adaptation, our approach enables models to continuously improve from minimal user corrections, ensuring flexibility and adaptability to new environments. Experimental results demonstrate that our method effectively enhances segmentation accuracy with minimal user intervention, bridging the gap between automated segmentation and domain-specific expertise. Our research contributes to the development of interactive, user-adaptive segmentation models to facilitate geospatial analysis more efficiently and reliably.&lt;/p></description></item><item><title>One Paper Accepted to ICLR 2025!</title><link>https://jbnu.macs.or.kr/ko/post/25-01-23-iclr2025-accepted/</link><pubDate>Thu, 23 Jan 2025 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/25-01-23-iclr2025-accepted/</guid><description>&lt;p>Thrilled to announce that our paper, &amp;ldquo;Connectome Mapping: Shape-Memory Network via Interpretation of Contextual Semantic Information,&amp;rdquo; has been accepted to ICLR 2025! See you in Singapore in April!&lt;/p>
&lt;h4 id="abstract">Abstract&lt;/h4>
&lt;p>Contextual semantic information plays a pivotal role in the brain&amp;rsquo;s visual interpretation of the surrounding environment. When processing visual information, electrical signals within synapses facilitate the dynamic activation and deactivation of synaptic connections, guided by the contextual semantic information associated with different objects. In the realm of Artificial Intelligence (AI), neural networks have emerged as powerful tools to emulate complex signaling systems, enabling tasks such as classification and segmentation by understanding visual information. However, conventional neural networks have limitations in simulating the conditional activation and deactivation of synapses, collectively known as the connectome, a comprehensive map of neural connections in the brain. Additionally, the pixel-wise inference mechanism of conventional neural networks fails to account for the explicit utilization of contextual semantic information in the prediction process. To overcome these limitations, we developed a novel neural network, dubbed the Shape Memory Network (SMN), which excels in two key areas: (1) faithfully emulating the intricate mechanism of the brain&amp;rsquo;s connectome, and (2) explicitly incorporating contextual semantic information during the inference process. The SMN memorizes the structure suitable for contextual semantic information and leverages this structure at the inference phase. The structural transformation emulates the conditional activation and deactivation of synaptic connections within the connectome. Rigorous experimentation carried out across a range of semantic segmentation benchmarks demonstrated the outstanding performance of the SMN, highlighting its superiority and effectiveness. Furthermore, our pioneering work on connectome emulation reveals the immense potential of the SMN for next-generation neural networks.&lt;/p></description></item><item><title>(Fall 2024) Grand Prize at the K-Health Medical AI Hackathon</title><link>https://jbnu.macs.or.kr/ko/post/24-11-20-k-health-%EC%88%98%EC%83%81/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/24-11-20-k-health-%EC%88%98%EC%83%81/</guid><description>&lt;p>Congratulations!&lt;/p>
&lt;p>MacsLAB undergraduate researchers 강다영, 김수민, 박세현, and Seo-Yeon Choi (최서연) won the grand prize (최우수상) at the 2024 K-Health Medical AI Hackathon.&lt;/p>
&lt;p>The competition focused on developing an AI segmentation model for breast mass regions in mammography images. The students developed, trained, and tuned their own segmentation model and, by leveraging public datasets in addition to the dataset provided by the competition, achieved high validation performance, earning the grand prize.&lt;/p></description></item><item><title>(Spring 2024) Appointment to the Division of Computer Science and Artificial Intelligence, Jeonbuk National University</title><link>https://jbnu.macs.or.kr/ko/post/24-03-02-%EC%8B%A0%EC%9E%84%EA%B5%90%EC%9B%90%EC%9E%84%EC%9A%A9/</link><pubDate>Fri, 08 Mar 2024 00:00:00 +0000</pubDate><guid>https://jbnu.macs.or.kr/ko/post/24-03-02-%EC%8B%A0%EC%9E%84%EA%B5%90%EC%9B%90%EC%9E%84%EC%9A%A9/</guid><description>&lt;p>Professor Kyungsu Lee has been appointed to the Division of Computer Science and Artificial Intelligence at Jeonbuk National University.&lt;/p>
</description></item></channel></rss>