[C] Conference, [J] Journal, [W] Workshop, [P] Preprint, [*] Equal contribution, [^] Equal correspondence

2024

[P6] ConDS: Context Distribution Shift for Robust In-Context Learning

[P5] Distributed In-Context Learning Under Non-IID Among Clients

[P4] Prompt Learning with Noisy Labels

[P3] ORBIS: Open Dataset Can Rescue You From Dataset Bias Problem

[P2] Large Language Models In Medical Term Classification And Unexpected Misalignment Between Response and Reasoning

[P1] Augmented Risk Prediction for Onset of Alzheimer's Disease from Electronic Health Records with Large Language Models

[W7] VaCoDe: Visual Augmented Contrastive Learning (📄)

[W6] FedDr+: Stabilizing Dot-Regression with Global Feature Distillation for Federated Learning (📄)

[C11] Comparison of Prompt Engineering and Fine-Tuning Strategies in Large Language Models in the Classification of Clinical Notes (📄)

[C10] Fine-Tuning Pre-trained Models for Robustness Under Noisy Labels (📄)

[C9] Active Prompt Learning in Vision Language Models (📄)

2023

[C8] NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models (📄)

[C7] Denoising After Entropy-based Debiasing: A Robust Training Method for Dataset Bias with Noisy Labels (📄)

[C6/W3] CUDA: Curriculum of Data Augmentation for Long-tailed Recognition (📄)

[C5/W2] Mitigating Dataset Bias By Using Per-sample Gradient (📄)

[W1] Efficient Utilization of Pre-trained Model for Learning with Noisy Labels (📄)

Before 2022

[C4] Client Sampling Algorithm in Federated Learning via Combinatorial Averaging and Multi-armed Bandits (📄)

[C3] Neuro-DCF: Design of Wireless MAC via Multi-Agent Reinforcement Learning Approach (📄)

[C2] Enlarging Discriminative Power by Adding an Extra Class in Unsupervised Domain Adaptation (📄)

[C1] Multi-armed Bandit with Additional Observations (📄)