Seongjun Yang
Hi, I am an NLP Research Engineer at KRAFTON AI. Prior to joining KRAFTON AI, I earned my Bachelor's degree in Computer Science from Yonsei University and completed my Master's at the Graduate School of AI at Korea Advanced Institute of Science and Technology (KAIST), where I was fortunate to be advised by Prof. Edward Choi.
My research interests center on developing methods for trustworthy AI systems, particularly on mitigating risks such as privacy leakage and other threats that arise when handling diverse data types. I also aim to build evaluation frameworks that enhance the safety and accountability of generative AI, with a focus on high-confidence decision-making in domains such as patents, law, and healthcare.
Email / CV / Google Scholar / Github / Twitter
You can view all papers by visiting Google Scholar or the Publications section of the CV.
Predictive pipelined decoding: A compute-latency trade-off for exact LLM decoding
Seongjun Yang *, Gibbeum Lee *, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee
TMLR 2024
Paper
This paper presents Predictive Pipelined Decoding (PPD), an approach that accelerates decoding in large language models (LLMs) while producing exactly the same output as the original decoding.
Towards the Practical Utility of Federated Learning in the Medical Domain
Seongjun Yang *, Hyeonji Hwang *, Daeyoung Kim, Radhika Dua, Jong-Yeup Kim, Eunho Yang, Edward Choi
CHIL 2023
Paper / Code
This study introduces federated learning benchmarks on three medical datasets to aid adoption of federated learning in healthcare. We evaluate six FL algorithms and a hybrid method (FedPxN), and find that simpler methods often outperform more complex ones, with the hybrid performing consistently well.
Task agnostic and post-hoc unseen distribution detection
Radhika Dua, Seongjun Yang, Yixuan Li, Edward Choi
WACV 2023
Paper / Code
This study designs a novel clustering-based ensembling method, Task-Agnostic and Post-hoc Unseen Distribution Detection (TAPUDD), which detects samples from unseen distributions using features extracted from a model trained on a specific task.
EHRSQL: A practical text-to-SQL benchmark for electronic health records
Gyubok Lee, Hyeonji Hwang, Seongsu Bae, Yeonsu Kwon, Woncheol Shin, Seongjun Yang, Minjoon Seo, Jong-Yeup Kim, Edward Choi
NeurIPS 2022 Datasets and Benchmarks
Paper / Code
The study introduces a new EHR text-to-SQL dataset with time expressions and unanswerable questions, collected from 222 hospital staff and linked to the MIMIC-III and eICU databases. It challenges models to generate SQL queries for diverse hospital needs, handle time-sensitive questions, and determine question answerability based on prediction confidence.
Improving lexically constrained neural machine translation with source-conditioned masked span prediction
Gyubok Lee *, Seongjun Yang *, Edward Choi
ACL 2021
Paper / Code
The paper proposes a benchmark and a training strategy inspired by masked span prediction to improve neural machine translation of domain-specific terms. The approach enhances both terminology translation accuracy and sentence-level translation quality across three specialized datasets in two language pairs.
NLP Research Engineer, KRAFTON AI
Division director: Prof. Kangwook Lee
Instruction-tune LLMs such as LLaMA and develop prompting strategies for in-game applications.
AI Researcher, NHN Cloud
Designed tutorials for benchmarking Korean language models.
Graduate Student Researcher, KAIST
Advisor: Prof. Edward Choi
Conducted research on federated learning and natural language processing at KAIST's Graduate School of AI.
Teaching
Industry-academia collaboration project
Supervisor: Prof. Wooju Kim
Mentor: Prof. Haemin Jung
Sponsor: HYUNDAI NGV
Period: June 2018 – October 2019
Project Name: Research on methodologies for developing an information classification system and prototype
Performed text data extraction and semantic parsing from various file types (PDF, plain text).
Design and source code adapted from the websites of Jon Barron and Radhika Dua.