People

Alan Akbik

Principal Investigator

Machine Learning

HU Berlin

Photo: SCIoI

Alan focuses on research in machine learning (ML) and natural language processing (NLP), with the goal of giving machines the ability to understand and use human language. This spans research topics such as neural language modeling, sample-efficient learning, and semantic parsing, as well as application areas such as large-scale text analytics. Together with his group and the open source community, he develops the NLP framework Flair (https://github.com/flairNLP/flair), which allows anyone to use state-of-the-art NLP methods in their research or applications. At SCIoI, Alan works on Project A002, Project 44, and Project 45.
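
As an illustration of the kind of workflow Flair supports, here is a minimal Python sketch that tags named entities in a sentence. It assumes a current Flair release; the pre-trained "ner" model and the example sentence are chosen only for illustration.

from flair.data import Sentence
from flair.models import SequenceTagger

# load a pre-trained named entity recognition (NER) tagger
tagger = SequenceTagger.load("ner")

# create a sentence and let the tagger predict entity labels for it
sentence = Sentence("Alan Akbik works on natural language processing in Berlin.")
tagger.predict(sentence)

# print the entity spans found in the sentence
for entity in sentence.get_spans("ner"):
    print(entity)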


Projects

Alan Akbik is a member of:

Project A002
Project 44
Project 45

Publications

Garbaciauskas, L., Ploner, M., & Akbik, A. (2024). Choose Your Transformer: Improved Transferability Estimation of Transformer Models on Classification Tasks. ACL 2024.
Ziletti, A., Akbik, A., Berns, C., Herold, T., Legler, M., & Viell, M. (2022). Medical Coding with Biomedical Transformer Ensembles and Zero/Few-shot Learning. NAACL 2022, 176–187. https://doi.org/10.18653/v1/2022.naacl-industry.21
Milich, M., & Akbik, A. (2023). ZELDA: A Comprehensive Benchmark for Supervised Entity Disambiguation. EACL 2023. https://doi.org/10.18653/v1/2023.eacl-main.151
Schulte, D., Hamborg, F., & Akbik, A. (2024). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. EMNLP 2024. https://doi.org/10.18653/v1/2024.emnlp-main.529
Dallabetta, M., Dobberstein, C., Breiding, A., & Akbik, A. (2024). Fundus: A Simple-to-Use News Scraper Optimized for High Quality Extractions. ACL 2024. https://doi.org/10.48550/arXiv.2403.15279
Garbaciauskas, L., Ploner, M., & Akbik, A. (2024). TransformerRanker: A Tool for Efficiently Finding the Best-Suited Language Models for Downstream Classification Tasks. EMNLP 2024.
Golde, J., Hamborg, F., & Akbik, A. (2024). Large-Scale Label Interpretation Learning for Few-Shot Named Entity Recognition. EACL 2024. https://doi.org/10.48550/arXiv.2403.14222
Golde, J., Haller, P., Hamborg, F., Risch, J., & Akbik, A. (2023). Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs. EMNLP 2023. https://doi.org/10.18653/v1/2023.emnlp-demo.1
Haller, P., Aynetdinov, A., & Akbik, A. (2024). OpinionGPT: Modelling Explicit Biases in Instruction-Tuned LLMs. NAACL 2024. https://doi.org/10.18653/v1/2024.naacl-demo.8
Haller, P., Golde, J., & Akbik, A. (2024). PECC: Problem Extraction and Coding Challenges. COLING 2024. https://doi.org/10.48550/arXiv.2404.18766
Ploner, M., Wiland, J., Pohl, S., & Akbik, A. (2024). LM-Pub-Quiz: A Comprehensive Framework for Zero-Shot Evaluation of Relational Knowledge in Language Models. EMNLP 2024.
Ploner, M., & Akbik, A. (2024). Parameter-Efficient Fine-Tuning: Is There An Optimal Subset of Parameters to Tune? EACL 2024.
Rücker, S., & Akbik, A. (2023). CleanCoNLL: A Nearly Noise-Free Named Entity Recognition Dataset. EMNLP 2023. https://doi.org/10.18653/v1/2023.emnlp-main.533
Schulte, D., Hamborg, F., & Akbik, A. (2024). NoiseBench: Benchmarking the Impact of Real Label Noise on Named Entity Recognition. EMNLP 2024. https://aclanthology.org/2024.emnlp-main.1011/
Wiland, J., Ploner, M., & Akbik, A. (2024). BEAR: A Unified Framework for Evaluating Relational Knowledge in Causal and Masked Language Models. NAACL 2024. https://doi.org/10.18653/v1/2024.findings-naacl.155

Awards

EACL Outstanding Paper Award (2023)

Emmy Noether Grant (2021)
