December 7-8, 2022
CoNLL is a yearly conference organized by SIGNLL (ACL's Special Interest Group on Natural Language Learning), focusing on theoretically, cognitively and scientifically motivated approaches to computational linguistics.
This year, CoNLL will be held in a hybrid format: colocated with EMNLP 2022 but also fully accessible online. The conference schedule will accommodate attendees at the EMNLP venue, and we plan to hold an extra in-person poster session.
CoNLL 2022 Chairs and Organizers
The conference's co-chairs are:
- Antske Fokkens (Vrije Universiteit Amsterdam and Eindhoven University of Technology, The Netherlands)
- Vivek Srikumar (University of Utah, USA)
- Publicity Chair: Jack Hessel (Allen Institute for AI)
- Publication Chair: R. Thomas McCoy (Johns Hopkins University)
- Local Chairs: Pia Sommerauer and Angel Daza
- SIGNLL President: Julia Hockenmaier (University of Illinois at Urbana-Champaign, USA)
- SIGNLL Secretary: Afra Alishahi (Tilburg University, Netherlands)
The program can be found here.
December 9th 2022: The keynote speakers have been announced! Please refer to the dedicated keynote talks section below for more info.
December 8th 2022: The RocketChat for the conference is now available.
November 28th 2022: The program is now available!
July 26th 2022: The ARR commitment page is now available on Softconf. You can commit ARR papers by clicking "Make a new Submission".
April 27th 2022: The Call for Papers is published (see below).
On Day 1
Noah Goodman, Stanford University
What is language for? Talking, teaching, thinking.
Language is a key differentiator between humans and other animals. But what is language for? In this talk I'll first describe the neo-Gricean view that language is for conveying concrete information. Formalizing this view in the Rational Speech Act framework explains human language use in coordination games rather well. I'll illustrate this with work on hyperbole and metaphor understanding. And yet the more important roles of language may be intergenerational, that is, teaching, and internal, for thinking. I will describe work exploring language used in 'teaching games' and consider the role of language as a medium for thinking in recent work on large language models.
On Day 2
Allyson Ettinger, University of Chicago
“Understanding” and prediction: Disentangling meaning extraction and predictive processes in language models and in humans
The interaction between "understanding" and prediction is a central theme both in current NLP and in psycholinguistics. Evidence indicates that the human brain engages in predictive processing while extracting the meaning of language in real time, while the language models that dominate NLP use training based on prediction in context to learn strategies of language "understanding". In this talk I will discuss work that tackles key problems in both domains by exploring and teasing apart effects of compositional meaning extraction and effects of statistical-associative processes associated with prediction. I will begin with work that diagnoses the linguistic capabilities of popular pre-trained language models, investigating the extent to which these models exhibit robust compositional meaning processing resembling that of humans, versus shallower heuristic sensitivities associated with predictive processes. I will show that with properly controlled tests, we identify important limitations in the capacities of current models to handle compositional meaning as humans do. However, the models' behaviors do show signs of aligning with statistical sensitivities associated with predictive mechanisms in human real-time processing. Leveraging this knowledge, I will then turn to work that directly models the mechanisms underlying human real-time language comprehension, with a focus on understanding how the robust compositional meaning extraction processes exhibited by humans interact with probabilistic predictive mechanisms. I will show that by combining psycholinguistic theory with targeted use of measures from language models, we can strengthen the explanatory power of psycholinguistic theories and achieve nuanced accounts of interacting factors underlying a wide range of observed effects in human language processing.
Allyson Ettinger is an Assistant Professor in the Departments of Linguistics and Computer Science at the University of Chicago. Her interdisciplinary work combines methods and insights from cognitive science, linguistics, and computer science to examine meaning extraction and predictive processes executed during language processing in artificial intelligence systems and in humans. She received her PhD in Linguistics from the University of Maryland, and spent a year as research faculty at the Toyota Technological Institute at Chicago (TTIC) before beginning her appointment at the University of Chicago. She holds an additional courtesy appointment at TTIC.
Call For Papers
SIGNLL invites submissions to the 26th Conference on Computational Natural Language Learning (CoNLL 2022). The focus of CoNLL is on theoretically, cognitively and scientifically motivated approaches to computational linguistics, rather than on work driven by particular engineering applications. Such approaches include:
- Computational learning theory and other techniques for theoretical analysis of machine learning models for NLP
- Models of first, second and bilingual language acquisition by humans
- Models of language evolution and change
- Computational simulation and analysis of findings from psycholinguistic and neurolinguistic experiments
- Analysis and interpretation of NLP models, using methods inspired by cognitive science or linguistics or other methods
- Data resources, techniques and tools for scientifically-oriented research in computational linguistics
- Connections between computational models and formal languages or linguistic theories
- Linguistic typology, translation, and other multilingual work
- Theoretically, cognitively and scientifically motivated approaches to text generation
We welcome work targeting any aspect of language, including:
- Speech and phonology
- Syntax and morphology
- Lexical, compositional and discourse semantics
- Dialogue and interactive language use
- Multimodal and grounded language learning
We do not restrict the topic of submissions to fall into this list. However, the submissions’ relevance to the conference’s focus on theoretically, cognitively and scientifically motivated approaches will play an important role in the review process.
Submitted papers must be anonymous and use the EMNLP 2022 template. Submitted papers may consist of up to 8 pages of content plus unlimited space for references. Authors of accepted papers will have an additional page to address reviewers' comments in the camera-ready version (9 pages of content in total, excluding references). Optional anonymized supplementary materials and a PDF appendix are allowed, according to the EMNLP 2022 guidelines. Please refer to the EMNLP 2022 Call for Papers for more details on the submission format. Submission is electronic, using the Softconf START conference management system. Note that, unlike EMNLP, we do not mandate that papers include a section discussing the limitations of the work. However, we strongly encourage authors to include such a section in the appendix.
CoNLL adheres to the ACL anonymity policy, as described in the EMNLP 2022 Call for Papers. Briefly, non-anonymized manuscripts submitted to CoNLL cannot be posted to preprint websites such as arXiv or advertised on social media after May 30th, 2022.
Multiple submission policy
CoNLL 2022 will not accept papers that are currently under submission, or that will be submitted to other meetings or publications, including EMNLP. Papers submitted elsewhere as well as papers that overlap significantly in content or results with papers that will be (or have been) published elsewhere will be rejected. Authors submitting more than one paper to CoNLL 2022 must ensure that the submissions do not overlap significantly (>25%) with each other in content or results.
CoNLL 2022 has the same policy as EMNLP 2022 regarding ARR submissions. This means that CoNLL 2022 will also accept submissions of ARR-reviewed papers, provided that the ARR reviews and meta-reviews are available by the ARR commitment deadline. We follow the EMNLP policy for papers that were previously submitted to ARR, or significantly overlap (>25%) with such submissions.
- Anonymity period begins: May 30th, 2022
- Submission deadline for START direct submissions: Thursday June 30th, 2022
- Commitment deadline for ARR papers: August 1st, 2022
- Notification of acceptance: Mid-September, 2022
- Camera ready papers due: October 20th, 2022 (extended)
- Conference: December 7th, 8th, 2022
All deadlines are at 11:59pm UTC-12h ("anywhere on earth").
Areas and ACs
- Computational Social Science: Tanmoy Chakraborty, Dan Goldwasser, Preslav Nakov
- Interaction and Grounded Language Learning: Dipendra Misra, Mark Yatskar
- Lexical, Compositional and Discourse Semantics: Jena Hwang, Nathan Schneider, Adina Williams
- Multilingual Work and Translation: Maja Popovic, Rui Wang
- Natural Language Generation: Ryan Cotterell, Nanyun Peng
- Resources and tools for scientifically motivated research: Andrew Caines, Roi Reichart
- Simulation and analysis of findings from psycholinguistic and neurolinguistic experiments; language evolution, acquisition, and linguistic theories: Micha Elsner, Nora Hollenstein
- Speech and Phonology: Emily Prud’hommeaux
- Syntax and Morphology: Rob van der Goot, Joseph Le Roux
- Theoretical Analysis and Interpretation of ML models for NLP: Kai-Wei Chang, Dieuwke Hupkes, Kevin Small