CoNLL 2020

November 19-20, 2020

CoNLL is a yearly conference organized by SIGNLL (ACL's Special Interest Group on Natural Language Learning). This year, CoNLL is co-located with EMNLP 2020 and, like EMNLP, will be a fully virtual conference.

The CoNLL 2020 proceedings have been published and the conference schedule is available.


CoNLL 2020 Chairs

Our email is conll2020chairs@gmail.com.


News

November 19: Congratulations to Cory Shain and Micha Elsner, winners of the Best Paper Award for Acquiring language from speech by learning to remember and predict!

November 18: The program of the Shared Task is available here.

November 12: The CoNLL 2020 proceedings and the conference program are now online!

October 9: Camera ready versions are due today. You have an extra page to address the reviewers' comments (9 pages in total). Following the EMNLP 2020 policy, acknowledgments and references do not count towards this page limit.

September 21: The list of accepted papers is available. In addition to these papers, which will be published in the proceedings of CoNLL, we may also accept a small number of highly relevant Findings of EMNLP papers for presentation, through a centralized application process to be run by the EMNLP Workshop Chairs. We will not be able to accommodate requests from individual authors, sorry!

September 18: Notifications of acceptance have been sent to contact authors. Out of the 227 submissions that were sent out for review, 53 were accepted. Thanks to all the authors, reviewers, and area chairs for their work!

July 20: The submission deadline (July 17) has passed. CoNLL will not have an author response period; decisions will be communicated to authors on September 18, 2020, following deliberations between the reviewers, the area chairs and the program chairs.

July 2: The submission site is now available: https://www.softconf.com/emnlp2020/conll2020/

April 22: CoNLL 2020 will host a shared task on Cross-Framework Meaning Representation Parsing.

April 14: Following the EMNLP program chairs' decision to move EMNLP online and change its dates, CoNLL 2020 will also be a virtual conference. The CoNLL submission deadline has been moved to July 17, and the conference will now take place on November 19-20.

March 30: Given the coronavirus crisis, we are coordinating with the EMNLP program chairs, and will adopt their decision as to the format of the conference (virtual, physical, or hybrid). Regardless of the format of the conference, we are sticking to our original paper submission deadline for now (June 5).

February 27: We are soliciting expressions of interest from individuals who would like to serve as Area Chairs for the conference. Area Chairs will help recruit reviewers, coordinate the reviewing process, and create a diverse and high-quality program consistent with the scope of the conference (see the Call for Papers below). If you are willing to be considered, please fill out this form by March 11, 2020.


Invited Speakers

  • Emmanuel Dupoux (Ecole des Hautes Etudes en Sciences Sociales and Facebook AI Research, Paris, France)
    Learning Language Like Infants Do: Self Supervised Learning From Raw Audio
  • Kristina Toutanova (Google, Seattle, USA)
    Toward Progress in Text Representations for Question Answering

Call for Papers

SIGNLL invites submissions to the 24th Conference on Computational Natural Language Learning (CoNLL 2020). The main focus of CoNLL is on theoretically, cognitively and scientifically motivated approaches to computational linguistics, rather than on work driven by particular engineering applications. Such approaches include:

  • Computational learning theory and other techniques for theoretical analysis of machine learning models for NLP
  • Models of first, second and bilingual language acquisition by humans
  • Models of language evolution and change
  • Computational simulation and analysis of findings from psycholinguistic and neurolinguistic experiments
  • Analysis and interpretation of NLP models, using methods inspired by cognitive science, linguistics or other fields
  • Data resources, techniques and tools for scientifically oriented research in computational linguistics
  • Connections between computational models and formal languages or linguistic theories
  • Linguistic typology and other multilingual work

We welcome work targeting any aspect of language, including:

  • Speech and phonology
  • Syntactic parsing
  • Lexical, compositional and discourse semantics
  • Dialogue and interactive language use
  • Sociolinguistics
  • Multimodal and grounded language learning

Submitted papers must be anonymous and use the EMNLP 2020 template. Submitted papers may consist of up to 8 pages of content plus unlimited space for references. Authors of accepted papers will have an additional page to address reviewers’ comments in the camera-ready version (9 pages of content in total, excluding references). Anonymized supplementary materials are allowed as an optional PDF appendix, in line with EMNLP 2020 guidelines. Submission is electronic, using the Softconf START conference management system: https://www.softconf.com/emnlp2020/conll2020/

CoNLL adheres to the ACL anonymity policy, as described in the EMNLP 2020 Call for Papers. Briefly, manuscripts submitted to CoNLL cannot be posted to preprint websites such as arXiv or advertised on social media after June 17.

Multiple Submission Policy

CoNLL 2020 will not accept papers that have been or will be submitted to other meetings or publications. Papers submitted elsewhere as well as papers that overlap significantly in content or results with papers that will be (or have been) published elsewhere will be rejected. Authors submitting more than one paper to CoNLL 2020 must ensure that the submissions do not overlap significantly (>25%) with each other in content or results.


Important Dates

  • Anonymity period starts: June 17, 2020
  • Paper submission deadline: July 17, 2020
  • Notification of acceptance and end of anonymity period: September 18, 2020
  • Camera-ready due: October 9, 2020
  • Conference: November 19-20, 2020 (right after EMNLP 2020)

All deadlines are at 11:59pm UTC-12h ("anywhere on earth").


Shared Task

CoNLL 2020 hosts a shared task on Cross-Framework Meaning Representation Parsing (MRP 2020).


Area Chairs

  • Aida Nematzadeh
  • Alvin Grissom II
  • Andrew Caines
  • Arianna Bisazza
  • Barry Devereux
  • Colin Bannard
  • Daniel Cer
  • David Schlangen
  • Erik Velldal
  • Greg Durrett
  • Grzegorz Chrupała
  • Jacob Andreas
  • Kevin Duh
  • Kyle Gorman
  • Leon Bergen
  • Marten van Schijndel
  • Michael Roth
  • Raffaella Bernardi
  • Roi Reichart
  • Sam Bowman
  • Stella Frank
  • Tim O’Donnell
  • Vivek Srikumar
  • Yevgeni Berzak
  • Yonatan Belinkov
  • Yulia Tsvetkov

Best Paper Award

Acquiring language from speech by learning to remember and predict, by Cory Shain and Micha Elsner


Presentation and Registration Requirements

To attend or present at CoNLL 2020 virtually, you must register through EMNLP 2020, selecting one of the registration options that includes workshop participation. All accepted papers must be presented at the conference in order to appear in the proceedings, and at least one author of each accepted paper must register for CoNLL 2020. Accepted papers will be presented orally (we will follow the virtual conference model adopted by ACL and EMNLP) and will be published in the conference proceedings.


CoNLL 2020 Sponsors

We gratefully acknowledge support from Google Research.



Program

All accepted papers will have a pre-recorded 12-minute video talk. Videos with captions and slides will be available before the start of the conference, and you can watch them and interact with the authors via RocketChat at your convenience. Each paper has been assigned to a live Q&A session based on a number of factors, ranging from coherence of paper topics within a session to timezone compatibility. These live Q&A sessions take the form of a 2-hour Gather.town session, where the authors of 8-10 papers are available in a virtual "room". Participants can steer their avatars around the 2D environment and engage in video chat with nearby authors and/or other participants.
The two invited talks are also pre-recorded. You can watch them at your convenience or during the designated program session. The program includes a live Zoom Q&A session with each of the invited speakers. The Best Paper Award will also be announced live in the Opening Remarks session.

An overview of the schedule can be found here.

Day 1: November 19, 2020

5:45-7:45 New York (UTC-5) / 11:45-13:45 Amsterdam (UTC+1)

Shared Task: Cross-Framework Meaning Representation Parsing. Detailed program.

8:00-10:00 New York (UTC-5) / 14:00-16:00 Amsterdam (UTC+1): Session 1 (live Q&A in Gather Town)

  • Finding The Right One and Resolving it. Payal Khullar, Arghya Bhattacharya and Manish Shrivastava
  • Relations between comprehensibility and adequacy errors in machine translation output. Maja Popović
  • Cross-lingual Embeddings Reveal Universal and Lineage-Specific Patterns in Grammatical Gender Assignment. Hartger Veeman, Marc Allassonnière-Tang, Aleksandrs Berdicevskis and Ali Basirat
  • Modelling Lexical Ambiguity with Density Matrices. Francois Meyer and Martha Lewis
  • Representation Learning for Type-Driven Composition. Gijs Wijnholds, Mehrnoosh Sadrzadeh and Stephen Clark
  • On the Computational Power of Transformers and Its Implications in Sequence Modeling. Satwik Bhattamishra, Arkil Patel and Navin Goyal
  • Filler-gaps that neural networks fail to generalize. Debasmita Bhattacharya and Marten van Schijndel
  • An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference. Tianyu Liu, Zheng Xin, Xiaoan Ding, Baobao Chang and Zhifang Sui
  • Disentangling dialects: a neural approach to Indo-Aryan historical phonology and subgrouping. Chundra Cathcart and Taraka Rama
  • TaxiNLI: Taking a Ride up the NLU Hill. Pratik Joshi, Somak Aditya, Aalok Sathe and Monojit Choudhury

10:00-10:50 New York (UTC-5) / 16:00-16:50 Amsterdam (UTC+1)

Pre-recorded invited talk by Kristina Toutanova (Google, Seattle, USA)
Toward Progress in Text Representations for Question Answering

10:50-11:00 New York (UTC-5) / 16:50-17:00 Amsterdam (UTC+1)

Opening remarks and announcement of Best Paper award (live session via Zoom)

11:00-11:30 New York (UTC-5) / 17:00-17:30 Amsterdam (UTC+1)

Live Q&A with invited speaker Kristina Toutanova (live session via Zoom)

11:45-13:45 New York (UTC-5) / 17:45-19:45 Amsterdam (UTC+1): Session 2 (live Q&A in Gather Town)

  • A simple repair mechanism can alleviate computational demands of pragmatic reasoning: simulations and complexity analysis. Jacqueline van Arkel, Marieke Woensdregt, Mark Dingemanse and Mark Blokpoel
  • Bridging Information-Seeking Human Gaze and Machine Reading Comprehension. Jonathan Malmaud, Roger Levy and Yevgeni Berzak
  • Acquiring language from speech by learning to remember and predict. Cory Shain and Micha Elsner
  • Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech. William Havard, Laurent Besacier and Jean-Pierre Chevrot
  • Learning to ground medical text in a 3D human atlas. Dusan Grujicic, Gorjan Radevski, Tinne Tuytelaars and Matthew Blaschko
  • Re-solve it: simulating the acquisition of core semantic competences from small data. Aurélie Herbelot
  • Diverse and Relevant Visual Storytelling with Scene Graph Embeddings. Xudong Hong, Rakshith Shetty, Asad Sayeed, Khushboo Mehra, Vera Demberg and Bernt Schiele
  • Generating Narrative Text in a Switching Dynamical System. Noah Weber, Leena Shekhar, Heeyoung Kwon, Niranjan Balasubramanian and Nathanael Chambers
  • Understanding Linguistic Accommodation in Code-Switched Human-Machine Dialogues. Tanmay Parekh, Emily Ahn, Yulia Tsvetkov and Alan W Black
  • Cloze Distillation: Improving Neural Language Models with Human Next-Word Predictions. Tiwalayo Eisape, Noga Zaslavsky and Roger Levy

14:00-16:00 New York (UTC-5) / 20:00-22:00 Amsterdam (UTC+1): Session 3 (live Q&A in Gather Town)

  • Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension. Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling and Ngoc Thang Vu
  • How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation. Steffen Eger, Johannes Daxenberger and Iryna Gurevych
  • Understanding the Source of Semantic Regularities in Word Embeddings. Hsiao-Yu Chiang, Jose Camacho-Collados and Zachary Pardos
  • Word Representations Concentrate and This is Good News! Romain Couillet, Yagmur Gizem Cinar, Eric Gaussier and Muhammad Imran
  • Analogies minus analogy test: measuring regularities in word embeddings. Louis Fournier, Emmanuel Dupoux and Ewan Dunbar
  • Word associations and the distance properties of context-aware word embeddings. Maria A. Rodriguez and Paola Merlo
  • Discourse structure interacts with reference but not syntax in neural language models. Forrest Davis and Marten van Schijndel
  • Analysing Word Representation from the Input and Output Embeddings in Neural Network Language Models. Steven Derby, Paul Miller and Barry Devereux
  • Are Pretrained Language Models Symbolic Reasoners over Knowledge? Nora Kassner, Benno Krojer and Hinrich Schütze

Day 2: November 20, 2020

8:00-10:00 New York (UTC-5) / 14:00-16:00 Amsterdam (UTC+1): Session 4 (live Q&A in Gather Town)

  • Neural Proof Nets. Konstantinos Kogkalidis, Michael Moortgat and Richard Moot
  • On the Frailty of Universal POS Tags for Neural UD Parsers. Mark Anderson and Carlos Gómez-Rodríguez
  • A Corpus of Very Short Scientific Summaries. Yifan Chen, Tamara Polajnar, Colin Batchelor and Simone Teufel
  • In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works. Adrian M.P. Brasoveanu, Albert Weichselbraun and Lyndon Nixon
  • TrClaim-19: The First Collection for Turkish Check-Worthy Claim Detection with Annotator Rationales. Yavuz Selim Kartal and Mucahid Kutlu
  • Alleviating Digitization Errors in Named Entity Recognition for Historical Documents. Emanuela Boros, Ahmed Hamdi, Elvys Linhares Pontes, Luis Adrián Cabrera-Diego, Jose G. Moreno, Nicolas Sidere and Antoine Doucet
  • An Expectation Maximisation Algorithm for Automated Cognate Detection. Roddy MacSween and Andrew Caines
  • A Dataset for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages: The K-RSL. Alfarabi Imashev, Medet Mukushev, Vadim Kimmelman and Anara Sandygulova
  • From Dataset Recycling to Multi-Property Extraction and Beyond. Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Jakub Chłędowski and Filip Graliński

10:00-11:00 New York (UTC-5) / 16:00-17:00 Amsterdam (UTC+1)

Pre-recorded invited talk by Emmanuel Dupoux (Ecole des Hautes Etudes en Sciences Sociales and Facebook AI Research, Paris, France)
Learning Language Like Infants Do: Self Supervised Learning From Raw Audio

11:00-11:30 New York (UTC-5) / 17:00-17:30 Amsterdam (UTC+1)

Live Q&A with invited speaker Emmanuel Dupoux (live session via Zoom)

11:45-13:45 New York (UTC-5) / 17:45-19:45 Amsterdam (UTC+1): Session 5 (live Q&A in Gather Town)

  • Classifying Syntactic Errors in Learner Language. Leshem Choshen, Dmitry Nikolaev, Yevgeni Berzak and Omri Abend
  • Recurrent babbling: evaluating the acquisition of grammar from limited input data. Ludovica Pannitto and Aurélie Herbelot
  • When is a bishop not like a rook? When it's like a rabbi! Multi-prototype BERT embeddings for estimating semantic relationships. Gabriella Chronis and Katrin Erk
  • Processing effort is a poor predictor of cross-linguistic word order frequency. Brennan Gonering and Emily Morgan
  • "LazImpa": Lazy and Impatient neural agents learn to communicate efficiently. Mathieu Rita, Rahma Chaabouni and Emmanuel Dupoux
  • Continual Adaptation for Efficient Machine Communication. Robert Hawkins, Minae Kwon, Dorsa Sadigh and Noah Goodman
  • Learning Context-free Languages with Nondeterministic Stack RNNs. Brian DuSell and David Chiang
  • Identifying robust markers of Parkinson’s disease in typing behaviour using a CNN-LSTM network. Neil Dhir, Mathias Edman, Álvaro Sanchez Ferro, Tom Stafford and Colin Bannard
  • How well does surprisal explain N400 amplitude under different experimental conditions? James Michaelov and Benjamin Bergen

14:00-16:00 New York (UTC-5) / 20:00-22:00 Amsterdam (UTC+1): Session 6 (live Q&A in Gather Town)

  • A Corpus for Outbreak Detection of Diseases Prevalent in Latin America. Antonella Dellanzo, Viviana Cotik and Jose Ochoa-Luna
  • Enriching Word Embeddings with Temporal and Spatial Information. Hongyu Gong, Suma Bhat and Pramod Viswanath
  • Modeling Subjective Assessments of Guilt in Newspaper Crime Narratives. Elisa Kreiss, Zijian Wang and Christopher Potts
  • Identifying Incorrect Labels in the CoNLL-2003 Corpus. Frederick Reiss, Hong Xu, Bryan Cutler, Karthik Muthuraman and Zachary Eichenberger
  • Don’t Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding. Qile Zhu, Haidar Khan, Saleh Soltan, Stephen Rawls and Wael Hamza
  • What Are You Trying to Do? Semantic Typing of Event Processes. Muhao Chen, Hongming Zhang, Haoyu Wang and Dan Roth
  • [Findings of EMNLP] Stay Hungry, Stay Focused: Generating Informative and Specific Questions in Information-Seeking Conversations. Peng Qi, Yuhao Zhang and Christopher D. Manning
  • [Findings of EMNLP] Dynamic Data Selection for Curriculum Learning via Ability Estimation. John P. Lalor and Hong Yu
  • [Findings of EMNLP] From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli? Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea E. Martin and Alona Fyshe
  • [Findings of EMNLP] Investigating Transferability in Pretrained Language Models. Alex Tamkin, Trisha Singh, Davide Giovanardi and Noah Goodman
  • [Findings of EMNLP] Pragmatic Issue-Sensitive Image Captioning. Allen Nie, Reuben Cohn-Gordon and Christopher Potts

Accepted Papers

Enriching Word Embeddings with Temporal and Spatial Information. Hongyu Gong, Suma Bhat and Pramod Viswanath.

Interpreting Attention Models with Human Visual Attention in Machine Reading Comprehension. Ekta Sood, Simon Tannert, Diego Frassinelli, Andreas Bulling and Ngoc Thang Vu.

Neural Proof Nets. Konstantinos Kogkalidis, Michael Moortgat and Richard Moot.

TaxiNLI: Taking a Ride up the NLU Hill. Pratik Joshi, Somak Aditya, Aalok Sathe and Monojit Choudhury.

Modeling Subjective Assessments of Guilt in Newspaper Crime Narratives. Elisa Kreiss, Zijian Wang and Christopher Potts.

On the Frailty of Universal POS Tags for Neural UD Parsers. Mark Anderson and Carlos Gómez-Rodríguez.

Classifying Syntactic Errors in Learner Language. Leshem Choshen, Dmitry Nikolaev, Yevgeni Berzak and Omri Abend.

How to Probe Sentence Embeddings in Low-Resource Languages: On Structural Design Choices for Probing Task Evaluation. Steffen Eger, Johannes Daxenberger and Iryna Gurevych.

Understanding the Source of Semantic Regularities in Word Embeddings. Hsiao-Yu Chiang, Jose Camacho-Collados and Zachary Pardos.

Finding The Right One and Resolving it. Payal Khullar, Arghya Bhattacharya and Manish Shrivastava.

Bridging Information-Seeking Human Gaze and Machine Reading Comprehension. Jonathan Malmaud, Roger Levy and Yevgeni Berzak.

A Corpus of Very Short Scientific Summaries. Yifan Chen, Tamara Polajnar, Colin Batchelor and Simone Teufel.

Recurrent babbling: evaluating the acquisition of grammar from limited input data. Ludovica Pannitto and Aurélie Herbelot.

A simple repair mechanism can alleviate computational demands of pragmatic reasoning: simulations and complexity analysis. Jacqueline van Arkel, Marieke Woensdregt, Mark Dingemanse and Mark Blokpoel.

Acquiring language from speech by learning to remember and predict. Cory Shain and Micha Elsner.

Identifying Incorrect Labels in the CoNLL-2003 Corpus. Frederick Reiss, Hong Xu, Bryan Cutler, Karthik Muthuraman and Zachary Eichenberger.

When is a bishop not like a rook? When it's like a rabbi! Multi-prototype BERT embeddings for estimating semantic relationships. Gabriella Chronis and Katrin Erk.

Processing effort is a poor predictor of cross-linguistic word order frequency. Brennan Gonering and Emily Morgan.

Relations between comprehensibility and adequacy errors in machine translation output. Maja Popović.

Cross-lingual Embeddings Reveal Universal and Lineage-Specific Patterns in Grammatical Gender Assignment. Hartger Veeman, Marc Allassonnière-Tang, Aleksandrs Berdicevskis and Ali Basirat.

Modelling Lexical Ambiguity with Density Matrices. Francois Meyer and Martha Lewis.

Catplayinginthesnow: Impact of Prior Segmentation on a Model of Visually Grounded Speech. William Havard, Laurent Besacier and Jean-Pierre Chevrot.

Learning to ground medical text in a 3D human atlas. Dusan Grujicic, Gorjan Radevski, Tinne Tuytelaars and Matthew Blaschko.

Representation Learning for Type-Driven Composition. Gijs Wijnholds, Mehrnoosh Sadrzadeh and Stephen Clark.

Word Representations Concentrate and This is Good News! Romain Couillet, Yagmur Gizem Cinar, Eric Gaussier and Muhammad Imran.

"LazImpa": Lazy and Impatient neural agents learn to communicate efficiently. Mathieu Rita, Rahma Chaabouni and Emmanuel Dupoux.

Re-solve it: simulating the acquisition of core semantic competences from small data. Aurélie Herbelot.

In Media Res: A Corpus for Evaluating Named Entity Linking with Creative Works. Adrian M.P. Brasoveanu, Albert Weichselbraun and Lyndon Nixon.

Analogies minus analogy test: measuring regularities in word embeddings. Louis Fournier, Emmanuel Dupoux and Ewan Dunbar.

Word associations and the distance properties of context-aware word embeddings. Maria Andueza Rodriguez and Paola Merlo.

TrClaim-19: The First Collection for Turkish Check-Worthy Claim Detection with Annotator Rationales. Yavuz Selim Kartal and Mucahid Kutlu.

Discourse structure interacts with reference but not syntax in neural language models. Forrest Davis and Marten van Schijndel.

Continual Adaptation for Efficient Machine Communication. Robert Hawkins, Minae Kwon, Dorsa Sadigh and Noah Goodman.

Diverse and Relevant Visual Storytelling with Scene Graph Embeddings. Xudong Hong, Rakshith Shetty, Asad Sayeed, Khushboo Mehra, Vera Demberg and Bernt Schiele.

Alleviating Digitization Errors in Named Entity Recognition for Historical Documents. Emanuela Boros, Ahmed Hamdi, Elvys Linhares Pontes, Luis Adrián Cabrera-Diego, Jose G. Moreno, Nicolas Sidere and Antoine Doucet.

Analysing Word Representation from the Input and Output Embeddings in Neural Network Language Models. Steven Derby, Paul Miller and Barry Devereux.

On the Computational Power of Transformers and Its Implications in Sequence Modeling. Satwik Bhattamishra, Arkil Patel and Navin Goyal.

An Expectation Maximisation Algorithm for Automated Cognate Detection. Roddy MacSween and Andrew Caines.

Filler-gaps that neural networks fail to generalize. Debasmita Bhattacharya and Marten van Schijndel.

Don’t Parse, Insert: Multilingual Semantic Parsing with Insertion Based Decoding. Qile Zhu, Haidar Khan, Saleh Soltan, Stephen Rawls and Wael Hamza.

Learning Context-free Languages with Nondeterministic Stack RNNs. Brian DuSell and David Chiang.

Generating Narrative Text in a Switching Dynamical System. Noah Weber, Leena Shekhar, Heeyoung Kwon, Niranjan Balasubramanian and Nathanael Chambers.

What Are You Trying to Do? Semantic Typing of Event Processes. Muhao Chen, Hongming Zhang, Haoyu Wang and Dan Roth.

A Corpus for Outbreak Detection of Diseases Prevalent in Latin America. Antonella Dellanzo, Viviana Cotik and Jose Ochoa-Luna.

Are Pretrained Language Models Symbolic Reasoners over Knowledge? Nora Kassner, Benno Krojer and Hinrich Schütze.

Understanding Linguistic Accommodation in Code-Switched Human-Machine Dialogues. Tanmay Parekh, Emily Ahn, Yulia Tsvetkov and Alan W Black.

Identifying robust markers of Parkinson’s disease in typing behaviour using a CNN-LSTM network. Neil Dhir, Mathias Edman, Álvaro Sanchez Ferro, Tom Stafford and Colin Bannard.

An Empirical Study on Model-agnostic Debiasing Strategies for Robust Natural Language Inference. Tianyu Liu, Zheng Xin, Xiaoan Ding, Baobao Chang and Zhifang Sui.

Cloze Distillation: Improving Neural Language Models with Human Next-Word Predictions. Tiwalayo Eisape, Noga Zaslavsky and Roger Levy.

Disentangling dialects: a neural approach to Indo-Aryan historical phonology and subgrouping. Chundra Cathcart and Taraka Rama.

A Dataset for Linguistic Understanding, Visual Evaluation, and Recognition of Sign Languages: The K-RSL. Alfarabi Imashev, Medet Mukushev, Vadim Kimmelman and Anara Sandygulova.

From Dataset Recycling to Multi-Property Extraction and Beyond. Tomasz Dwojak, Michał Pietruszka, Łukasz Borchmann, Jakub Chłędowski and Filip Graliński.

How well does surprisal explain N400 amplitude under different experimental conditions? James Michaelov and Benjamin Bergen.

Findings of EMNLP papers to be presented at CoNLL

Stay Hungry, Stay Focused: Generating Informative and Specific Questions in Information-Seeking Conversations. Peng Qi, Yuhao Zhang and Christopher D. Manning.

Dynamic Data Selection for Curriculum Learning via Ability Estimation. John P. Lalor and Hong Yu.

From Language to Language-ish: How Brain-Like is an LSTM's Representation of Nonsensical Language Stimuli? Maryam Hashemzadeh, Greta Kaufeld, Martha White, Andrea E. Martin and Alona Fyshe.

Investigating Transferability in Pretrained Language Models. Alex Tamkin, Trisha Singh, Davide Giovanardi and Noah Goodman.

Pragmatic Issue-Sensitive Image Captioning. Allen Nie, Reuben Cohn-Gordon and Christopher Potts.