| venue | title | abstract | doc_id | publication_year | sentences | events | document |
|---|---|---|---|---|---|---|---|
ACL | Program Transfer for Answering Complex Questions over Knowledge Bases | Program induction for answering complex questions over knowledge bases (KBs) aims to decompose a question into a multi-step program, whose execution against the KB produces the final answer. Learning to induce programs relies on a large number of parallel question-program pairs for the given KB. However, for most KBs, ... | febd573ada9568c635f6d8aeada27ec5 | 2022 | [
"program induction for answering complex questions over knowledge bases ( kbs ) aims to decompose a question into a multi - step program , whose execution against the kb produces the final answer .",
"learning to induce programs relies on a large number of parallel question - program pairs for the given kb .",
... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "program induction",
"tokens": [
"program",
"induction"
]
}
],
"event_type": "ITT",
"trigger": ... | [
"program",
"induction",
"for",
"answering",
"complex",
"questions",
"over",
"knowledge",
"bases",
"(",
"kbs",
")",
"aims",
"to",
"decompose",
"a",
"question",
"into",
"a",
"multi",
"-",
"step",
"program",
",",
"whose",
"execution",
"against",
"the",
"kb",
"p... |
ACL | Are Natural Language Inference Models IMPPRESsive? Learning IMPlicature and PRESupposition | Natural language inference (NLI) is an increasingly important task for natural language understanding, which requires one to infer whether a sentence entails another. However, the ability of NLI models to make pragmatic inferences remains understudied. We create an IMPlicature and PRESupposition diagnostic dataset (IMP... | 1a6285faf0918175c1ea9e0b7c8ea82e | 2020 | [
"natural language inference ( nli ) is an increasingly important task for natural language understanding , which requires one to infer whether a sentence entails another .",
"however , the ability of nli models to make pragmatic inferences remains understudied .",
"we create an implicature and presupposition di... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "natural language inference",
"tokens": [
"natural",
"language",
"inference"
]
}
... | [
"natural",
"language",
"inference",
"(",
"nli",
")",
"is",
"an",
"increasingly",
"important",
"task",
"for",
"natural",
"language",
"understanding",
",",
"which",
"requires",
"one",
"to",
"infer",
"whether",
"a",
"sentence",
"entails",
"another",
".",
"however",... |
ACL | Leveraging Task Transferability to Meta-learning for Clinical Section Classification with Limited Data | Identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note-writing tasks. Most state-of-the-art text classification systems require thousands of in-domain text data to achieve high performance. Howe... | 6bab1cf097070e6d457c9c8fd0e74e57 | 2022 | [
"identifying sections is one of the critical components of understanding medical information from unstructured clinical notes and developing assistive technologies for clinical note - writing tasks .",
"most state - of - the - art text classification systems require thousands of in - domain text data to achieve h... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
28,
29,
30,
31,
32,
33,
34,
35,
36,
37
],
"text": "state - of - the - art text classi... | [
"identifying",
"sections",
"is",
"one",
"of",
"the",
"critical",
"components",
"of",
"understanding",
"medical",
"information",
"from",
"unstructured",
"clinical",
"notes",
"and",
"developing",
"assistive",
"technologies",
"for",
"clinical",
"note",
"-",
"writing",
... |
ACL | Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation | Maintaining a consistent personality in conversations is quite natural for human beings, but is still a non-trivial task for machines. The persona-based dialogue generation task is thus introduced to tackle the personality-inconsistent problem by incorporating explicit persona text into dialogue generation models. Desp... | 71ff0f02bc14a28822f0cdf6c508aae2 | 2020 | [
"maintaining a consistent personality in conversations is quite natural for human beings , but is still a non - trivial task for machines .",
"the persona - based dialogue generation task is thus introduced to tackle the personality - inconsistent problem by incorporating explicit persona text into dialogue gener... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3
],
"text": "consistent personality",
"tokens": [
"consistent",
"personality"
]
}
],
"event_type": "ITT",
"... | [
"maintaining",
"a",
"consistent",
"personality",
"in",
"conversations",
"is",
"quite",
"natural",
"for",
"human",
"beings",
",",
"but",
"is",
"still",
"a",
"non",
"-",
"trivial",
"task",
"for",
"machines",
".",
"the",
"persona",
"-",
"based",
"dialogue",
"ge... |
ACL | An In-depth Study on Internal Structure of Chinese Words | Unlike English letters, Chinese characters have rich and specific meanings. Usually, the meaning of a word can be derived from its constituent characters in some way. Several previous works on syntactic parsing propose to annotate shallow word-internal structures for better utilizing character-level information. This w... | 636dd0c8ece0788d40d37b9f500026d8 | 2021 | [
"unlike english letters , chinese characters have rich and specific meanings .",
"usually , the meaning of a word can be derived from its constituent characters in some way .",
"several previous works on syntactic parsing propose to annotate shallow word - internal structures for better utilizing character - le... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "chinese characters",
"tokens": [
"chinese",
"characters"
]
}
],
"event_type": "ITT",
"trigger"... | [
"unlike",
"english",
"letters",
",",
"chinese",
"characters",
"have",
"rich",
"and",
"specific",
"meanings",
".",
"usually",
",",
"the",
"meaning",
"of",
"a",
"word",
"can",
"be",
"derived",
"from",
"its",
"constituent",
"characters",
"in",
"some",
"way",
".... |
ACL | Preview, Attend and Review: Schema-Aware Curriculum Learning for Multi-Domain Dialogue State Tracking | Existing dialog state tracking (DST) models are trained with dialog data in a random order, neglecting rich structural information in a dataset. In this paper, we propose to use curriculum learning (CL) to better leverage both the curriculum structure and schema structure for task-oriented dialogs. Specifically, we pro... | a0fd29c17984ed8d2e2b7f86831cb0a4 | 2021 | [
"existing dialog state tracking ( dst ) models are trained with dialog data in a random order , neglecting rich structural information in a dataset .",
"in this paper , we propose to use curriculum learning ( cl ) to better leverage both the curriculum structure and schema structure for task - oriented dialogs ."... | [
{
"arguments": [
{
"argument_type": "TriedComponent",
"nugget_type": "APP",
"offsets": [
34,
35
],
"text": "curriculum learning",
"tokens": [
"curriculum",
"learning"
]
},
{
"argument_type":... | [
"existing",
"dialog",
"state",
"tracking",
"(",
"dst",
")",
"models",
"are",
"trained",
"with",
"dialog",
"data",
"in",
"a",
"random",
"order",
",",
"neglecting",
"rich",
"structural",
"information",
"in",
"a",
"dataset",
".",
"in",
"this",
"paper",
",",
"... |
ACL | Self-Attentional Models for Lattice Inputs | Lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks, for example to compactly capture multiple speech recognition hypotheses, or to represent multiple linguistic analyses. Previous work has extended recurrent neural networks to model lattice inputs... | 8e057b24ffe8ed4a5448b19bb7b9c2bf | 2019 | [
"lattices are an efficient and effective method to encode ambiguity of upstream systems in natural language processing tasks , for example to compactly capture multiple speech recognition hypotheses , or to represent multiple linguistic analyses .",
"previous work has extended recurrent neural networks to model l... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0
],
"text": "lattices",
"tokens": [
"lattices"
]
},
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offse... | [
"lattices",
"are",
"an",
"efficient",
"and",
"effective",
"method",
"to",
"encode",
"ambiguity",
"of",
"upstream",
"systems",
"in",
"natural",
"language",
"processing",
"tasks",
",",
"for",
"example",
"to",
"compactly",
"capture",
"multiple",
"speech",
"recognitio... |
ACL | Joint Effects of Context and User History for Predicting Online Conversation Re-entries | As the online world continues its exponential growth, interpersonal communication has come to play an increasingly central role in opinion formation and change. In order to help users better engage with each other online, we study a challenging problem of re-entry prediction foreseeing whether a user will come back to ... | 54dc18f3c81976ab42c7f5f4bd591db4 | 2019 | [
"as the online world continues its exponential growth , interpersonal communication has come to play an increasingly central role in opinion formation and change .",
"in order to help users better engage with each other online , we study a challenging problem of re - entry prediction foreseeing whether a user wil... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10
],
"text": "interpersonal communication",
"tokens": [
"interpersonal",
"communication"
]
}
],
"event_type": "... | [
"as",
"the",
"online",
"world",
"continues",
"its",
"exponential",
"growth",
",",
"interpersonal",
"communication",
"has",
"come",
"to",
"play",
"an",
"increasingly",
"central",
"role",
"in",
"opinion",
"formation",
"and",
"change",
".",
"in",
"order",
"to",
"... |
ACL | Probing for Predicate Argument Structures in Pretrained Language Models | Thanks to the effectiveness and wide availability of modern pretrained language models (PLMs), recently proposed approaches have achieved remarkable results in dependency- and span-based, multilingual and cross-lingual Semantic Role Labeling (SRL). These results have prompted researchers to investigate the inner workin... | 81cb7fa52062f9d17d9e93a1e4567dec | 2022 | [
"thanks to the effectiveness and wide availability of modern pretrained language models ( plms ) , recently proposed approaches have achieved remarkable results in dependency - and span - based , multilingual and cross - lingual semantic role labeling ( srl ) .",
"these results have prompted researchers to invest... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
36,
37,
38
],
"text": "semantic role labeling",
"tokens": [
"semantic",
"role",
"labeling"
]
}
],
... | [
"thanks",
"to",
"the",
"effectiveness",
"and",
"wide",
"availability",
"of",
"modern",
"pretrained",
"language",
"models",
"(",
"plms",
")",
",",
"recently",
"proposed",
"approaches",
"have",
"achieved",
"remarkable",
"results",
"in",
"dependency",
"-",
"and",
"... |
ACL | Where to Go for the Holidays: Towards Mixed-Type Dialogs for Clarification of User Goals | Most dialog systems posit that users have figured out clear and specific goals before starting an interaction. For example, users have determined the departure, the destination, and the travel time for booking a flight. However, in many scenarios, limited by experience and knowledge, users may know what they need, but ... | e6819b3ce223923478bb9d3b63e830a6 | 2022 | [
"most dialog systems posit that users have figured out clear and specific goals before starting an interaction .",
"for example , users have determined the departure , the destination , and the travel time for booking a flight .",
"however , in many scenarios , limited by experience and knowledge , users may kn... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2
],
"text": "dialog systems",
"tokens": [
"dialog",
"systems"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"most",
"dialog",
"systems",
"posit",
"that",
"users",
"have",
"figured",
"out",
"clear",
"and",
"specific",
"goals",
"before",
"starting",
"an",
"interaction",
".",
"for",
"example",
",",
"users",
"have",
"determined",
"the",
"departure",
",",
"the",
"destina... |
ACL | DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue | A video-grounded dialogue system is required to understand both dialogue, which contains semantic dependencies from turn to turn, and video, which contains visual cues of spatial and temporal scene variations. Building such dialogue systems is a challenging problem, involving various reasoning types on both visual and ... | d45cd0ddedda4f5e033a5ce54cd0afb9 | 2021 | [
"a video - grounded dialogue system is required to understand both dialogue , which contains semantic dependencies from turn to turn , and video , which contains visual cues of spatial and temporal scene variations .",
"building such dialogue systems is a challenging problem , involving various reasoning types on... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
1,
2,
3,
4,
5
],
"text": "video - grounded dialogue system",
"tokens": [
"video",
"-",
"grounded"... | [
"a",
"video",
"-",
"grounded",
"dialogue",
"system",
"is",
"required",
"to",
"understand",
"both",
"dialogue",
",",
"which",
"contains",
"semantic",
"dependencies",
"from",
"turn",
"to",
"turn",
",",
"and",
"video",
",",
"which",
"contains",
"visual",
"cues",
... |
ACL | MATE-KD: Masked Adversarial TExt, a Companion to Knowledge Distillation | The advent of large pre-trained language models has given rise to rapid progress in the field of Natural Language Processing (NLP). While the performance of these models on standard benchmarks has scaled with size, compression techniques such as knowledge distillation have been key in making them practical. We present ... | bcf2a5086a3b7ab9ae680289f38dad5f | 2021 | [
"the advent of large pre - trained language models has given rise to rapid progress in the field of natural language processing ( nlp ) .",
"while the performance of these models on standard benchmarks has scaled with size , compression techniques such as knowledge distillation have been key in making them practi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20,
21
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
... | [
"the",
"advent",
"of",
"large",
"pre",
"-",
"trained",
"language",
"models",
"has",
"given",
"rise",
"to",
"rapid",
"progress",
"in",
"the",
"field",
"of",
"natural",
"language",
"processing",
"(",
"nlp",
")",
".",
"while",
"the",
"performance",
"of",
"the... |
ACL | An Automated Framework for Fast Cognate Detection and Bayesian Phylogenetic Inference in Computational Historical Linguistics | We present a fully automated workflow for phylogenetic reconstruction on large datasets, consisting of two novel methods, one for fast detection of cognates and one for fast Bayesian phylogenetic inference. Our results show that the methods take less than a few minutes to process language families that have so far requ... | db2fff29a55036937a41cdace0266be9 | 2019 | [
"we present a fully automated workflow for phylogenetic reconstruction on large datasets , consisting of two novel methods , one for fast detection of cognates and one for fast bayesian phylogenetic inference .",
"our results show that the methods take less than a few minutes to process language families that hav... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"a",
"fully",
"automated",
"workflow",
"for",
"phylogenetic",
"reconstruction",
"on",
"large",
"datasets",
",",
"consisting",
"of",
"two",
"novel",
"methods",
",",
"one",
"for",
"fast",
"detection",
"of",
"cognates",
"and",
"one",
"for",
"fast... |
ACL | Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization | Despite recent progress in abstractive summarization, systems still suffer from faithfulness errors. While prior work has proposed models that improve faithfulness, it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfulness is to m... | 959fbecee82a093efd41a9a4608a4728 | 2022 | [
"despite recent progress in abstractive summarization , systems still suffer from faithfulness errors .",
"while prior work has proposed models that improve faithfulness , it is unclear whether the improvement comes from an increased level of extractiveness of the model outputs as one naive way to improve faithfu... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
7
],
"text": "systems",
"tokens": [
"systems"
]
},
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": ... | [
"despite",
"recent",
"progress",
"in",
"abstractive",
"summarization",
",",
"systems",
"still",
"suffer",
"from",
"faithfulness",
"errors",
".",
"while",
"prior",
"work",
"has",
"proposed",
"models",
"that",
"improve",
"faithfulness",
",",
"it",
"is",
"unclear",
... |
ACL | Enhancing the generalization for Intent Classification and Out-of-Domain Detection in SLU | Intent classification is a major task in spoken language understanding (SLU). Since most models are built with pre-collected in-domain (IND) training utterances, their ability to detect unsupported out-of-domain (OOD) utterances has a critical effect in practical use. Recent works have shown that using extra data and l... | bbe87249393bc09725f2b0dcfda04997 | 2021 | [
"intent classification is a major task in spoken language understanding ( slu ) .",
"since most models are built with pre - collected in - domain ( ind ) training utterances , their ability to detect unsupported out - of - domain ( ood ) utterances has a critical effect in practical use .",
"recent works have s... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "intent classification",
"tokens": [
"intent",
"classification"
]
}
],
"event_type": "ITT",
"tr... | [
"intent",
"classification",
"is",
"a",
"major",
"task",
"in",
"spoken",
"language",
"understanding",
"(",
"slu",
")",
".",
"since",
"most",
"models",
"are",
"built",
"with",
"pre",
"-",
"collected",
"in",
"-",
"domain",
"(",
"ind",
")",
"training",
"uttera... |
ACL | PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks | This paper focuses on the Data Augmentation for low-resource Natural Language Understanding (NLU) tasks. We propose Prompt-based Data Augmentation model (PromDA) which only trains small-scale Soft Prompt (i.e., a set of trainable vectors) in the frozen Pre-trained Language Models (PLMs). This avoids human effort in col... | 04e5995f7999d5daad821408248f8262 | 2022 | [
"this paper focuses on the data augmentation for low - resource natural language understanding ( nlu ) tasks .",
"we propose prompt - based data augmentation model ( promda ) which only trains small - scale soft prompt ( i . e . , a set of trainable vectors ) in the frozen pre - trained language models ( plms ) .... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
8,
9,
10,
11,
12,
13,
17
],
"text": "low - resource natural language understanding ( nlu ) tasks",
"tokens"... | [
"this",
"paper",
"focuses",
"on",
"the",
"data",
"augmentation",
"for",
"low",
"-",
"resource",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"tasks",
".",
"we",
"propose",
"prompt",
"-",
"based",
"data",
"augmentation",
"model",
"(",
"promda",
")... |
ACL | Analyzing the Limitations of Cross-lingual Word Embedding Mappings | Recent research in cross-lingual word embeddings has almost exclusively focused on offline methods, which independently train word embeddings in different languages and map them to a shared space through linear transformations. While several authors have questioned the underlying isomorphism assumption, which states th... | c84823d8450a619b600d844943a96c1e | 2019 | [
"recent research in cross - lingual word embeddings has almost exclusively focused on offline methods , which independently train word embeddings in different languages and map them to a shared space through linear transformations .",
"while several authors have questioned the underlying isomorphism assumption , ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5,
6,
7
],
"text": "cross - lingual word embeddings",
"tokens": [
"cross",
"-",
"lingual",
... | [
"recent",
"research",
"in",
"cross",
"-",
"lingual",
"word",
"embeddings",
"has",
"almost",
"exclusively",
"focused",
"on",
"offline",
"methods",
",",
"which",
"independently",
"train",
"word",
"embeddings",
"in",
"different",
"languages",
"and",
"map",
"them",
... |
ACL | Dependency-driven Relation Extraction with Attentive Graph Convolutional Networks | Syntactic information, especially dependency trees, has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities. However, most existing studies suffer from the noise in the dependency trees, especially whe... | 09e8a58fe50453a2401747d5e9c40e18 | 2021 | [
"syntactic information , especially dependency trees , has been widely used by existing studies to improve relation extraction with better semantic guidance for analyzing the context information associated with the given entities .",
"however , most existing studies suffer from the noise in the dependency trees ,... | [
{
"arguments": [],
"event_type": "ITT",
"trigger": {
"offsets": [
10
],
"text": "used",
"tokens": [
"used"
]
}
},
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
36,
... | [
"syntactic",
"information",
",",
"especially",
"dependency",
"trees",
",",
"has",
"been",
"widely",
"used",
"by",
"existing",
"studies",
"to",
"improve",
"relation",
"extraction",
"with",
"better",
"semantic",
"guidance",
"for",
"analyzing",
"the",
"context",
"inf... |
ACL | Joint Chinese Word Segmentation and Part-of-speech Tagging via Two-way Attentions of Auto-analyzed Knowledge | Chinese word segmentation (CWS) and part-of-speech (POS) tagging are important fundamental tasks for Chinese language processing, where joint learning of them is an effective one-step solution for both tasks. Previous studies for joint CWS and POS tagging mainly follow the character-based tagging paradigm with introduc... | 9609778ad9e5f0ef4d2c7c494df6a6dc | 2020 | [
"chinese word segmentation ( cws ) and part - of - speech ( pos ) tagging are important fundamental tasks for chinese language processing , where joint learning of them is an effective one - step solution for both tasks .",
"previous studies for joint cws and pos tagging mainly follow the character - based taggin... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
21,
22,
23
],
"text": "chinese language processing",
"tokens": [
"chinese",
"language",
"processing"
]
... | [
"chinese",
"word",
"segmentation",
"(",
"cws",
")",
"and",
"part",
"-",
"of",
"-",
"speech",
"(",
"pos",
")",
"tagging",
"are",
"important",
"fundamental",
"tasks",
"for",
"chinese",
"language",
"processing",
",",
"where",
"joint",
"learning",
"of",
"them",
... |
ACL | RankQA: Neural Question Answering with Answer Re-Ranking | The conventional paradigm in neural question answering (QA) for narrative content is limited to a two-stage process: first, relevant text passages are retrieved and, subsequently, a neural network for machine comprehension extracts the likeliest answer. However, both stages are largely isolated in the status quo and, h... | 864e901c9c8268c1e32b4e85b4cdda05 | 2019 | [
"the conventional paradigm in neural question answering ( qa ) for narrative content is limited to a two - stage process : first , relevant text passages are retrieved and , subsequently , a neural network for machine comprehension extracts the likeliest answer .",
"however , both stages are largely isolated in t... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "neural question answering",
"tokens": [
"neural",
"question",
"answering"
]
}
]... | [
"the",
"conventional",
"paradigm",
"in",
"neural",
"question",
"answering",
"(",
"qa",
")",
"for",
"narrative",
"content",
"is",
"limited",
"to",
"a",
"two",
"-",
"stage",
"process",
":",
"first",
",",
"relevant",
"text",
"passages",
"are",
"retrieved",
"and... |
ACL | Domain Adaptation in Multilingual and Multi-Domain Monolingual Settings for Complex Word Identification | Complex word identification (CWI) is a cornerstone process towards proper text simplification. CWI is highly dependent on context, whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages. As such, it becomes increasingly more difficult to develop a ... | 0e0d6cc75f98e0e32960341f2f384171 | 2022 | [
"complex word identification ( cwi ) is a cornerstone process towards proper text simplification .",
"cwi is highly dependent on context , whereas its difficulty is augmented by the scarcity of available datasets which vary greatly in terms of domains and languages .",
"as such , it becomes increasingly more di... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "complex word identification",
"tokens": [
"complex",
"word",
"identification"
]
}
... | [
"complex",
"word",
"identification",
"(",
"cwi",
")",
"is",
"a",
"cornerstone",
"process",
"towards",
"proper",
"text",
"simplification",
".",
"cwi",
"is",
"highly",
"dependent",
"on",
"context",
",",
"whereas",
"its",
"difficulty",
"is",
"augmented",
"by",
"t... |
ACL | Substructure Distribution Projection for Zero-Shot Cross-Lingual Dependency Parsing | We present substructure distribution projection (SubDP), a technique that projects a distribution over structures in one domain to another, by projecting substructure distributions separately. Models for the target domain can then be trained, using the projected distributions as soft silver labels. We evaluate SubDP on... | a11d23df083ec881504fcaf35594405c | 2022 | [
"we present substructure distribution projection ( subdp ) , a technique that projects a distribution over structures in one domain to another , by projecting substructure distributions separately .",
"models for the target domain can then be trained , using the projected distributions as soft silver labels .",
... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"substructure",
"distribution",
"projection",
"(",
"subdp",
")",
",",
"a",
"technique",
"that",
"projects",
"a",
"distribution",
"over",
"structures",
"in",
"one",
"domain",
"to",
"another",
",",
"by",
"projecting",
"substructure",
"distributions"... |
ACL | How can NLP Help Revitalize Endangered Languages? A Case Study and Roadmap for the Cherokee Language | More than 43% of the languages spoken in the world are endangered, and language loss currently occurs at an accelerated rate because of globalization and neocolonialism. Saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet. In this work, we focus on... | 6043d6347900bef1481b47f957a5bdf4 | 2022 | [
"more than 43 % of the languages spoken in the world are endangered , and language loss currently occurs at an accelerated rate because of globalization and neocolonialism .",
"saving and revitalizing endangered languages has become very important for maintaining the cultural diversity on our planet .",
"in thi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
41,
42
],
"text": "cultural diversity",
"tokens": [
"cultural",
"diversity"
]
}
],
"event_type": "ITT",
"trigge... | [
"more",
"than",
"43",
"%",
"of",
"the",
"languages",
"spoken",
"in",
"the",
"world",
"are",
"endangered",
",",
"and",
"language",
"loss",
"currently",
"occurs",
"at",
"an",
"accelerated",
"rate",
"because",
"of",
"globalization",
"and",
"neocolonialism",
".",
... |
ACL | A Girl Has A Name: Detecting Authorship Obfuscation | Authorship attribution aims to identify the author of a text based on the stylometric analysis. Authorship obfuscation, on the other hand, aims to protect against authorship attribution by modifying a text’s style. In this paper, we evaluate the stealthiness of state-of-the-art authorship obfuscation methods under an a... | 8b3ecf2416971047a43e36034187cb14 | 2020 | [
"authorship attribution aims to identify the author of a text based on the stylometric analysis .",
"authorship obfuscation , on the other hand , aims to protect against authorship attribution by modifying a text ’ s style .",
"in this paper , we evaluate the stealthiness of state - of - the - art authorship ob... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
16,
17
],
"text": "authorship obfuscation",
"tokens": [
"authorship",
"obfuscation"
]
}
],
"event_type": "ITT",
... | [
"authorship",
"attribution",
"aims",
"to",
"identify",
"the",
"author",
"of",
"a",
"text",
"based",
"on",
"the",
"stylometric",
"analysis",
".",
"authorship",
"obfuscation",
",",
"on",
"the",
"other",
"hand",
",",
"aims",
"to",
"protect",
"against",
"authorshi... |
ACL | Modeling Bilingual Conversational Characteristics for Neural Chat Translation | Neural chat translation aims to translate bilingual conversational text, which has a broad application in international exchanges and cooperation. Despite the impressive performance of sentence-level and context-aware Neural Machine Translation (NMT), there still remain challenges to translate bilingual conversational ... | fc8c9608ce581c909cddbde7331f7951 | 2021 | [
"neural chat translation aims to translate bilingual conversational text , which has a broad application in international exchanges and cooperation .",
"despite the impressive performance of sentence - level and context - aware neural machine translation ( nmt ) , there still remain challenges to translate biling... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2
],
"text": "neural chat translation",
"tokens": [
"neural",
"chat",
"translation"
]
}
],
... | [
"neural",
"chat",
"translation",
"aims",
"to",
"translate",
"bilingual",
"conversational",
"text",
",",
"which",
"has",
"a",
"broad",
"application",
"in",
"international",
"exchanges",
"and",
"cooperation",
".",
"despite",
"the",
"impressive",
"performance",
"of",
... |
ACL | Text2Event: Controllable Sequence-to-Structure Generation for End-to-end Event Extraction | Event extraction is challenging due to the complex structure of event records and the semantic gap between text and event. Traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks. In this paper, we propose Text2Event, a sequence-to-structure generati... | c90cef1f516e8badf22a128c561106f9 | 2021 | [
"event extraction is challenging due to the complex structure of event records and the semantic gap between text and event .",
"traditional methods usually extract event records by decomposing the complex structure prediction task into multiple subtasks .",
"in this paper , we propose text2event , a sequence - ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "event extraction",
"tokens": [
"event",
"extraction"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"event",
"extraction",
"is",
"challenging",
"due",
"to",
"the",
"complex",
"structure",
"of",
"event",
"records",
"and",
"the",
"semantic",
"gap",
"between",
"text",
"and",
"event",
".",
"traditional",
"methods",
"usually",
"extract",
"event",
"records",
"by",
... |
ACL | Dynamic Online Conversation Recommendation | Trending topics in social media content evolve over time, and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner. Here we study dynamic online conversation recommendation, to help users engage in conversations that satisfy their evolving interests. While ... | ecbf30317158098be707681aa76603c6 | 2,020 | [
"trending topics in social media content evolve over time , and it is therefore crucial to understand social media users and their interpersonal communications in a dynamic manner .",
"here we study dynamic online conversation recommendation , to help users engage in conversations that satisfy their evolving inte... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "trending topics in social media content",
"tokens": [
"trending",
"t... | [
"trending",
"topics",
"in",
"social",
"media",
"content",
"evolve",
"over",
"time",
",",
"and",
"it",
"is",
"therefore",
"crucial",
"to",
"understand",
"social",
"media",
"users",
"and",
"their",
"interpersonal",
"communications",
"in",
"a",
"dynamic",
"manner",... |
ACL | CogTaskonomy: Cognitively Inspired Task Taxonomy Is Beneficial to Transfer Learning in NLP | Is there a principle to guide transfer learning across tasks in natural language processing (NLP)? Taxonomy (Zamir et al., 2018) finds that a structure exists among visual tasks, as a principle underlying transfer learning for them. In this paper, we propose a cognitively inspired framework, CogTaskonomy, to learn taxo... | ec4c7ed76f51a240c6af18bb1f3ac226 | 2,022 | [
"is there a principle to guide transfer learning across tasks in natural language processing ( nlp ) ?",
"taxonomy ( zamir et al . , 2018 ) finds that a structure exists among visual tasks , as a principle underlying transfer learning for them .",
"in this paper , we propose a cognitively inspired framework , c... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
6,
7,
8,
9
],
"text": "transfer learning across tasks",
"tokens": [
"transfer",
"learning",
"across",
... | [
"is",
"there",
"a",
"principle",
"to",
"guide",
"transfer",
"learning",
"across",
"tasks",
"in",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"?",
"taxonomy",
"(",
"zamir",
"et",
"al",
".",
",",
"2018",
")",
"finds",
"that",
"a",
"structure",
"e... |
ACL | A Compact and Language-Sensitive Multilingual Translation Method | Multilingual neural machine translation (Multi-NMT) with one encoder-decoder model has made remarkable progress due to its simple deployment. However, this multilingual translation paradigm does not make full use of language commonality and parameter sharing between encoder and decoder. Furthermore, this kind of paradi... | 9b4710f5b7353775b86b4c553913c4bd | 2,019 | [
"multilingual neural machine translation ( multi - nmt ) with one encoder - decoder model has made remarkable progress due to its simple deployment .",
"however , this multilingual translation paradigm does not make full use of language commonality and parameter sharing between encoder and decoder .",
"furtherm... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7,
8,
9,
10,
11,
12,
13,
14
... | [
"multilingual",
"neural",
"machine",
"translation",
"(",
"multi",
"-",
"nmt",
")",
"with",
"one",
"encoder",
"-",
"decoder",
"model",
"has",
"made",
"remarkable",
"progress",
"due",
"to",
"its",
"simple",
"deployment",
".",
"however",
",",
"this",
"multilingua... |
ACL | Learning to Relate from Captions and Bounding Boxes | In this work, we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision. Our proposed approach uses a top-down attention mechanism to align entities in ... | 33686d9a1d7a3deafab74a274c9f44ac | 2,019 | [
"in this work , we propose a novel approach that predicts the relationships between various entities in an image in a weakly supervised manner by relying on image captions and object bounding box annotations as the sole source of supervision .",
"our proposed approach uses a top - down attention mechanism to alig... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"in",
"this",
"work",
",",
"we",
"propose",
"a",
"novel",
"approach",
"that",
"predicts",
"the",
"relationships",
"between",
"various",
"entities",
"in",
"an",
"image",
"in",
"a",
"weakly",
"supervised",
"manner",
"by",
"relying",
"on",
"image",
"captions",
... |
ACL | DialogVED: A Pre-trained Latent Variable Encoder-Decoder Model for Dialog Response Generation | Dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses. In this paper, we propose a new dialog pre-training framework called DialogVED, which introduces continuous latent variables into the enhanced encoder-decoder pre-training fra... | b8ec0641db84e87cab1967e91a0b4bf7 | 2,022 | [
"dialog response generation in open domain is an important research topic where the main challenge is to generate relevant and diverse responses .",
"in this paper , we propose a new dialog pre - training framework called dialogved , which introduces continuous latent variables into the enhanced encoder - decoder... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "dialog response generation",
"tokens": [
"dialog",
"response",
"generation"
]
},
... | [
"dialog",
"response",
"generation",
"in",
"open",
"domain",
"is",
"an",
"important",
"research",
"topic",
"where",
"the",
"main",
"challenge",
"is",
"to",
"generate",
"relevant",
"and",
"diverse",
"responses",
".",
"in",
"this",
"paper",
",",
"we",
"propose",
... |
ACL | Explicit Memory Tracker with Coarse-to-Fine Reasoning for Conversational Machine Reading | The goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions. Existing approaches are limited in their decision making due to struggles in extracting question-related rules and reasoning about them. In this paper, we present a new fr... | 2de06858f2afe9d7e0d3ea59f76846d9 | 2,020 | [
"the goal of conversational machine reading is to answer user questions given a knowledge base text which may require asking clarification questions .",
"existing approaches are limited in their decision making due to struggles in extracting question - related rules and reasoning about them .",
"in this paper ,... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "conversational machine reading",
"tokens": [
"conversational",
"machine",
"reading"
]
... | [
"the",
"goal",
"of",
"conversational",
"machine",
"reading",
"is",
"to",
"answer",
"user",
"questions",
"given",
"a",
"knowledge",
"base",
"text",
"which",
"may",
"require",
"asking",
"clarification",
"questions",
".",
"existing",
"approaches",
"are",
"limited",
... |
ACL | Direct Speech-to-Speech Translation With Discrete Units | We present a direct speech-to-speech translation (S2ST) model that translates speech from one language to speech in another language without relying on intermediate text generation. We tackle the problem by first applying a self-supervised discrete speech encoder on the target speech and then training a sequence-to-seq... | 544c154625d31ed4ad58811790dfa509 | 2,022 | [
"we present a direct speech - to - speech translation ( s2st ) model that translates speech from one language to speech in another language without relying on intermediate text generation .",
"we tackle the problem by first applying a self - supervised discrete speech encoder on the target speech and then trainin... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"present",
"a",
"direct",
"speech",
"-",
"to",
"-",
"speech",
"translation",
"(",
"s2st",
")",
"model",
"that",
"translates",
"speech",
"from",
"one",
"language",
"to",
"speech",
"in",
"another",
"language",
"without",
"relying",
"on",
"intermediate",
... |
ACL | Including Signed Languages in Natural Language Processing | Signed languages are the primary means of communication for many deaf and hard of hearing individuals. Since signed languages exhibit all the fundamental linguistic properties of natural language, we believe that tools and theories of Natural Language Processing (NLP) are crucial towards its modeling. However, existing... | 433a3b42c31aed5e6f6cc4cbfcff19ef | 2,021 | [
"signed languages are the primary means of communication for many deaf and hard of hearing individuals .",
"since signed languages exhibit all the fundamental linguistic properties of natural language , we believe that tools and theories of natural language processing ( nlp ) are crucial towards its modeling .",
... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0,
1
],
"text": "signed languages",
"tokens": [
"signed",
"languages"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"signed",
"languages",
"are",
"the",
"primary",
"means",
"of",
"communication",
"for",
"many",
"deaf",
"and",
"hard",
"of",
"hearing",
"individuals",
".",
"since",
"signed",
"languages",
"exhibit",
"all",
"the",
"fundamental",
"linguistic",
"properties",
"of",
"... |
ACL | Efficient Pairwise Annotation of Argument Quality | We present an efficient annotation framework for argument quality, a feature difficult to be measured reliably as per previous work. A stochastic transitivity model is combined with an effective sampling strategy to infer high-quality labels with low effort from crowdsourced pairwise judgments. The model’s capabilities... | 5f248c856b0953dd91c496ad6fd0dba4 | 2,020 | [
"we present an efficient annotation framework for argument quality , a feature difficult to be measured reliably as per previous work .",
"a stochastic transitivity model is combined with an effective sampling strategy to infer high - quality labels with low effort from crowdsourced pairwise judgments .",
"the ... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"present",
"an",
"efficient",
"annotation",
"framework",
"for",
"argument",
"quality",
",",
"a",
"feature",
"difficult",
"to",
"be",
"measured",
"reliably",
"as",
"per",
"previous",
"work",
".",
"a",
"stochastic",
"transitivity",
"model",
"is",
"combined",... |
ACL | Continual Quality Estimation with Online Bayesian Meta-Learning | Most current quality estimation (QE) models for machine translation are trained and evaluated in a static setting where training and test data are assumed to be from a fixed distribution. However, in real-life settings, the test data that a deployed QE model would be exposed to may differ from its training data. In par... | 56ed5b589528f051fca41634021c6cfa | 2,021 | [
"most current quality estimation ( qe ) models for machine translation are trained and evaluated in a static setting where training and test data are assumed to be from a fixed distribution .",
"however , in real - life settings , the test data that a deployed qe model would be exposed to may differ from its trai... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
9,
10
],
"text": "machine translation",
"tokens": [
"machine",
"translation"
]
},
{
"argument_type": "BaseCom... | [
"most",
"current",
"quality",
"estimation",
"(",
"qe",
")",
"models",
"for",
"machine",
"translation",
"are",
"trained",
"and",
"evaluated",
"in",
"a",
"static",
"setting",
"where",
"training",
"and",
"test",
"data",
"are",
"assumed",
"to",
"be",
"from",
"a"... |
ACL | ENGINE: Energy-Based Inference Networks for Non-Autoregressive Machine Translation | We propose to train a non-autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model. In particular, we view our non-autoregressive translation system as an inference network (Tu and Gimpel, 2018) trained to minimize the autoregressive teacher energy. This contrasts wit... | 730bcbe874c7361af52dc1484da356ef | 2,020 | [
"we propose to train a non - autoregressive machine translation model to minimize the energy defined by a pretrained autoregressive model .",
"in particular , we view our non - autoregressive translation system as an inference network ( tu and gimpel , 2018 ) trained to minimize the autoregressive teacher energy ... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"we",
"propose",
"to",
"train",
"a",
"non",
"-",
"autoregressive",
"machine",
"translation",
"model",
"to",
"minimize",
"the",
"energy",
"defined",
"by",
"a",
"pretrained",
"autoregressive",
"model",
".",
"in",
"particular",
",",
"we",
"view",
"our",
"non",
... |
ACL | Addressing Semantic Drift in Generative Question Answering with Auxiliary Extraction | Recently, question answering (QA) based on machine reading comprehension has become popular. This work focuses on generative QA which aims to generate an abstractive answer to a given question instead of extracting an answer span from a provided passage. Generative QA often suffers from two critical problems: (1) summa... | c74f7ba287b1c1241b617c8ba1c5d951 | 2,021 | [
"recently , question answering ( qa ) based on machine reading comprehension has become popular .",
"this work focuses on generative qa which aims to generate an abstractive answer to a given question instead of extracting an answer span from a provided passage .",
"generative qa often suffers from two critical... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3,
7,
8,
9,
10,
11
],
"text": "question answering ( qa ) based on machine reading comprehension",
"token... | [
"recently",
",",
"question",
"answering",
"(",
"qa",
")",
"based",
"on",
"machine",
"reading",
"comprehension",
"has",
"become",
"popular",
".",
"this",
"work",
"focuses",
"on",
"generative",
"qa",
"which",
"aims",
"to",
"generate",
"an",
"abstractive",
"answe... |
ACL | Know What You Don’t Know: Modeling a Pragmatic Speaker that Refers to Objects of Unknown Categories | Zero-shot learning in Language & Vision is the task of correctly labelling (or naming) objects of novel categories. Another strand of work in L&V aims at pragmatically informative rather than “correct” object descriptions, e.g. in reference games. We combine these lines of research and model zero-shot reference games, ... | f1ac64d5e4f2dd8a0196b27e6acb2406 | 2,019 | [
"zero - shot learning in language & vision is the task of correctly labelling ( or naming ) objects of novel categories .",
"another strand of work in l & v aims at pragmatically informative rather than “ correct ” object descriptions , e . g . in reference games .",
"we combine these lines of research and mode... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "zero - shot learning in language & vision",
"tokens": [
... | [
"zero",
"-",
"shot",
"learning",
"in",
"language",
"&",
"vision",
"is",
"the",
"task",
"of",
"correctly",
"labelling",
"(",
"or",
"naming",
")",
"objects",
"of",
"novel",
"categories",
".",
"another",
"strand",
"of",
"work",
"in",
"l",
"&",
"v",
"aims",
... |
ACL | Tailor: Generating and Perturbing Text with Semantic Controls | Controlled text perturbation is useful for evaluating and improving model generalizability. However, current techniques rely on training a model for every target perturbation, which is expensive and hard to generalize. We present Tailor, a semantically-controlled text generation system. Tailor builds on a pretrained se... | 22fa010c9faa07e916eb31b141e9ef24 | 2,022 | [
"controlled text perturbation is useful for evaluating and improving model generalizability .",
"however , current techniques rely on training a model for every target perturbation , which is expensive and hard to generalize .",
"we present tailor , a semantically - controlled text generation system .",
"tail... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "controlled text perturbation",
"tokens": [
"controlled",
"text",
"perturbation"
]
}... | [
"controlled",
"text",
"perturbation",
"is",
"useful",
"for",
"evaluating",
"and",
"improving",
"model",
"generalizability",
".",
"however",
",",
"current",
"techniques",
"rely",
"on",
"training",
"a",
"model",
"for",
"every",
"target",
"perturbation",
",",
"which"... |
ACL | Consistency Regularization for Cross-Lingual Fine-Tuning | Fine-tuning pre-trained cross-lingual language models can transfer task-specific supervision from one language to the others. In this work, we propose to improve cross-lingual fine-tuning with consistency regularization. Specifically, we use example consistency regularization to penalize the prediction sensitivity to f... | 647148450d21774b0bcb735b3d85dbe0 | 2,021 | [
"fine - tuning pre - trained cross - lingual language models can transfer task - specific supervision from one language to the others .",
"in this work , we propose to improve cross - lingual fine - tuning with consistency regularization .",
"specifically , we use example consistency regularization to penalize ... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
28
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"fine",
"-",
"tuning",
"pre",
"-",
"trained",
"cross",
"-",
"lingual",
"language",
"models",
"can",
"transfer",
"task",
"-",
"specific",
"supervision",
"from",
"one",
"language",
"to",
"the",
"others",
".",
"in",
"this",
"work",
",",
"we",
"propose",
"to",... |
ACL | Topic-Aware Neural Keyphrase Generation for Social Media Language | A huge volume of user-generated content is daily produced on social media. To facilitate automatic language understanding, we study keyphrase prediction, distilling salient information from massive posts. While most existing methods extract words from source posts to form keyphrases, we propose a sequence-to-sequence (... | 37aa646905c5702ad317a48e369db8c9 | 2,019 | [
"a huge volume of user - generated content is daily produced on social media .",
"to facilitate automatic language understanding , we study keyphrase prediction , distilling salient information from massive posts .",
"while most existing methods extract words from source posts to form keyphrases , we propose a ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "E-PUR",
"offsets": [
16
],
"text": "facilitate",
"tokens": [
"facilitate"
]
},
{
"argument_type": "Researcher",
"nugget_type": "OG",
... | [
"a",
"huge",
"volume",
"of",
"user",
"-",
"generated",
"content",
"is",
"daily",
"produced",
"on",
"social",
"media",
".",
"to",
"facilitate",
"automatic",
"language",
"understanding",
",",
"we",
"study",
"keyphrase",
"prediction",
",",
"distilling",
"salient",
... |
ACL | Evaluating Explainable AI: Which Algorithmic Explanations Help Users Predict Model Behavior? | Algorithmic approaches to interpreting machine learning models have proliferated in recent years. We carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability, simulatability, while avoiding important confounding experiment... | ad87f47dec93949d2daef7453217258c | 2,020 | [
"algorithmic approaches to interpreting machine learning models have proliferated in recent years .",
"we carry out human subject tests that are the first of their kind to isolate the effect of algorithmic explanations on a key aspect of model interpretability , simulatability , while avoiding important confoundi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
4,
5,
6
],
"text": "machine learning models",
"tokens": [
"machine",
"learning",
"models"
]
}
],
... | [
"algorithmic",
"approaches",
"to",
"interpreting",
"machine",
"learning",
"models",
"have",
"proliferated",
"in",
"recent",
"years",
".",
"we",
"carry",
"out",
"human",
"subject",
"tests",
"that",
"are",
"the",
"first",
"of",
"their",
"kind",
"to",
"isolate",
... |
ACL | OpenHands: Making Sign Language Recognition Accessible with Pose-based Pretrained Models across Languages | AI technologies for Natural Languages have made tremendous progress recently. However, commensurate progress has not been made on Sign Languages, in particular, in recognizing signs as individual words or as complete sentences. We introduce OpenHands, a library where we take four key ideas from the NLP community for lo... | a851cd507cecec30fe7af3c5e81e11f2 | 2,022 | [
"ai technologies for natural languages have made tremendous progress recently .",
"however , commensurate progress has not been made on sign languages , in particular , in recognizing signs as individual words or as complete sentences .",
"we introduce openhands , a library where we take four key ideas from the... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4
],
"text": "natural languages",
"tokens": [
"natural",
"languages"
]
}
],
"event_type": "ITT",
"trigger": ... | [
"ai",
"technologies",
"for",
"natural",
"languages",
"have",
"made",
"tremendous",
"progress",
"recently",
".",
"however",
",",
"commensurate",
"progress",
"has",
"not",
"been",
"made",
"on",
"sign",
"languages",
",",
"in",
"particular",
",",
"in",
"recognizing"... |
ACL | CraftAssist Instruction Parsing: Semantic Parsing for a Voxel-World Assistant | We propose a semantic parsing dataset focused on instruction-driven communication with an agent in the game Minecraft. The dataset consists of 7K human utterances and their corresponding parses. Given proper world state, the parses can be interpreted and executed in game. We report the performance of baseline models, a... | 47b0133ca77dec062449ad0be546d66b | 2,020 | [
"we propose a semantic parsing dataset focused on instruction - driven communication with an agent in the game minecraft .",
"the dataset consists of 7k human utterances and their corresponding parses .",
"given proper world state , the parses can be interpreted and executed in game .",
"we report the perform... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
... | [
"we",
"propose",
"a",
"semantic",
"parsing",
"dataset",
"focused",
"on",
"instruction",
"-",
"driven",
"communication",
"with",
"an",
"agent",
"in",
"the",
"game",
"minecraft",
".",
"the",
"dataset",
"consists",
"of",
"7k",
"human",
"utterances",
"and",
"their... |
ACL | Variational Graph Autoencoding as Cheap Supervision for AMR Coreference Resolution | Coreference resolution over semantic graphs like AMRs aims to group the graph nodes that represent the same entity. This is a crucial step for making document-level formal semantic representations. With annotated data on AMR coreference resolution, deep learning approaches have recently shown great potential for this t... | 71322ca60463d3a522b2b71ef00d18b3 | 2,022 | [
"coreference resolution over semantic graphs like amrs aims to group the graph nodes that represent the same entity .",
"this is a crucial step for making document - level formal semantic representations .",
"with annotated data on amr coreference resolution , deep learning approaches have recently shown great ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
26,
27,
28,
29,
30,
31
],
"text": "document - level formal semantic representations",
"tokens": [
"document... | [
"coreference",
"resolution",
"over",
"semantic",
"graphs",
"like",
"amrs",
"aims",
"to",
"group",
"the",
"graph",
"nodes",
"that",
"represent",
"the",
"same",
"entity",
".",
"this",
"is",
"a",
"crucial",
"step",
"for",
"making",
"document",
"-",
"level",
"fo... |
ACL | QA-Driven Zero-shot Slot Filling with Weak Supervision Pretraining | Slot-filling is an essential component for building task-oriented dialog systems. In this work, we focus on the zero-shot slot-filling problem, where the model needs to predict slots and their values, given utterances from new domains without training on the target domain. Prior methods directly encode slot description... | d234a4421483a33cbc549d1fd60397d6 | 2,021 | [
"slot - filling is an essential component for building task - oriented dialog systems .",
"in this work , we focus on the zero - shot slot - filling problem , where the model needs to predict slots and their values , given utterances from new domains without training on the target domain .",
"prior methods dire... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
23,
24,
25,
26,
27,
28,
29
],
"text": "zero - shot slot - filling problem",
"tokens": [
"zero",
... | [
"slot",
"-",
"filling",
"is",
"an",
"essential",
"component",
"for",
"building",
"task",
"-",
"oriented",
"dialog",
"systems",
".",
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"the",
"zero",
"-",
"shot",
"slot",
"-",
"filling",
"problem",
",",
"where... |
ACL | Document-level Event Extraction via Parallel Prediction Networks | Document-level event extraction (DEE) is indispensable when events are described throughout a document. We argue that sentence-level extractors are ill-suited to the DEE task where event arguments always scatter across sentences and multiple events may co-exist in a document. It is a challenging task because it require... | 57ab98c6c06a45ffb3219eb340a469f4 | 2,021 | [
"document - level event extraction ( dee ) is indispensable when events are described throughout a document .",
"we argue that sentence - level extractors are ill - suited to the dee task where event arguments always scatter across sentences and multiple events may co - exist in a document .",
"it is a challeng... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4
],
"text": "document - level event extraction",
"tokens": [
"document",
"-",
"level... | [
"document",
"-",
"level",
"event",
"extraction",
"(",
"dee",
")",
"is",
"indispensable",
"when",
"events",
"are",
"described",
"throughout",
"a",
"document",
".",
"we",
"argue",
"that",
"sentence",
"-",
"level",
"extractors",
"are",
"ill",
"-",
"suited",
"to... |
ACL | Probing Linguistic Systematicity | Recently, there has been much interest in the question of whether deep natural language understanding (NLU) models exhibit systematicity, generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear. There is accumulating evidence that neural models do not l... | ae9ac4c1d62ff768a8db745e76b05f67 | 2,020 | [
"recently , there has been much interest in the question of whether deep natural language understanding ( nlu ) models exhibit systematicity , generalizing such that units like words make consistent contributions to the meaning of the sentences in which they appear .",
"there is accumulating evidence that neural ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
12,
13,
14,
15,
16,
17,
18,
19
],
"text": "deep natural language understanding ( nlu ) models",
"... | [
"recently",
",",
"there",
"has",
"been",
"much",
"interest",
"in",
"the",
"question",
"of",
"whether",
"deep",
"natural",
"language",
"understanding",
"(",
"nlu",
")",
"models",
"exhibit",
"systematicity",
",",
"generalizing",
"such",
"that",
"units",
"like",
... |
ACL | The Cascade Transformer: an Application for Efficient Answer Sentence Selection | Large transformer-based language models have been shown to be very effective in many classification tasks. However, their computational complexity prevents their use in applications requiring the classification of a large set of candidates. While previous works have investigated approaches to reduce model size, relativ... | b514da85a490f843ba2aec68d5b5d879 | 2,020 | [
"large transformer - based language models have been shown to be very effective in many classification tasks .",
"however , their computational complexity prevents their use in applications requiring the classification of a large set of candidates .",
"while previous works have investigated approaches to reduce... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "large transformer - based language models",
"tokens": [
"large",
"tr... | [
"large",
"transformer",
"-",
"based",
"language",
"models",
"have",
"been",
"shown",
"to",
"be",
"very",
"effective",
"in",
"many",
"classification",
"tasks",
".",
"however",
",",
"their",
"computational",
"complexity",
"prevents",
"their",
"use",
"in",
"applica... |
ACL | Conditional Augmentation for Aspect Term Extraction via Masked Sequence-to-Sequence Generation | Aspect term extraction aims to extract aspect terms from review texts as opinion targets for sentiment analysis. One of the big challenges with this task is the lack of sufficient annotated data. While data augmentation is potentially an effective technique to address the above issue, it is uncontrollable as it may cha... | 6516e1f4845e3d00240687acc970ac94 | 2,020 | [
"aspect term extraction aims to extract aspect terms from review texts as opinion targets for sentiment analysis .",
"one of the big challenges with this task is the lack of sufficient annotated data .",
"while data augmentation is potentially an effective technique to address the above issue , it is uncontroll... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "aspect term extraction",
"tokens": [
"aspect",
"term",
"extraction"
]
}
],
... | [
"aspect",
"term",
"extraction",
"aims",
"to",
"extract",
"aspect",
"terms",
"from",
"review",
"texts",
"as",
"opinion",
"targets",
"for",
"sentiment",
"analysis",
".",
"one",
"of",
"the",
"big",
"challenges",
"with",
"this",
"task",
"is",
"the",
"lack",
"of"... |
ACL | GCDT: A Global Context Enhanced Deep Transition Architecture for Sequence Labeling | Current state-of-the-art systems for sequence labeling are typically based on the family of Recurrent Neural Networks (RNNs). However, the shallow connections between consecutive hidden states of RNNs and insufficient modeling of global information restrict the potential performance of those models. In this paper, we t... | 8f9e42e15b0058618aeeab31defafa81 | 2,019 | [
"current state - of - the - art systems for sequence labeling are typically based on the family of recurrent neural networks ( rnns ) .",
"however , the shallow connections between consecutive hidden states of rnns and insufficient modeling of global information restrict the potential performance of those models ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
19,
20,
21
],
"text": "recurrent neural networks",
"tokens": [
"recurrent",
"neural",
"networks"
]
}
... | [
"current",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"systems",
"for",
"sequence",
"labeling",
"are",
"typically",
"based",
"on",
"the",
"family",
"of",
"recurrent",
"neural",
"networks",
"(",
"rnns",
")",
".",
"however",
",",
"the",
"shallow",
"connecti... |
ACL | Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network | In this paper, we explore the slot tagging with only a few labeled support sentences (a.k.a. few-shot). Few-shot slot tagging faces a unique challenge compared to the other fewshot classification problems as it calls for modeling the dependencies between labels. But it is hard to apply previously learned label dependen... | 85aa7c48f88e5ca6ec89eaafbdaf31dc | 2,020 | [
"in this paper , we explore the slot tagging with only a few labeled support sentences ( a . k . a . few - shot ) .",
"few - shot slot tagging faces a unique challenge compared to the other fewshot classification problems as it calls for modeling the dependencies between labels .",
"but it is hard to apply prev... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
... | [
"in",
"this",
"paper",
",",
"we",
"explore",
"the",
"slot",
"tagging",
"with",
"only",
"a",
"few",
"labeled",
"support",
"sentences",
"(",
"a",
".",
"k",
".",
"a",
".",
"few",
"-",
"shot",
")",
".",
"few",
"-",
"shot",
"slot",
"tagging",
"faces",
"... |
ACL | Bridging Anaphora Resolution as Question Answering | Most previous studies on bridging anaphora resolution (Poesio et al., 2004; Hou et al., 2013b; Hou, 2018a) use the pairwise model to tackle the problem and assume that the gold mention information is given. In this paper, we cast bridging anaphora resolution as question answering based on context. This allows us to fin... | db046c7edbe27333a8e99f97c863c6ae | 2,020 | [
"most previous studies on bridging anaphora resolution ( poesio et al . , 2004 ; hou et al . , 2013b ; hou , 2018a ) use the pairwise model to tackle the problem and assume that the gold mention information is given .",
"in this paper , we cast bridging anaphora resolution as question answering based on context .... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5,
6
],
"text": "bridging anaphora resolution",
"tokens": [
"bridging",
"anaphora",
"resolution"
]
}... | [
"most",
"previous",
"studies",
"on",
"bridging",
"anaphora",
"resolution",
"(",
"poesio",
"et",
"al",
".",
",",
"2004",
";",
"hou",
"et",
"al",
".",
",",
"2013b",
";",
"hou",
",",
"2018a",
")",
"use",
"the",
"pairwise",
"model",
"to",
"tackle",
"the",
... |
ACL | Improving Word Translation via Two-Stage Contrastive Learning | Word translation or bilingual lexicon induction (BLI) is a key cross-lingual task, aiming to bridge the lexical gap between different languages. In this work, we propose a robust and effective two-stage contrastive learning framework for the BLI task. At Stage C1, we propose to refine standard cross-lingual linear maps... | f6893a1f6f0bc2a847bccb017525c3df | 2,022 | [
"word translation or bilingual lexicon induction ( bli ) is a key cross - lingual task , aiming to bridge the lexical gap between different languages .",
"in this work , we propose a robust and effective two - stage contrastive learning framework for the bli task .",
"at stage c1 , we propose to refine standard... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "word translation",
"tokens": [
"word",
"translation"
]
},
{
"argument_type": "Target",
... | [
"word",
"translation",
"or",
"bilingual",
"lexicon",
"induction",
"(",
"bli",
")",
"is",
"a",
"key",
"cross",
"-",
"lingual",
"task",
",",
"aiming",
"to",
"bridge",
"the",
"lexical",
"gap",
"between",
"different",
"languages",
".",
"in",
"this",
"work",
",... |
ACL | Explicitly Capturing Relations between Entity Mentions via Graph Neural Networks for Domain-specific Named Entity Recognition | Named entity recognition (NER) is well studied for the general domain, and recent systems have achieved human-level performance for identifying common entity types. However, the NER performance is still moderate for specialized domains that tend to feature complicated contexts and jargonistic entity types. To address t... | 424d96f5896d0e963fa261c9db0e537d | 2,021 | [
"named entity recognition ( ner ) is well studied for the general domain , and recent systems have achieved human - level performance for identifying common entity types .",
"however , the ner performance is still moderate for specialized domains that tend to feature complicated contexts and jargonistic entity ty... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "named entity recognition",
"tokens": [
"named",
"entity",
"recognition"
]
}
],
... | [
"named",
"entity",
"recognition",
"(",
"ner",
")",
"is",
"well",
"studied",
"for",
"the",
"general",
"domain",
",",
"and",
"recent",
"systems",
"have",
"achieved",
"human",
"-",
"level",
"performance",
"for",
"identifying",
"common",
"entity",
"types",
".",
... |
ACL | Controllable Paraphrase Generation with a Syntactic Exemplar | Prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori. In this work, we propose a novel task, where the syntax of a generated sentence is controlled rather by a sentential exemplar. To evaluate quantitatively with standard metric... | 10ba7e613a19c9f4a0a1f0a21af0ae76 | 2,019 | [
"prior work on controllable text generation usually assumes that the controlled attribute can take on one of a small set of values known a priori .",
"in this work , we propose a novel task , where the syntax of a generated sentence is controlled rather by a sentential exemplar .",
"to evaluate quantitatively w... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
3,
4,
5
],
"text": "controllable text generation",
"tokens": [
"controllable",
"text",
"generation"
]
}... | [
"prior",
"work",
"on",
"controllable",
"text",
"generation",
"usually",
"assumes",
"that",
"the",
"controlled",
"attribute",
"can",
"take",
"on",
"one",
"of",
"a",
"small",
"set",
"of",
"values",
"known",
"a",
"priori",
".",
"in",
"this",
"work",
",",
"we"... |
ACL | Multi-hop Reading Comprehension across Multiple Documents by Reasoning over Heterogeneous Graphs | Multi-hop reading comprehension (RC) across documents poses new challenge over single-document RC because it requires reasoning over multiple documents to reach the final answer. In this paper, we propose a new model to tackle the multi-hop RC problem. We introduce a heterogeneous graph with different types of nodes an... | 049972ca3baba47fc25a786c86cabed9 | 2,019 | [
"multi - hop reading comprehension ( rc ) across documents poses new challenge over single - document rc because it requires reasoning over multiple documents to reach the final answer .",
"in this paper , we propose a new model to tackle the multi - hop rc problem .",
"we introduce a heterogeneous graph with d... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4
],
"text": "multi - hop reading comprehension",
"tokens": [
"multi",
"-",
"hop",
... | [
"multi",
"-",
"hop",
"reading",
"comprehension",
"(",
"rc",
")",
"across",
"documents",
"poses",
"new",
"challenge",
"over",
"single",
"-",
"document",
"rc",
"because",
"it",
"requires",
"reasoning",
"over",
"multiple",
"documents",
"to",
"reach",
"the",
"fina... |
ACL | MOOCCube: A Large-scale Data Repository for NLP Applications in MOOCs | The prosperity of Massive Open Online Courses (MOOCs) provides fodder for many NLP and AI research for education applications, e.g., course concept extraction, prerequisite relation discovery, etc. However, the publicly available datasets of MOOC are limited in size with few types of data, which hinders advanced models... | b8ad4769d291e70991b41a4f3a7e25ad | 2,020 | [
"the prosperity of massive open online courses ( moocs ) provides fodder for many nlp and ai research for education applications , e . g . , course concept extraction , prerequisite relation discovery , etc .",
"however , the publicly available datasets of mooc are limited in size with few types of data , which h... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
19,
20
],
"text": "education applications",
"tokens": [
"education",
"applications"
]
}
],
"event_type": "ITT",
... | [
"the",
"prosperity",
"of",
"massive",
"open",
"online",
"courses",
"(",
"moocs",
")",
"provides",
"fodder",
"for",
"many",
"nlp",
"and",
"ai",
"research",
"for",
"education",
"applications",
",",
"e",
".",
"g",
".",
",",
"course",
"concept",
"extraction",
... |
ACL | DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference | Large-scale pre-trained language models such as BERT have brought significant improvements to NLP applications. However, they are also notorious for being slow in inference, which makes them difficult to deploy in real-time applications. We propose a simple but effective method, DeeBERT, to accelerate BERT inference. O... | 4b6fe7b5ad1860a72ffa9af1eced3b84 | 2,020 | [
"large - scale pre - trained language models such as bert have brought significant improvements to nlp applications .",
"however , they are also notorious for being slow in inference , which makes them difficult to deploy in real - time applications .",
"we propose a simple but effective method , deebert , to a... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
0,
1,
2,
3,
4,
5,
6,
7
],
"text": "large - scale pre - trained language models",
"tokens": [
... | [
"large",
"-",
"scale",
"pre",
"-",
"trained",
"language",
"models",
"such",
"as",
"bert",
"have",
"brought",
"significant",
"improvements",
"to",
"nlp",
"applications",
".",
"however",
",",
"they",
"are",
"also",
"notorious",
"for",
"being",
"slow",
"in",
"i... |
ACL | Gender Bias in Multilingual Embeddings and Cross-Lingual Transfer | Multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language. These embeddings have been widely used in various settings, such as cross-lingual transfer, where a natural language processing (NLP) model ... | 097a7600508cd9212fb670a91130fa8d | 2,020 | [
"multilingual representations embed words from many languages into a single semantic space such that words with similar meanings are close to each other regardless of the language .",
"these embeddings have been widely used in various settings , such as cross - lingual transfer , where a natural language processi... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
40,
41,
42,
43
],
"text": "cross - lingual transfer",
"tokens": [
"cross",
"-",
"lingual",
"trans... | [
"multilingual",
"representations",
"embed",
"words",
"from",
"many",
"languages",
"into",
"a",
"single",
"semantic",
"space",
"such",
"that",
"words",
"with",
"similar",
"meanings",
"are",
"close",
"to",
"each",
"other",
"regardless",
"of",
"the",
"language",
".... |
ACL | Aspect Sentiment Classification with Document-level Sentiment Preference Modeling | In the literature, existing studies always consider Aspect Sentiment Classification (ASC) as an independent sentence-level classification problem aspect by aspect, which largely ignore the document-level sentiment preference information, though obviously such information is crucial for alleviating the information defic... | ddfa2a00128b08b7605ebc24925da970 | 2,020 | [
"in the literature , existing studies always consider aspect sentiment classification ( asc ) as an independent sentence - level classification problem aspect by aspect , which largely ignore the document - level sentiment preference information , though obviously such information is crucial for alleviating the inf... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
187
],
"text": "aspect sentiment classification",
"tokens": [
"asc"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
... | [
"in",
"the",
"literature",
",",
"existing",
"studies",
"always",
"consider",
"aspect",
"sentiment",
"classification",
"(",
"asc",
")",
"as",
"an",
"independent",
"sentence",
"-",
"level",
"classification",
"problem",
"aspect",
"by",
"aspect",
",",
"which",
"larg... |
ACL | What Was Written vs. Who Read It: News Media Profiling Using Text Analysis and Social Media Context | Predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling, which is an understudied but an increasingly important research direction. The present level of proliferation of fake, biased, and propagandistic content online has made it impossible to fact-c... | f0836257dd4df8ac15ef193f4376fda5 | 2,020 | [
"predicting the political bias and the factuality of reporting of entire news outlets are critical elements of media profiling , which is an understudied but an increasingly important research direction .",
"the present level of proliferation of fake , biased , and propagandistic content online has made it imposs... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
17,
18
],
"text": "media profiling",
"tokens": [
"media",
"profiling"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"predicting",
"the",
"political",
"bias",
"and",
"the",
"factuality",
"of",
"reporting",
"of",
"entire",
"news",
"outlets",
"are",
"critical",
"elements",
"of",
"media",
"profiling",
",",
"which",
"is",
"an",
"understudied",
"but",
"an",
"increasingly",
"importa... |
ACL | Complex Question Decomposition for Semantic Parsing | In this work, we focus on complex question semantic parsing and propose a novel Hierarchical Semantic Parsing (HSP) method, which utilizes the decompositionality of complex questions for semantic parsing. Our model is designed within a three-stage parsing architecture based on the idea of decomposition-integration. In ... | a0adb2b169761652c8ae379e6bfd1f19 | 2,019 | [
"in this work , we focus on complex question semantic parsing and propose a novel hierarchical semantic parsing ( hsp ) method , which utilizes the decompositionality of complex questions for semantic parsing .",
"our model is designed within a three - stage parsing architecture based on the idea of decomposition... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"complex",
"question",
"semantic",
"parsing",
"and",
"propose",
"a",
"novel",
"hierarchical",
"semantic",
"parsing",
"(",
"hsp",
")",
"method",
",",
"which",
"utilizes",
"the",
"decompositionality",
"of",
"comple... |
ACL | Automated Evaluation of Writing – 50 Years and Counting | In this theme paper, we focus on Automated Writing Evaluation (AWE), using Ellis Page’s seminal 1966 paper to frame the presentation. We discuss some of the current frontiers in the field and offer some thoughts on the emergent uses of this technology. | 15160ec78e25daa7e00b4498c36bba28 | 2,020 | [
"in this theme paper , we focus on automated writing evaluation ( awe ) , using ellis page ’ s seminal 1966 paper to frame the presentation .",
"we discuss some of the current frontiers in the field and offer some thoughts on the emergent uses of this technology ."
] | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
5
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"in",
"this",
"theme",
"paper",
",",
"we",
"focus",
"on",
"automated",
"writing",
"evaluation",
"(",
"awe",
")",
",",
"using",
"ellis",
"page",
"’",
"s",
"seminal",
"1966",
"paper",
"to",
"frame",
"the",
"presentation",
".",
"we",
"discuss",
"some",
"of"... |
ACL | Automatic Detection of Entity-Manipulated Text using Factual Knowledge | In this work, we focus on the problem of distinguishing a human written news article from a news article that is created by manipulating entities in a human written news article (e.g., replacing entities with factually incorrect entities). Such manipulated articles can mislead the reader by posing as a human written ne... | 92e2e8c247e3215a1c4bb5f709ae4cce | 2,022 | [
"in this work , we focus on the problem of distinguishing a human written news article from a news article that is created by manipulating entities in a human written news article ( e . g . , replacing entities with factually incorrect entities ) .",
"such manipulated articles can mislead the reader by posing as ... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"in",
"this",
"work",
",",
"we",
"focus",
"on",
"the",
"problem",
"of",
"distinguishing",
"a",
"human",
"written",
"news",
"article",
"from",
"a",
"news",
"article",
"that",
"is",
"created",
"by",
"manipulating",
"entities",
"in",
"a",
"human",
"written",
... |
ACL | Unsupervised Domain Clusters in Pretrained Language Models | The notion of “in-domain data” in NLP is often over-simplistic and vague, as textual data varies in many nuanced linguistic aspects such as topic, style or level of formality. In addition, domain labels are many times unavailable, making it challenging to build domain-specific systems. We show that massive pre-trained ... | e6d492ba8477771dd66891c4e6aac0c5 | 2,020 | [
"the notion of “ in - domain data ” in nlp is often over - simplistic and vague , as textual data varies in many nuanced linguistic aspects such as topic , style or level of formality .",
"in addition , domain labels are many times unavailable , making it challenging to build domain - specific systems .",
"we s... | [
{
"arguments": [
{
"argument_type": "Concern",
"nugget_type": "DST",
"offsets": [
4,
5,
6,
7
],
"text": "in - domain data",
"tokens": [
"in",
"-",
"domain",
"data"
]
... | [
"the",
"notion",
"of",
"“",
"in",
"-",
"domain",
"data",
"”",
"in",
"nlp",
"is",
"often",
"over",
"-",
"simplistic",
"and",
"vague",
",",
"as",
"textual",
"data",
"varies",
"in",
"many",
"nuanced",
"linguistic",
"aspects",
"such",
"as",
"topic",
",",
"... |
ACL | Identifying Visible Actions in Lifestyle Vlogs | We consider the task of identifying human actions visible in online videos. We focus on the widely spread genre of lifestyle vlogs, which consist of videos of people performing actions while verbally describing them. Our goal is to identify if actions mentioned in the speech description of a video are visually present.... | 98e2dfb868d9c42d24ffedc6e1917178 | 2,019 | [
"we consider the task of identifying human actions visible in online videos .",
"we focus on the widely spread genre of lifestyle vlogs , which consist of videos of people performing actions while verbally describing them .",
"our goal is to identify if actions mentioned in the speech description of a video are... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
5,
6,
7,
8
],
"text": "identifying human actions visible",
"tokens": [
"identifying",
"human",
"actions",
... | [
"we",
"consider",
"the",
"task",
"of",
"identifying",
"human",
"actions",
"visible",
"in",
"online",
"videos",
".",
"we",
"focus",
"on",
"the",
"widely",
"spread",
"genre",
"of",
"lifestyle",
"vlogs",
",",
"which",
"consist",
"of",
"videos",
"of",
"people",
... |
ACL | WatClaimCheck: A new Dataset for Claim Entailment and Inference | We contribute a new dataset for the task of automated fact checking and an evaluation of state of the art algorithms. The dataset includes claims (from speeches, interviews, social media and news articles), review articles published by professional fact checkers and premise articles used by those professional fact chec... | 22d41006008f1191f4fd90e1f24d7f25 | 2,022 | [
"we contribute a new dataset for the task of automated fact checking and an evaluation of state of the art algorithms .",
"the dataset includes claims ( from speeches , interviews , social media and news articles ) , review articles published by professional fact checkers and premise articles used by those profes... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
... | [
"we",
"contribute",
"a",
"new",
"dataset",
"for",
"the",
"task",
"of",
"automated",
"fact",
"checking",
"and",
"an",
"evaluation",
"of",
"state",
"of",
"the",
"art",
"algorithms",
".",
"the",
"dataset",
"includes",
"claims",
"(",
"from",
"speeches",
",",
"... |
ACL | Fast and Accurate Deep Bidirectional Language Representations for Unsupervised Learning | Even though BERT has achieved successful performance improvements in various supervised learning tasks, BERT is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations. To resolve this limitation, we propose a novel deep bidirectional language model called... | 1716f68f1eec3a5f3903ec64bc8cc949 | 2,020 | [
"even though bert has achieved successful performance improvements in various supervised learning tasks , bert is still limited by repetitive inferences on unsupervised tasks for the computation of contextual language representations .",
"to resolve this limitation , we propose a novel deep bidirectional language... | [
{
"arguments": [
{
"argument_type": "Fault",
"nugget_type": "WEA",
"offsets": [
17
],
"text": "limited",
"tokens": [
"limited"
]
},
{
"argument_type": "Concern",
"nugget_type": "APP",
"offsets":... | [
"even",
"though",
"bert",
"has",
"achieved",
"successful",
"performance",
"improvements",
"in",
"various",
"supervised",
"learning",
"tasks",
",",
"bert",
"is",
"still",
"limited",
"by",
"repetitive",
"inferences",
"on",
"unsupervised",
"tasks",
"for",
"the",
"com... |
ACL | Document Translation vs. Query Translation for Cross-Lingual Information Retrieval in the Medical Domain | We present a thorough comparison of two principal approaches to Cross-Lingual Information Retrieval: document translation (DT) and query translation (QT). Our experiments are conducted using the cross-lingual test collection produced within the CLEF eHealth information retrieval tasks in 2013–2015 containing English do... | bf246fff8807bce9c1b1246cea605c18 | 2,020 | [
"we present a thorough comparison of two principal approaches to cross - lingual information retrieval : document translation ( dt ) and query translation ( qt ) .",
"our experiments are conducted using the cross - lingual test collection produced within the clef ehealth information retrieval tasks in 2013 – 2015... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"present",
"a",
"thorough",
"comparison",
"of",
"two",
"principal",
"approaches",
"to",
"cross",
"-",
"lingual",
"information",
"retrieval",
":",
"document",
"translation",
"(",
"dt",
")",
"and",
"query",
"translation",
"(",
"qt",
")",
".",
"our",
"exp... |
ACL | Synthetic Question Value Estimation for Domain Adaptation of Question Answering | Synthesizing QA pairs with a question generator (QG) on the target domain has become a popular approach for domain adaptation of question answering (QA) models. Since synthetic questions are often noisy in practice, existing work adapts scores from a pretrained QA (or QG) model as criteria to select high-quality questi... | 0645ace4c01ac874a0def0c40c48635b | 2,022 | [
"synthesizing qa pairs with a question generator ( qg ) on the target domain has become a popular approach for domain adaptation of question answering ( qa ) models .",
"since synthetic questions are often noisy in practice , existing work adapts scores from a pretrained qa ( or qg ) model as criteria to select h... | [
{
"arguments": [
{
"argument_type": "Condition",
"nugget_type": "LIM",
"offsets": [
36,
37
],
"text": "in practice",
"tokens": [
"in",
"practice"
]
},
{
"argument_type": "Concern",
"... | [
"synthesizing",
"qa",
"pairs",
"with",
"a",
"question",
"generator",
"(",
"qg",
")",
"on",
"the",
"target",
"domain",
"has",
"become",
"a",
"popular",
"approach",
"for",
"domain",
"adaptation",
"of",
"question",
"answering",
"(",
"qa",
")",
"models",
".",
... |
ACL | MOROCO: The Moldavian and Romanian Dialectal Corpus | In this work, we introduce the MOldavian and ROmanian Dialectal COrpus (MOROCO), which is freely available for download at https://github.com/butnaruandrei/MOROCO. The corpus contains 33564 samples of text (with over 10 million tokens) collected from the news domain. The samples belong to one of the following six topic... | dcd6a027e32c99d0638e217a6bc6ba32 | 2,019 | [
"in this work , we introduce the moldavian and romanian dialectal corpus ( moroco ) , which is freely available for download at https : / / github . com / butnaruandrei / moroco .",
"the corpus contains 33564 samples of text ( with over 10 million tokens ) collected from the news domain .",
"the samples belong ... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "DST",
"offsets": [
... | [
"in",
"this",
"work",
",",
"we",
"introduce",
"the",
"moldavian",
"and",
"romanian",
"dialectal",
"corpus",
"(",
"moroco",
")",
",",
"which",
"is",
"freely",
"available",
"for",
"download",
"at",
"https",
":",
"/",
"/",
"github",
".",
"com",
"/",
"butnar... |
ACL | An Empirical Investigation of Structured Output Modeling for Graph-based Neural Dependency Parsing | In this paper, we investigate the aspect of structured output modeling for the state-of-the-art graph-based neural dependency parser (Dozat and Manning, 2017). With evaluations on 14 treebanks, we empirically show that global output-structured models can generally obtain better performance, especially on the metric of ... | 0208939f00017ead964e2aea4ca2ae6d | 2,019 | [
"in this paper , we investigate the aspect of structured output modeling for the state - of - the - art graph - based neural dependency parser ( dozat and manning , 2017 ) .",
"with evaluations on 14 treebanks , we empirically show that global output - structured models can generally obtain better performance , e... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
4
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"in",
"this",
"paper",
",",
"we",
"investigate",
"the",
"aspect",
"of",
"structured",
"output",
"modeling",
"for",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"graph",
"-",
"based",
"neural",
"dependency",
"parser",
"(",
"dozat",
"and",
"manning",
... |
ACL | A Simple Theoretical Model of Importance for Summarization | Research on summarization has mainly been driven by empirical approaches, crafting systems to perform well on standard datasets with the notion of information Importance remaining latent. We argue that establishing theoretical models of Importance will advance our understanding of the task and help to further improve s... | b52c87438699253d7eae3d86c46011fb | 2,019 | [
"research on summarization has mainly been driven by empirical approaches , crafting systems to perform well on standard datasets with the notion of information importance remaining latent .",
"we argue that establishing theoretical models of importance will advance our understanding of the task and help to furth... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2
],
"text": "summarization",
"tokens": [
"summarization"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
6
... | [
"research",
"on",
"summarization",
"has",
"mainly",
"been",
"driven",
"by",
"empirical",
"approaches",
",",
"crafting",
"systems",
"to",
"perform",
"well",
"on",
"standard",
"datasets",
"with",
"the",
"notion",
"of",
"information",
"importance",
"remaining",
"late... |
ACL | Syntopical Graphs for Computational Argumentation Tasks | Approaches to computational argumentation tasks such as stance detection and aspect detection have largely focused on the text of independent claims, losing out on potentially valuable context provided by the rest of the collection. We introduce a general approach to these tasks motivated by syntopical reading, a readi... | 6642e0f6f10a935613208605f482f051 | 2,021 | [
"approaches to computational argumentation tasks such as stance detection and aspect detection have largely focused on the text of independent claims , losing out on potentially valuable context provided by the rest of the collection .",
"we introduce a general approach to these tasks motivated by syntopical read... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
2,
3,
4
],
"text": "computational argumentation tasks",
"tokens": [
"computational",
"argumentation",
"tasks"
... | [
"approaches",
"to",
"computational",
"argumentation",
"tasks",
"such",
"as",
"stance",
"detection",
"and",
"aspect",
"detection",
"have",
"largely",
"focused",
"on",
"the",
"text",
"of",
"independent",
"claims",
",",
"losing",
"out",
"on",
"potentially",
"valuable... |
ACL | POS-Constrained Parallel Decoding for Non-autoregressive Generation | The multimodality problem has become a major challenge of existing non-autoregressive generation (NAG) systems. A common solution often resorts to sequence-level knowledge distillation by rebuilding the training dataset through autoregressive generation (hereinafter known as “teacher AG”). The success of such methods m... | 847defa9acc9889b99cff5451802e0be | 2,021 | [
"the multimodality problem has become a major challenge of existing non - autoregressive generation ( nag ) systems .",
"a common solution often resorts to sequence - level knowledge distillation by rebuilding the training dataset through autoregressive generation ( hereinafter known as “ teacher ag ” ) .",
"th... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "APP",
"offsets": [
115,
17
],
"text": "non - autoregressive generation ( nag ) systems",
"tokens": [
"nag",
"systems"
]
}
],
"event_ty... | [
"the",
"multimodality",
"problem",
"has",
"become",
"a",
"major",
"challenge",
"of",
"existing",
"non",
"-",
"autoregressive",
"generation",
"(",
"nag",
")",
"systems",
".",
"a",
"common",
"solution",
"often",
"resorts",
"to",
"sequence",
"-",
"level",
"knowle... |
ACL | SP-10K: A Large-scale Evaluation Set for Selectional Preference Acquisition | Selectional Preference (SP) is a commonly observed language phenomenon and proved to be useful in many natural language processing tasks. To provide a better evaluation method for SP models, we introduce SP-10K, a large-scale evaluation set that provides human ratings for the plausibility of 10,000 SP pairs over five S... | 482368d7e911254c437439f618798499 | 2,019 | [
"selectional preference ( sp ) is a commonly observed language phenomenon and proved to be useful in many natural language processing tasks .",
"to provide a better evaluation method for sp models , we introduce sp - 10k , a large - scale evaluation set that provides human ratings for the plausibility of 10 , 000... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
18,
19,
20
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
... | [
"selectional",
"preference",
"(",
"sp",
")",
"is",
"a",
"commonly",
"observed",
"language",
"phenomenon",
"and",
"proved",
"to",
"be",
"useful",
"in",
"many",
"natural",
"language",
"processing",
"tasks",
".",
"to",
"provide",
"a",
"better",
"evaluation",
"met... |
ACL | VIFIDEL: Evaluating the Visual Fidelity of Image Descriptions | We address the task of evaluating image description generation systems. We propose a novel image-aware metric for this task: VIFIDEL. It estimates the faithfulness of a generated caption with respect to the content of the actual image, based on the semantic similarity between labels of objects depicted in images and wo... | 2187bbed861e54974c0ab02725f0d059 | 2,019 | [
"we address the task of evaluating image description generation systems .",
"we propose a novel image - aware metric for this task : vifidel .",
"it estimates the faithfulness of a generated caption with respect to the content of the actual image , based on the semantic similarity between labels of objects depi... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"address",
"the",
"task",
"of",
"evaluating",
"image",
"description",
"generation",
"systems",
".",
"we",
"propose",
"a",
"novel",
"image",
"-",
"aware",
"metric",
"for",
"this",
"task",
":",
"vifidel",
".",
"it",
"estimates",
"the",
"faithfulness",
"o... |
ACL | Are Girls Neko or Shōjo? Cross-Lingual Alignment of Non-Isomorphic Embeddings with Iterative Normalization | Cross-lingual word embeddings (CLWE) underlie many multilingual natural language processing systems, often through orthogonal transformations of pre-trained monolingual embeddings. However, orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic. For non-isomorphic pairs, our method (I... | cd6693052f07194f38c18c5a516fe99c | 2,019 | [
"cross - lingual word embeddings ( clwe ) underlie many multilingual natural language processing systems , often through orthogonal transformations of pre - trained monolingual embeddings .",
"however , orthogonal mapping only works on language pairs whose embeddings are naturally isomorphic .",
"for non - isom... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0,
1,
2,
3,
4
],
"text": "cross - lingual word embeddings",
"tokens": [
"cross",
"-",
"lingual",
... | [
"cross",
"-",
"lingual",
"word",
"embeddings",
"(",
"clwe",
")",
"underlie",
"many",
"multilingual",
"natural",
"language",
"processing",
"systems",
",",
"often",
"through",
"orthogonal",
"transformations",
"of",
"pre",
"-",
"trained",
"monolingual",
"embeddings",
... |
ACL | Stochastic Tokenization with a Language Model for Neural Text Classification | For unsegmented languages such as Japanese and Chinese, tokenization of a sentence has a significant impact on the performance of text classification. Sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word (or subword) representations for neur... | d6d29c18133a67e711bdc1af673ad347 | 2,019 | [
"for unsegmented languages such as japanese and chinese , tokenization of a sentence has a significant impact on the performance of text classification .",
"sentences are usually segmented with words or subwords by a morphological analyzer or byte pair encoding and then encoded with word ( or subword ) representa... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
21,
22
],
"text": "text classification",
"tokens": [
"text",
"classification"
]
}
],
"event_type": "ITT",
"trig... | [
"for",
"unsegmented",
"languages",
"such",
"as",
"japanese",
"and",
"chinese",
",",
"tokenization",
"of",
"a",
"sentence",
"has",
"a",
"significant",
"impact",
"on",
"the",
"performance",
"of",
"text",
"classification",
".",
"sentences",
"are",
"usually",
"segme... |
ACL | Slangvolution: A Causal Analysis of Semantic Change and Frequency Dynamics in Slang | Languages are continuously undergoing changes, and the mechanisms that underlie these changes are still a matter of debate. In this work, we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change, but how they causally affec... | a50039508746831717fe56f6d01c9a88 | 2,022 | [
"languages are continuously undergoing changes , and the mechanisms that underlie these changes are still a matter of debate .",
"in this work , we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change , but how they ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "FEA",
"offsets": [
0
],
"text": "languages",
"tokens": [
"languages"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
4
],
... | [
"languages",
"are",
"continuously",
"undergoing",
"changes",
",",
"and",
"the",
"mechanisms",
"that",
"underlie",
"these",
"changes",
"are",
"still",
"a",
"matter",
"of",
"debate",
".",
"in",
"this",
"work",
",",
"we",
"approach",
"language",
"evolution",
"thr... |
ACL | Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings | Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large data... | b8d19f18eae23347546671b786f63ae5 | 2,019 | [
"word embeddings typically represent different meanings of a word in a single conflated vector .",
"empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts .",
"we pre... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1
],
"text": "word embeddings",
"tokens": [
"word",
"embeddings"
]
}
],
"event_type": "ITT",
"trigger": {
... | [
"word",
"embeddings",
"typically",
"represent",
"different",
"meanings",
"of",
"a",
"word",
"in",
"a",
"single",
"conflated",
"vector",
".",
"empirical",
"analysis",
"of",
"embeddings",
"of",
"ambiguous",
"words",
"is",
"currently",
"limited",
"by",
"the",
"smal... |
ACL | Pretrained Transformers Improve Out-of-Distribution Robustness | Although pretrained Transformers such as BERT achieve high accuracy on in-distribution examples, do they generalize to new distributions? We systematically measure out-of-distribution (OOD) generalization for seven NLP datasets by constructing a new robustness benchmark with realistic distribution shifts. We measure th... | f6d253d2b4a15754f8fee9d463aa04a9 | 2,020 | [
"although pretrained transformers such as bert achieve high accuracy on in - distribution examples , do they generalize to new distributions ?",
"we systematically measure out - of - distribution ( ood ) generalization for seven nlp datasets by constructing a new robustness benchmark with realistic distribution s... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "MOD",
"offsets": [
1,
2
],
"text": "pretrained transformers",
"tokens": [
"pretrained",
"transformers"
]
}
],
"event_type": "ITT",
... | [
"although",
"pretrained",
"transformers",
"such",
"as",
"bert",
"achieve",
"high",
"accuracy",
"on",
"in",
"-",
"distribution",
"examples",
",",
"do",
"they",
"generalize",
"to",
"new",
"distributions",
"?",
"we",
"systematically",
"measure",
"out",
"-",
"of",
... |
ACL | An Analysis of the Utility of Explicit Negative Examples to Improve the Syntactic Abilities of Neural Language Models | We explore the utilities of explicit negative examples in training neural language models. Negative examples here are incorrect words in a sentence, such as barks in *The dogs barks. Neural language models are commonly trained only on positive examples, a set of sentences in the training data, but recent studies sugges... | fcf1dd5d0bee3ecda0fa252d8f2f6896 | 2,020 | [
"we explore the utilities of explicit negative examples in training neural language models .",
"negative examples here are incorrect words in a sentence , such as barks in * the dogs barks .",
"neural language models are commonly trained only on positive examples , a set of sentences in the training data , but ... | [
{
"arguments": [
{
"argument_type": "Researcher",
"nugget_type": "OG",
"offsets": [
0
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "TAK",
"offsets": [
... | [
"we",
"explore",
"the",
"utilities",
"of",
"explicit",
"negative",
"examples",
"in",
"training",
"neural",
"language",
"models",
".",
"negative",
"examples",
"here",
"are",
"incorrect",
"words",
"in",
"a",
"sentence",
",",
"such",
"as",
"barks",
"in",
"*",
"... |
ACL | Multi-Hop Paragraph Retrieval for Open-Domain Question Answering | This paper is concerned with the task of multi-hop open-domain Question Answering (QA). This task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching. We present a method for retrieving multiple supporting paragraphs, nested amidst a large knowledge ba... | 082dd70500903fcc805528f618b2cd13 | 2,019 | [
"this paper is concerned with the task of multi - hop open - domain question answering ( qa ) .",
"this task is particularly challenging since it requires the simultaneous performance of textual reasoning and efficient searching .",
"we present a method for retrieving multiple supporting paragraphs , nested ami... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
8,
9,
10,
11,
12,
13,
14,
15
],
"text": "multi - hop open - domain question answering",
"tokens":... | [
"this",
"paper",
"is",
"concerned",
"with",
"the",
"task",
"of",
"multi",
"-",
"hop",
"open",
"-",
"domain",
"question",
"answering",
"(",
"qa",
")",
".",
"this",
"task",
"is",
"particularly",
"challenging",
"since",
"it",
"requires",
"the",
"simultaneous",
... |
ACL | MMGCN: Multimodal Fusion via Deep Graph Convolution Network for Emotion Recognition in Conversation | Emotion recognition in conversation (ERC) is a crucial component in affective dialogue systems, which helps the system understand users’ emotions and generate empathetic responses. However, most works focus on modeling speaker and contextual information primarily on the textual modality or simply leveraging multimodal ... | 0d6fcfb11b3cb40b77d0061f7aac74e3 | 2,021 | [
"emotion recognition in conversation ( erc ) is a crucial component in affective dialogue systems , which helps the system understand users ’ emotions and generate empathetic responses .",
"however , most works focus on modeling speaker and contextual information primarily on the textual modality or simply levera... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3
],
"text": "emotion recognition in conversation",
"tokens": [
"emotion",
"recognition",
"in",
... | [
"emotion",
"recognition",
"in",
"conversation",
"(",
"erc",
")",
"is",
"a",
"crucial",
"component",
"in",
"affective",
"dialogue",
"systems",
",",
"which",
"helps",
"the",
"system",
"understand",
"users",
"’",
"emotions",
"and",
"generate",
"empathetic",
"respon... |
ACL | ASSET: A Dataset for Tuning and Evaluation of Sentence Simplification Models with Multiple Rewriting Transformations | In order to simplify a sentence, human editors perform multiple rewriting transformations: they split it into several shorter sentences, paraphrase words (i.e. replacing complex words or phrases by simpler synonyms), reorder components, and/or delete information deemed unnecessary. Despite these varied range of possibl... | d1ba159388f1335ea21e065dd0201511 | 2,020 | [
"in order to simplify a sentence , human editors perform multiple rewriting transformations : they split it into several shorter sentences , paraphrase words ( i . e . replacing complex words or phrases by simpler synonyms ) , reorder components , and / or delete information deemed unnecessary .",
"despite these ... | [
{
"arguments": [
{
"argument_type": "BaseComponent",
"nugget_type": "MOD",
"offsets": [
10,
11,
12
],
"text": "multiple rewriting transformations",
"tokens": [
"multiple",
"rewriting",
"transformati... | [
"in",
"order",
"to",
"simplify",
"a",
"sentence",
",",
"human",
"editors",
"perform",
"multiple",
"rewriting",
"transformations",
":",
"they",
"split",
"it",
"into",
"several",
"shorter",
"sentences",
",",
"paraphrase",
"words",
"(",
"i",
".",
"e",
".",
"rep... |
ACL | ConSERT: A Contrastive Framework for Self-Supervised Sentence Representation Transfer | Learning high-quality sentence representations benefits a wide range of natural language processing tasks. Though BERT-based pre-trained language models achieve high performance on many downstream tasks, the native derived sentence representations are proved to be collapsed and thus produce a poor performance on the se... | a4a3ea2eef957084ccb22132f29b6491 | 2,021 | [
"learning high - quality sentence representations benefits a wide range of natural language processing tasks .",
"though bert - based pre - trained language models achieve high performance on many downstream tasks , the native derived sentence representations are proved to be collapsed and thus produce a poor per... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "high - quality sentence representations",
"tokens": [
"high",
"-",
"qua... | [
"learning",
"high",
"-",
"quality",
"sentence",
"representations",
"benefits",
"a",
"wide",
"range",
"of",
"natural",
"language",
"processing",
"tasks",
".",
"though",
"bert",
"-",
"based",
"pre",
"-",
"trained",
"language",
"models",
"achieve",
"high",
"perform... |
ACL | Don’t Say That! Making Inconsistent Dialogue Unlikely with Unlikelihood Training | Generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address. They tend to produce generations that (i) rely too much on copying from the context, (ii) contain repetitions within utterances, (iii) overuse frequent words, and (iv) at a deeper level, co... | 77ac3b5405a06e6bfe38f1a2aae70d4c | 2,020 | [
"generative dialogue models currently suffer from a number of problems which standard maximum likelihood training does not address .",
"they tend to produce generations that ( i ) rely too much on copying from the context , ( ii ) contain repetitions within utterances , ( iii ) overuse frequent words , and ( iv )... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "generative dialogue models",
"tokens": [
"generative",
"dialogue",
"models"
]
}
... | [
"generative",
"dialogue",
"models",
"currently",
"suffer",
"from",
"a",
"number",
"of",
"problems",
"which",
"standard",
"maximum",
"likelihood",
"training",
"does",
"not",
"address",
".",
"they",
"tend",
"to",
"produce",
"generations",
"that",
"(",
"i",
")",
... |
ACL | It’s Easier to Translate out of English than into it: Measuring Neural Translation Difficulty by Cross-Mutual Information | The performance of neural machine translation systems is commonly evaluated in terms of BLEU. However, due to its reliance on target language properties and generation, the BLEU metric does not allow an assessment of which translation directions are more difficult to model. In this paper, we propose cross-mutual inform... | 8d3b76fb88d327a47637076aad29597f | 2,020 | [
"the performance of neural machine translation systems is commonly evaluated in terms of bleu .",
"however , due to its reliance on target language properties and generation , the bleu metric does not allow an assessment of which translation directions are more difficult to model .",
"in this paper , we propose... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
13
],
"text": "bleu",
"tokens": [
"bleu"
]
}
],
"event_type": "ITT",
"trigger": {
"offsets": [
9
],
"text... | [
"the",
"performance",
"of",
"neural",
"machine",
"translation",
"systems",
"is",
"commonly",
"evaluated",
"in",
"terms",
"of",
"bleu",
".",
"however",
",",
"due",
"to",
"its",
"reliance",
"on",
"target",
"language",
"properties",
"and",
"generation",
",",
"the... |
ACL | Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models | LSTM-based recurrent neural networks are the state-of-the-art for many natural language processing (NLP) tasks. Despite their performance, it is unclear whether, or how, LSTMs learn structural features of natural languages such as subject-verb number agreement in English. Lacking this understanding, the generality of L... | 16705d99c77ef2227291f2fce6e3e8b5 | 2,020 | [
"lstm - based recurrent neural networks are the state - of - the - art for many natural language processing ( nlp ) tasks .",
"despite their performance , it is unclear whether , or how , lstms learn structural features of natural languages such as subject - verb number agreement in english .",
"lacking this un... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
17,
18,
19
],
"text": "natural language processing",
"tokens": [
"natural",
"language",
"processing"
]
... | [
"lstm",
"-",
"based",
"recurrent",
"neural",
"networks",
"are",
"the",
"state",
"-",
"of",
"-",
"the",
"-",
"art",
"for",
"many",
"natural",
"language",
"processing",
"(",
"nlp",
")",
"tasks",
".",
"despite",
"their",
"performance",
",",
"it",
"is",
"unc... |
ACL | He said “who’s gonna take care of your children when you are at ACL?”: Reported Sexist Acts are Not Sexist | In a context of offensive content mediation on social media now regulated by European laws, it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman. We propose: (1) a new characteri... | cfb5b96ab6b77dba61d425558c28d923 | 2,020 | [
"in a context of offensive content mediation on social media now regulated by european laws , it is important not only to be able to automatically detect sexist content but also to identify if a message with a sexist content is really sexist or is a story of sexism experienced by a woman .",
"we propose : ( 1 ) a... | [
{
"arguments": [
{
"argument_type": "Proposer",
"nugget_type": "OG",
"offsets": [
54
],
"text": "we",
"tokens": [
"we"
]
},
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
... | [
"in",
"a",
"context",
"of",
"offensive",
"content",
"mediation",
"on",
"social",
"media",
"now",
"regulated",
"by",
"european",
"laws",
",",
"it",
"is",
"important",
"not",
"only",
"to",
"be",
"able",
"to",
"automatically",
"detect",
"sexist",
"content",
"bu... |
ACL | Boosting Neural Machine Translation with Similar Translations | This paper explores data augmentation methods for training Neural Machine Translation to make use of similar translations, in a comparable way a human translator employs fuzzy matches. In particular, we show how we can simply present the neural model with information of both source and target sides of the fuzzy matches... | 18fb4a983d773a8106e8dd9dd29776da | 2,020 | [
"this paper explores data augmentation methods for training neural machine translation to make use of similar translations , in a comparable way a human translator employs fuzzy matches .",
"in particular , we show how we can simply present the neural model with information of both source and target sides of the ... | [
{
"arguments": [
{
"argument_type": "Content",
"nugget_type": "APP",
"offsets": [
3,
4,
5
],
"text": "data augmentation methods",
"tokens": [
"data",
"augmentation",
"methods"
]
},
... | [
"this",
"paper",
"explores",
"data",
"augmentation",
"methods",
"for",
"training",
"neural",
"machine",
"translation",
"to",
"make",
"use",
"of",
"similar",
"translations",
",",
"in",
"a",
"comparable",
"way",
"a",
"human",
"translator",
"employs",
"fuzzy",
"mat... |
ACL | Using Automatically Extracted Minimum Spans to Disentangle Coreference Evaluation from Boundary Detection | The common practice in coreference resolution is to identify and evaluate the maximum span of mentions. The use of maximum spans tangles coreference evaluation with the challenges of mention boundary detection like prepositional phrase attachment. To address this problem, minimum spans are manually annotated in smaller... | 24fd2785f45d9c92068fc83f5b93a7a6 | 2,019 | [
"the common practice in coreference resolution is to identify and evaluate the maximum span of mentions .",
"the use of maximum spans tangles coreference evaluation with the challenges of mention boundary detection like prepositional phrase attachment .",
"to address this problem , minimum spans are manually an... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
4,
5
],
"text": "coreference resolution",
"tokens": [
"coreference",
"resolution"
]
}
],
"event_type": "ITT",
"... | [
"the",
"common",
"practice",
"in",
"coreference",
"resolution",
"is",
"to",
"identify",
"and",
"evaluate",
"the",
"maximum",
"span",
"of",
"mentions",
".",
"the",
"use",
"of",
"maximum",
"spans",
"tangles",
"coreference",
"evaluation",
"with",
"the",
"challenges... |
ACL | End-to-End Neural Pipeline for Goal-Oriented Dialogue Systems using GPT-2 | The goal-oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal. The traditional approach to build such a dialogue system is to take a pipelined modular architecture, where its modules are optimized individu... | 2404839b071bfa1cc48ff2742de35495 | 2,020 | [
"the goal - oriented dialogue system needs to be optimized for tracking the dialogue flow and carrying out an effective conversation under various situations to meet the user goal .",
"the traditional approach to build such a dialogue system is to take a pipelined modular architecture , where its modules are opti... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "goal - oriented dialogue system",
"tokens": [
"goal",
"-",
"oriented",
... | [
"the",
"goal",
"-",
"oriented",
"dialogue",
"system",
"needs",
"to",
"be",
"optimized",
"for",
"tracking",
"the",
"dialogue",
"flow",
"and",
"carrying",
"out",
"an",
"effective",
"conversation",
"under",
"various",
"situations",
"to",
"meet",
"the",
"user",
"g... |
ACL | Constructing Multi-Modal Dialogue Dataset by Replacing Text with Semantically Relevant Images | In multi-modal dialogue systems, it is important to allow the use of images as part of a multi-turn conversation. Training such dialogue systems generally requires a large-scale dataset consisting of multi-turn dialogues that involve images, but such datasets rarely exist. In response, this paper proposes a 45k multi-m... | 9af469cdb232e938e1b8894d6aa00b79 | 2,021 | [
"in multi - modal dialogue systems , it is important to allow the use of images as part of a multi - turn conversation .",
"training such dialogue systems generally requires a large - scale dataset consisting of multi - turn dialogues that involve images , but such datasets rarely exist .",
"in response , this ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
1,
2,
3,
4,
5
],
"text": "multi - modal dialogue systems",
"tokens": [
"multi",
"-",
"modal",
... | [
"in",
"multi",
"-",
"modal",
"dialogue",
"systems",
",",
"it",
"is",
"important",
"to",
"allow",
"the",
"use",
"of",
"images",
"as",
"part",
"of",
"a",
"multi",
"-",
"turn",
"conversation",
".",
"training",
"such",
"dialogue",
"systems",
"generally",
"requ... |
ACL | GlobalWoZ: Globalizing MultiWoZ to Develop Multilingual Task-Oriented Dialogue Systems | Over the last few years, there has been a move towards data curation for multilingual task-oriented dialogue (ToD) systems that can serve people speaking different languages. However, existing multilingual ToD datasets either have a limited coverage of languages due to the high cost of data curation, or ignore the fact... | 16ece471bfb74fda115cb8beec46dd7d | 2,022 | [
"over the last few years , there has been a move towards data curation for multilingual task - oriented dialogue ( tod ) systems that can serve people speaking different languages .",
"however , existing multilingual tod datasets either have a limited coverage of languages due to the high cost of data curation , ... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
15,
16,
17,
18,
19,
23
],
"text": "multilingual task - oriented dialogue ( tod ) systems",
"tokens": [
"mul... | [
"over",
"the",
"last",
"few",
"years",
",",
"there",
"has",
"been",
"a",
"move",
"towards",
"data",
"curation",
"for",
"multilingual",
"task",
"-",
"oriented",
"dialogue",
"(",
"tod",
")",
"systems",
"that",
"can",
"serve",
"people",
"speaking",
"different",... |
ACL | QuASE: Question-Answer Driven Sentence Encoding | Question-answering (QA) data often encodes essential information in many facets. This paper studies a natural question: Can we get supervision from QA data for other tasks (typically, non-QA ones)? For example, can we use QAMR (Michael et al., 2017) to improve named entity recognition? We suggest that simply further pr... | 711ed4b3a433a56a613515d5040038c3 | 2,020 | [
"question - answering ( qa ) data often encodes essential information in many facets .",
"this paper studies a natural question : can we get supervision from qa data for other tasks ( typically , non - qa ones ) ?",
"for example , can we use qamr ( michael et al . , 2017 ) to improve named entity recognition ?"... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2
],
"text": "question - answering",
"tokens": [
"question",
"-",
"answering"
]
}
],
"eve... | [
"question",
"-",
"answering",
"(",
"qa",
")",
"data",
"often",
"encodes",
"essential",
"information",
"in",
"many",
"facets",
".",
"this",
"paper",
"studies",
"a",
"natural",
"question",
":",
"can",
"we",
"get",
"supervision",
"from",
"qa",
"data",
"for",
... |
ACL | Can Unsupervised Knowledge Transfer from Social Discussions Help Argument Mining? | Identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining. The intrinsic complexity of these tasks demands powerful learning models. While pretrained Transformer-based Language Models (LM) have been shown to provide state-of-th... | 1c3c37fcf68d673153d9e3568b58b6d9 | 2,022 | [
"identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining .",
"the intrinsic complexity of these tasks demands powerful learning models .",
"while pretrained transformer - based language models ( lm ) have been shown t... | [
{
"arguments": [
{
"argument_type": "Target",
"nugget_type": "TAK",
"offsets": [
0,
1,
2,
3,
4,
5
],
"text": "identifying argument components from unstructured texts",
"tokens": [
"identif... | [
"identifying",
"argument",
"components",
"from",
"unstructured",
"texts",
"and",
"predicting",
"the",
"relationships",
"expressed",
"among",
"them",
"are",
"two",
"primary",
"steps",
"of",
"argument",
"mining",
".",
"the",
"intrinsic",
"complexity",
"of",
"these",
... |