# NLP - Question Answering

| Paper | Venue | Remarks |
| --- | --- | --- |
| UnitedQA - A Hybrid Approach for Open Domain Question Answering | arXiv 2020 | 1. Apply novel techniques to enhance both extractive and generative readers built upon recent pretrained neural language models, and find that proper training methods can provide large improvements over previous state-of-the-art models. 2. Demonstrate that a simple hybrid approach that combines answers from both readers can efficiently take advantage of extractive and generative answer-inference strategies, and outperforms single models as well as homogeneous ensembles. |
| Unified Open-Domain Question Answering with Structured and Unstructured Knowledge | arXiv 2020 | 1. Study open-domain question answering (ODQA) with structured, unstructured, and semi-structured knowledge sources, including text, tables, lists, and knowledge bases. 2. The proposed approach homogenizes all sources by reducing them to text, and applies recent, powerful retriever-reader models which have so far been limited to text sources only. 3. Find that combining sources always helps, even for datasets that target a single source by construction. |
| Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering | arXiv 2020 | 1. Investigate how much generative readers can benefit from retrieving text passages that potentially contain evidence. 2. Observe that the performance of this method improves significantly as the number of retrieved passages increases. |
| Generation-Augmented Retrieval for Open-domain Question Answering | arXiv 2020 | 1. Present Generation-Augmented Retrieval (GAR), a query expansion method that augments a query with relevant contexts through text generation. 2. Demonstrate on open-domain question answering that the generated contexts significantly enrich the semantics of the queries, so GAR with sparse representations (BM25) achieves comparable or better performance than state-of-the-art dense methods such as DPR. 3. GAR can be easily combined with DPR to achieve even better performance. 4. Show that GAR achieves state-of-the-art performance on the Natural Questions and TriviaQA datasets under the extractive setting when equipped with an extractive reader, and consistently outperforms other retrieval methods when the same generative reader is used. |
| Learning Dense Representations of Phrases at Scale | arXiv 2020 | 1. Show for the first time that dense phrase representations alone can achieve much stronger performance in open-domain QA. 2. Propose (1) learning query-agnostic phrase representations via question generation and distillation; (2) novel negative-sampling methods for global normalization; (3) query-side fine-tuning for transfer learning. 3. The proposed DensePhrases improves previous phrase-retrieval models by 15%-25% absolute accuracy and matches the performance of state-of-the-art retriever-reader models. 4. The proposed model is easy to parallelize thanks to its purely dense representations, and processes more than 10 questions per second on CPUs. |
| NeurIPS 2020 EfficientQA Competition | NeurIPS 2020 | 1. Review the EfficientQA competition from NeurIPS 2020. 2. Describe the motivation and organization of the competition, review the best submissions, and analyze system predictions to inform a discussion of evaluation for open-domain QA. |
| Dense Passage Retrieval for Open-Domain Question Answering | EMNLP 2020 | 1. Show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. 2. The proposed dense retriever outperforms a strong Lucene-BM25 system by a large margin of 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps the end-to-end QA system establish new state-of-the-art results on multiple open-domain QA benchmarks. |
| Text-based Question Answering from Information Retrieval and Deep Neural Network Perspectives - A Survey | arXiv 2020 | 1. Provide a comprehensive overview of different models proposed for the QA task, covering both the traditional information retrieval perspective and the more recent deep neural network perspective. 2. Introduce well-known datasets for the task and present available results from the literature to enable a comparison between different techniques. |
| A Survey on Complex Question Answering over Knowledge Base - Recent Advances and Challenges | arXiv 2020 | 1. Introduce recent advances in complex QA, where, besides traditional methods relying on templates and rules, research is categorized into a taxonomy with two main branches: Information Retrieval-based and Neural Semantic Parsing-based methods. 2. Analyze directions for future research and introduce the models proposed by the Alime team. |
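The dual-encoder retrieval idea behind DPR and DensePhrases — embed the question and each passage independently, then rank passages by inner product — can be sketched as follows. This is a minimal toy, not the papers' method: the `encode` function here is a hand-rolled bag-of-words stand-in for a trained BERT encoder, and the vocabulary and passages are made up for illustration.

```python
# Minimal sketch of dual-encoder dense retrieval (DPR-style).
# ASSUMPTION: `encode` is a toy bag-of-words "encoder" over a tiny fixed
# vocabulary; a real system uses a trained neural encoder for each side.
from collections import Counter
import math

VOCAB = ["capital", "france", "paris", "python", "language", "bm25"]

def encode(text):
    """Map text to a normalized vector (stand-in for a neural encoder)."""
    counts = Counter(text.lower().split())
    vec = [counts[w] for w in VOCAB]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(question, passages, k=1):
    """Rank passages by inner product with the question embedding,
    mirroring DPR's maximum inner product search step."""
    q = encode(question)
    scored = [(sum(a * b for a, b in zip(q, encode(p))), p) for p in passages]
    scored.sort(reverse=True)
    return [p for _, p in scored[:k]]

passages = [
    "paris is the capital of france",
    "python is a programming language",
]
print(retrieve("what is the capital of france", passages))
# → ['paris is the capital of france']
```

In practice the passage embeddings are precomputed and indexed (e.g. with FAISS) so that only the question is encoded at query time, which is what makes dense retrieval practical at corpus scale.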
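GAR's query-expansion step can likewise be illustrated with a sparse retriever: append generated context to the query, then score with BM25. In this sketch the "generated" context is a hard-coded string (GAR actually produces it with a seq2seq model) and the BM25 implementation is a plain textbook version, not Lucene's.

```python
# Illustrative sketch of Generation-Augmented Retrieval (GAR):
# expand the query with generated text, then run sparse (BM25) retrieval.
# ASSUMPTION: the "generated" context is hard-coded here; GAR generates it
# with a seq2seq model, and production systems use Lucene's BM25.
import math
from collections import Counter

def bm25_scores(query_tokens, docs, k1=1.5, b=0.75):
    """Plain BM25 over whitespace-tokenized documents."""
    N = len(docs)
    doc_tokens = [d.lower().split() for d in docs]
    avgdl = sum(len(t) for t in doc_tokens) / N
    df = Counter()
    for toks in doc_tokens:
        for w in set(toks):
            df[w] += 1
    scores = []
    for toks in doc_tokens:
        tf = Counter(toks)
        s = 0.0
        for w in query_tokens:
            if df[w] == 0:
                continue
            idf = math.log(1 + (N - df[w] + 0.5) / (df[w] + 0.5))
            s += idf * tf[w] * (k1 + 1) / (
                tf[w] + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the eiffel tower is a landmark in paris france",
    "bm25 is a sparse ranking function used in lucene",
]
query = "where is the eiffel tower"
generated_context = "the eiffel tower is located in paris"  # toy stand-in
expanded = (query + " " + generated_context).lower().split()
scores = bm25_scores(expanded, docs)
best = docs[max(range(len(docs)), key=lambda i: scores[i])]
print(best)
# → the eiffel tower is a landmark in paris france
```

The point of the expansion is visible even in this toy: terms like "paris" appear only in the generated context, not in the original query, yet they contribute matching signal to the sparse retriever — which is why GAR with BM25 can rival dense methods.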

Back to index