A Context-aware Visual Attention-based training pipeline for Object Detection from a Webpage screenshot!

CoVA: Context-aware Visual Attention for Webpage Information Extraction

Abstract

Webpage information extraction (WIE) is an important step to create knowledge bases. For this, classical WIE methods leverage the Document Object Model (DOM) tree of a website. However, use of the DOM tree poses significant challenges as context and appearance are encoded in an abstract manner. To address this challenge, we propose to reformulate WIE as a context-aware Webpage Object Detection task. Specifically, we develop a Context-aware Visual Attention-based (CoVA) detection pipeline which combines appearance features with syntactical structure from the DOM tree. To study the approach, we collect a new large-scale dataset of e-commerce websites for which we manually annotate every web element with four labels: product price, product title, product image, and others. On this dataset we show that the proposed CoVA approach is a new challenging baseline which improves upon prior state-of-the-art methods.

In Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5), Association for Computational Linguistics, 2022. Paper, Slides, Video Presentation

CoVA Dataset

We labeled 7,740 webpages spanning 408 domains (Amazon, Walmart, Target, etc.). Each of these webpages contains exactly one labeled price, title, and image. All other web elements are labeled as background. On average, a webpage contains 90 web elements.

Webpage screenshots and bounding boxes can be obtained here.

Train-Val-Test split

We create a cross-domain split which ensures that each of the train, val, and test sets contains webpages from different domains. Specifically, we construct a 3 : 1 : 1 split based on the number of distinct domains. The top-5 domains by number of samples were Amazon, eBay, Walmart, Etsy, and Target, so we created 5 different splits for 5-fold cross validation such that each of these major domains appears in the test data of one of the 5 splits. These splits can be accessed here.
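
As an illustration, a domain-grouped split can be produced with scikit-learn's GroupKFold, which keeps every domain entirely inside a single fold. This is a minimal sketch with toy data; the page and domain names are illustrative, and the repository provides its own precomputed splits.

    # A minimal sketch of a cross-domain 5-fold split using scikit-learn.
    # The toy pages/domains are illustrative, not the actual dataset.
    from sklearn.model_selection import GroupKFold

    pages = ["amazon_1", "amazon_2", "ebay_1", "ebay_2", "walmart_1",
             "walmart_2", "etsy_1", "etsy_2", "target_1", "target_2"]
    domains = ["amazon.com", "amazon.com", "ebay.com", "ebay.com", "walmart.com",
               "walmart.com", "etsy.com", "etsy.com", "target.com", "target.com"]

    # GroupKFold never places the same domain in both train and test, so each
    # test fold contains only web templates unseen during training.
    gkf = GroupKFold(n_splits=5)
    for fold, (train_idx, test_idx) in enumerate(gkf.split(pages, groups=domains)):
        print(f"fold {fold}: test domains = {sorted({domains[i] for i in test_idx})}")
        # A val set (for the 3 : 1 : 1 ratio) can be carved out of train_idx
        # the same way, again grouping by domain.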

CoVA End-to-end Training Pipeline

Our Context-Aware Visual Attention-based end-to-end pipeline for Webpage Object Detection (CoVA) aims to learn a function f to predict labels y = [y1, y2, ..., yN] for a webpage containing N web elements. The input to CoVA consists of:

  1. a screenshot of a webpage,
  2. list of bounding boxes [x, y, w, h] of the web elements, and
  3. neighborhood information for each element obtained from the DOM tree.
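
Concretely, these inputs can be pictured as tensors like the following. This is a minimal sketch assuming PyTorch; the shapes are illustrative, and the box-center-distance neighbor extraction is only a self-contained stand-in for the DOM-derived neighborhoods CoVA actually uses.

    # Illustrative shapes for the three inputs, assuming PyTorch.
    import torch

    N, K = 90, 10                              # elements per page, neighbors per element
    screenshot = torch.rand(1, 3, 1280, 1280)  # (batch, channels, height, width)
    boxes = torch.rand(N, 4) * 1280            # one [x, y, w, h] box per web element

    # Stand-in neighbor extraction: K nearest elements by box-center distance.
    # CoVA obtains neighborhoods from the DOM tree instead.
    centers = boxes[:, :2] + boxes[:, 2:] / 2         # box centers, (N, 2)
    dists = torch.cdist(centers, centers)             # pairwise distances, (N, N)
    dists.fill_diagonal_(float("inf"))                # an element is not its own neighbor
    neighbors = dists.topk(K, largest=False).indices  # K nearest indices, (N, K)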

This information is processed in four stages:

  1. the graph representation extraction for the webpage,
  2. the Representation Network (RN),
  3. the Graph Attention Network (GAT), and
  4. a fully connected (FC) layer.

The graph representation extraction computes, for every web element i, its set of K neighboring web elements Ni. The RN consists of a Convolutional Neural Network (CNN) and a positional encoder that learn a visual representation vi for each web element i ∈ {1, ..., N}. The GAT combines the visual representation vi of the web element i to be classified with those of its neighbors, i.e., vk ∀k ∈ Ni, to compute the contextual representation ci for web element i. Finally, the visual and contextual representations of the web element are concatenated and passed through the FC layer to obtain the classification output.
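
Stages 2-4 can be sketched compactly as below, reusing the screenshot, boxes, and neighbors tensors from the earlier sketch (stage 1, the neighbor extraction, appears there). The backbone choice, layer sizes, single-head attention scorer, and module names are assumptions for illustration, not the repository's actual implementation.

    # A compact sketch of the RN, GAT, and FC stages, assuming PyTorch/torchvision.
    import torch
    import torch.nn as nn
    from torchvision.models import resnet18
    from torchvision.ops import RoIAlign

    class RepresentationNetwork(nn.Module):
        """Stage 2: CNN + positional encoder -> visual representation v_i."""
        def __init__(self, d=256):
            super().__init__()
            self.backbone = nn.Sequential(*list(resnet18(weights=None).children())[:-2])
            self.roi = RoIAlign(output_size=3, spatial_scale=1 / 32, sampling_ratio=2)
            self.pos = nn.Linear(4, 32)                  # encode [x, y, w, h]
            self.proj = nn.Linear(512 * 3 * 3 + 32, d)

        def forward(self, screenshot, boxes):
            # RoIAlign expects corner format plus a leading batch-index column.
            xyxy = torch.cat([boxes[:, :2], boxes[:, :2] + boxes[:, 2:]], dim=1)
            rois = torch.cat([torch.zeros(len(boxes), 1), xyxy], dim=1)
            feat = self.roi(self.backbone(screenshot), rois).flatten(1)   # (N, 4608)
            return self.proj(torch.cat([feat, self.pos(boxes)], dim=1))  # (N, d)

    class GraphAttention(nn.Module):
        """Stage 3: attention over each element's K neighbors -> context c_i."""
        def __init__(self, d=256):
            super().__init__()
            self.score = nn.Linear(2 * d, 1)             # single-head scorer for brevity

        def forward(self, v, neighbors):
            v_nb = v[neighbors]                          # neighbor reps, (N, K, d)
            v_self = v.unsqueeze(1).expand_as(v_nb)      # each element repeated K times
            attn = torch.softmax(self.score(torch.cat([v_self, v_nb], dim=-1)), dim=1)
            return (attn * v_nb).sum(dim=1)              # weighted context, (N, d)

    class CoVA(nn.Module):
        def __init__(self, d=256, num_classes=4):        # price, title, image, background
            super().__init__()
            self.rn = RepresentationNetwork(d)
            self.gat = GraphAttention(d)
            self.fc = nn.Linear(2 * d, num_classes)      # stage 4: FC over [v_i ; c_i]

        def forward(self, screenshot, boxes, neighbors):
            v = self.rn(screenshot, boxes)               # visual representations
            c = self.gat(v, neighbors)                   # contextual representations
            return self.fc(torch.cat([v, c], dim=1))     # per-element logits, (N, num_classes)

    logits = CoVA()(screenshot, boxes, neighbors)        # usage with the toy inputs above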

[Figure: The CoVA end-to-end pipeline]

Experimental Results

[Table of comparison: cross-domain accuracy (mean ± standard deviation) for 5-fold cross validation]

NOTE: Cross-domain means we train the model on some web domains and test it on completely different domains, to evaluate the generalizability of the models to unseen web templates.

Attention Visualizations!

[Figure: Attention visualizations] The red border denotes the web element to be classified, and its context elements are shaded green with intensity proportional to the attention score. The price in (a) receives a much higher score than the other context elements. For the price in (b), the title and image are scored higher than the other context elements.

Cite

If you find this useful in your research, please cite our ACL 2022 paper:

@inproceedings{kumar-etal-2022-cova,
    title = "{C}o{VA}: Context-aware Visual Attention for Webpage Information Extraction",
    author = "Kumar, Anurendra  and
      Morabia, Keval  and
      Wang, William  and
      Chang, Kevin  and
      Schwing, Alex",
    booktitle = "Proceedings of The Fifth Workshop on e-Commerce and NLP (ECNLP 5)",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.ecnlp-1.11",
    pages = "80--90",
    abstract = "Webpage information extraction (WIE) is an important step to create knowledge bases. For this, classical WIE methods leverage the Document Object Model (DOM) tree of a website. However, use of the DOM tree poses significant challenges as context and appearance are encoded in an abstract manner. To address this challenge we propose to reformulate WIE as a context-aware Webpage Object Detection task. Specifically, we develop a Context-aware Visual Attention-based (CoVA) detection pipeline which combines appearance features with syntactical structure from the DOM tree. To study the approach we collect a new large-scale datase of e-commerce websites for which we manually annotate every web element with four labels: product price, product title, product image and others. On this dataset we show that the proposed CoVA approach is a new challenging baseline which improves upon prior state-of-the-art methods.",
}