
(not a bug) question about bert create_pretraining_data.tokenize_lines() #1592

Open
kiukchung opened this issue Aug 30, 2022 · 1 comment
Comments

@kiukchung

Description

In the function scripts.pretraining.bert.create_pretraining_data.tokenize_lines(), the following code snippet:

for line in lines:
    if not line:
        break
    line = line.strip()
    # Empty lines are used as document delimiters
    if not line:
        results.append([])
    else:
        # <OMITTED FOR BREVITY...>
return results

suggests that empty or null lines (e.g. "" or None) break the for loop, returning only the lines processed so far, whereas lines that are empty after stripping (e.g. " ") are used as document delimiters.

Could someone shed light on what the (empty line + break-from-loop) behavior is meant to accomplish? Are empty/null lines used as delimiters?
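
For concreteness, the distinction I am asking about comes down to Python truthiness before vs. after strip() (plain Python below, not code from the script):

line = ""                  # null/empty line, e.g. end of input
print(not line)            # True  -> the first "if not line" fires and the loop breaks

line = " \n"               # whitespace-only line
print(not line)            # False -> the loop keeps going
print(not line.strip())    # True  -> the second "if not line" fires, results.append([])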

kiukchung added the bug label on Aug 30, 2022
@szhengac
Member

This requires the text file to follow a format where documents are separated by empty lines. The following example consists of 2 documents, where each line is treated as a sentence in a document.

We use wikipedia corpus.
Wikipedia is great.

We also use bookcorpus.
Bookcorpus is also helpful.

So when the tokenize_lines function reaches the end of the file, the first if not line is triggered and we break out of the loop. When the function reads the empty line between two documents, line is '\n', so after line.strip() the second if not line is triggered and an empty list is appended as a document delimiter.
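
For anyone else reading, here is a minimal, self-contained sketch of that control flow (the real tokenizer is replaced with a simple whitespace split, and the input list mimics readline()-style reads where EOF yields ''; both are assumptions for illustration only):

def tokenize_lines_sketch(lines):
    # Mirrors the control flow of tokenize_lines:
    # '' ends the loop, a whitespace-only line becomes a document delimiter.
    results = []
    for line in lines:
        if not line:
            break                         # end-of-input sentinel ('' at EOF)
        line = line.strip()
        if not line:
            results.append([])            # blank line separates documents
        else:
            results.append(line.split())  # stand-in for the real tokenizer
    return results

# Lines as readline() would return them for the two-document example above.
lines = [
    "We use wikipedia corpus.\n",
    "Wikipedia is great.\n",
    "\n",                                 # blank line between the two documents
    "We also use bookcorpus.\n",
    "Bookcorpus is also helpful.\n",
    "",                                   # end of file
]
print(tokenize_lines_sketch(lines))
# [['We', 'use', 'wikipedia', 'corpus.'], ['Wikipedia', 'is', 'great.'], [],
#  ['We', 'also', 'use', 'bookcorpus.'], ['Bookcorpus', 'is', 'also', 'helpful.']]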
