---
output: github_document
---
<!-- badges: start -->
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![R build status](https://github.com/PolMine/LinkTools/workflows/R-CMD-check/badge.svg)](https://github.com/PolMine/LinkTools/actions)
[![codecov](https://codecov.io/gh/PolMine/LinkTools/branch/main/graph/badge.svg)](https://codecov.io/gh/PolMine/LinkTools/branch/main)
[![Lifecycle: experimental](https://img.shields.io/badge/lifecycle-experimental-red.svg)](https://www.tidyverse.org/lifecycle/#experimental)
<!-- badges: end -->
# LinkTools
## About LinkTools
### Motivation
`LinkTools` facilitates the linkage of datasets by providing accessible approaches to two tasks: Record Linkage and Entity Linking. It is developed as part of the measure [Linking Textual Data](https://www.konsortswd.de/en/konsortswd/the-consortium/services/linking-textual-data/) in [KonsortSWD](https://www.konsortswd.de/en/) within the [National Research Data Infrastructure Germany](https://www.nfdi.de/?lang=en) (NFDI) and aims to make linking textual data more accessible.
### Purpose
Once finalized, the package will integrate four steps:
* the preparation of the datasets to be linked, i.e. their transformation into a comparable format and the assignment of new information, in particular shared unique identifiers
* the merging of the datasets based on these identifiers
* the encoding or enrichment of the data in different output formats (data.table, XML or CWB)
* the Named Entity Linking of textual data via a wrapper around DBpedia Spotlight
A major focus of this package is an intuitive workflow that is both transparent and robust. Documentation, validity and user experience as well as training and education are at the heart of this development. Consequently, the processes of linkage and linking are designed to be as approachable as possible - using GUIs and feedback - as well as robust and repeatable.
## Current Status
The Record Linkage functionality is maturing and its current state is documented in the package vignette. Record Linkage with CWB corpora is implemented. Entity Linking using [DBpedia Spotlight](https://www.dbpedia-spotlight.org/) as the backend is under development.
## Core functionality of LinkTools
```{r load_LinkTools_package}
library(LinkTools)
```
### Record Linkage
The `LTDataset` class, an R6 class, is the main driver of the Record Linkage process within the `LinkTools` package. Aside from wrangling the textual data input, its core functionality is merging metadata found in text corpora with external datasets. To this end, a strict merge of exact matches is performed first. For observations that could not be joined directly, a fuzzy matching approach is available in which `LinkTools` suggests matches based on different string distance measures. At this stage, the suggestions can be inspected manually and missing values can be added.
The vignette shows this for larger data and a real external dataset. The following provides a short artificial example, assuming that these two resources should be linked:
### Text corpus
```{r load_polmineR_and_germaparlmini}
library(polmineR)
use("polmineR")
```
GermaParlMini, provided by the `polmineR` R package, is a sample of the larger GermaParl corpus of parliamentary debates. It contains metadata such as the name and party affiliation of each speaker. To show the capabilities of the tool, a small sample of this dataset is used.
```{r subset_germaparlmini}
germaparlmini_session <- polmineR::corpus("GERMAPARLMINI") |>
polmineR::subset(date == "2009-11-12")
```
The metadata used for linking looks as follows:
```{r retrieve_attributes, echo = FALSE}
speaker_s_attributes <- germaparlmini_session |>
polmineR::s_attributes(c("speaker", "party")) |>
data.table::setorder(speaker)
speaker_s_attributes |>
knitr::kable(format = "markdown")
```
### External Data
To show the (fuzzy) matching capabilities of the package, an artificial dataset is created on the spot by copying the speaker data from the textual data and introducing some deviation. It contains most of the speakers shown above along with the same metadata, plus a variable called "ID" which represents the additional information we want to add to the textual data. To showcase the fuzzy matching, this artificial dataset also contains some typos in the speaker names as well as a differently named column for them.
For a real dataset, please see the package vignette.
```{r make_artificial_dataset, echo = FALSE}
set.seed(24)
artificial_id_data <- data.table::copy(speaker_s_attributes)
# to show what happens if not all observations can be matched, remove one random
# row.
row_to_remove_idx <- sample(1:nrow(artificial_id_data), 1)
artificial_id_data <- artificial_id_data[-row_to_remove_idx, ]
# a single purpose function to mildly modify speaker names
scramble_name <- function(x) {
to_scramble <- sample(c(1, 2), 1)
# only potentially modify about every second name
if (to_scramble == 1) {
# separate first and last elements in name column
name_elements_split <- unlist(strsplit(x, " "))
last_name <- name_elements_split[length(name_elements_split)]
first_name <- name_elements_split[1:(length(name_elements_split) - 1)]
# split into characters
name_as_chars <- unlist(strsplit(last_name, ""))
# to scramble area in the middle of the name, select two characters in the
# middle
min <- length(name_as_chars) - floor(length(name_as_chars) / 2)
max <- min + 1
middle_sector <- name_as_chars[min:max]
middle_sector_scrambled <- sample(middle_sector, length(middle_sector))
# get the rest of the name; guard against very short names
start_sector <- if (min > 1) name_as_chars[1:(min - 1)] else NULL
if (max == length(name_as_chars)) {
end_sector <- NULL
} else {
end_sector <- name_as_chars[(max+1):length(name_as_chars)]
}
# paste again
last_name_mod <- paste(c(start_sector,
middle_sector_scrambled,
end_sector),
collapse = "")
name_scrambled <- paste(c(first_name, last_name_mod), collapse = " ")
return(name_scrambled)
} else {
return(x)
}
}
artificial_id_data[, speaker := scramble_name(speaker), by = seq_len(nrow(artificial_id_data))]
artificial_id_data[, id := sprintf("ID_%s", 1:nrow(artificial_id_data))]
data.table::setnames(artificial_id_data, old = "speaker", new = "name")
artificial_id_data |>
knitr::kable(format = "markdown")
```
### Step 1: Instantiating the `LTDataset` class
The `LTDataset` class is instantiated with the two resources and some additional information. The package vignette shows further arguments; also see `?LTDataset` for in-depth documentation.
```{r instantiate_LTDataset}
LTD <- LTDataset$new(textual_data = germaparlmini_session,
textual_data_type = "cwb",
external_resource = artificial_id_data,
attr_to_add = c("id_in_corpus" = "id"),
match_by = c("speaker" = "name",
"party" = "party"),
forced_encoding = "UTF-8")
```
### Step 2: Join strictly
With the shared attributes provided in the `match_by` argument, the two datasets are joined.
```{r join_data}
LTD$join_textual_and_external_data()
```
After this join, a data.table object can be created to inspect the results of the join directly. Rows in which the ID column is `NA` were not matched.
```{r create_region_datatable}
LTD$create_attribute_region_datatable()
```
```{r print_region_datatable}
LTD$attrs_by_region_dt |>
unique() |>
data.table::setorder(speaker) |>
knitr::kable(format = "markdown")
```
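To inspect only the unmatched observations programmatically, the region data.table can be filtered for `NA` values in the added ID column. This is a sketch, assuming the new attribute appears as the column `id_in_corpus` (the name assigned via `attr_to_add` above):

```{r filter_unmatched, eval = FALSE}
# keep only rows that did not receive an ID from the external dataset;
# assumes the new attribute is available as the column `id_in_corpus`
LTD$attrs_by_region_dt[is.na(id_in_corpus)] |>
  unique() |>
  knitr::kable(format = "markdown")
```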
### Step 3: Check and add missing values
If there are cases in which a match was not possible - for example because of slight differences between the two datasets - manual annotation supported by fuzzy matching is possible.
In an interactive process, realized with `shiny`, suggestions based on the fuzzy matching of attributes are provided and can be checked manually. Remaining missing attributes can be added to the metadata of the corpus.
```{r check_missing, eval = FALSE}
LTD$check_and_add_missing_values(modify = TRUE,
match_fuzzily_by = "speaker",
doc_dir = tempdir())
```
The screenshot below should serve as an illustration of this interactive element.
![Interactive and manual control of suggestions in LinkTools v0.0.1.9005](LinkTools_interactive_matching_gui_README.png)
A final functionality allows writing the new attribute back into the CWB corpus.
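Sketched below with an assumed method name - `encode_new_s_attribute()` is hypothetical and stands in for whatever the class actually exposes; consult `?LTDataset` for the documented method:

```{r write_back, eval = FALSE}
# hypothetical method name: writes the newly added attribute as a
# structural attribute into the CWB corpus
LTD$encode_new_s_attribute()
```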
## LinkTools and other Resources
The purpose of `LinkTools` is to provide an integrated user experience for anyone who wants to link textual data with other types of data. The package will thus handle different data types which are important in the realm of textual data, taking care of preprocessing and enriching the data in a robust and transparent manner. It provides access to different existing approaches to linking and linkage, lowering the barriers to making use of available resources.
Focusing on textual data, the code base of the [PolMine project](https://polmine.github.io/) is at the core of `LinkTools`. In particular, the R package [polmineR](https://cran.r-project.org/package=polmineR) is used to manage large CWB corpora. In the future, a broader range of input formats will be covered. The strict merging in the Record Linkage workflow is mainly realized with [data.table](https://cran.r-project.org/package=data.table). The fuzzy - or probabilistic - joins are mainly realized with the R packages [fuzzyjoin](https://cran.r-project.org/package=fuzzyjoin) and [stringdist](https://cran.r-project.org/package=stringdist).
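As a minimal illustration of the idea behind the fuzzy joins, `stringdist` quantifies how far apart two strings are; a small distance between a corpus name and an external name makes the pair a plausible match candidate. The names below are invented for illustration:

```{r stringdist_example, eval = FALSE}
library(stringdist)

# a single transposition of adjacent characters: optimal string
# alignment (the default method "osa") counts this as distance 1
stringdist("Volker Kauder", "Volker Kuader", method = "osa")

# an unrelated name yields a much larger distance and would not be
# suggested as a match candidate
stringdist("Volker Kauder", "Angela Merkel", method = "osa")
```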
## Dependencies
Running the vignette requires the availability of the GermaParlMini CWB corpus and the `btmp` data package which provides the external data for linking.
## Installation
```{r install_from_github, eval = FALSE}
remotes::install_github("PolMine/LinkTools")
```