---
output:
md_document:
variant: markdown_github
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, echo = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "README-"
)
```
# R Wrapper for the libFM Executable
This package provides a rough interface to [libFM](http://www.libfm.org/) from [R](https://www.r-project.org/) by calling the libFM executable. It does this in two ways:

* `model_frame_libFM()`, `sp_matrix_libFM()`, and `matrix_libFM()` convert data into the libFM (also LIBSVM) text format
    + `model_frame_libFM()` is very fast when your data consists of factors with many levels
    + `sp_matrix_libFM()` is very fast when your data is a sparse matrix
* `libFM()` calls the libFM executable with your data and returns the resulting predictions
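To make the format concrete, here is a small hand-rolled sketch in base R (illustration only, not the package's code) of what the libFM/LIBSVM text format looks like: each row becomes the target value followed by `index:value` pairs for the non-zero one-hot features. Whether indices start at 0 or 1 is a detail the conversion functions handle for you; this sketch uses 1-based indices.

```r
# Minimal base-R sketch of the libFM / LIBSVM text format
# (model_frame_libFM() does this efficiently at scale).
dat <- data.frame(
  Rating = c(5, 3),
  User   = factor(c("u1", "u2"), levels = c("u1", "u2")),
  Movie  = factor(c("m2", "m1"), levels = c("m1", "m2"))
)
n_users <- nlevels(dat$User)
lines <- vapply(seq_len(nrow(dat)), function(i) {
  # one-hot indices: users occupy columns 1..n_users, movies follow
  u <- as.integer(dat$User[i])
  m <- n_users + as.integer(dat$Movie[i])
  sprintf("%s %d:1 %d:1", dat$Rating[i], u, m)
}, character(1))
lines
#> [1] "5 1:1 4:1" "3 2:1 3:1"
```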
## Installing
### Installing libFM executable
#### Windows
First, you will need to [download the libFM windows executable](http://www.libfm.org/#download) and save it on your computer, for example in `C:\libFM`.
Then, it is recommended that you add the directory where you saved libFM to your system path. [This webpage](https://msdn.microsoft.com/en-us/library/office/ee537574(v=office.14).aspx) provides one way to do that. If you do not or cannot do that, you can pass the directory as the `exe_loc` argument to the `libFM()` function, for example `libFM(..., exe_loc = "C:\\libFM")`.
#### Mac / Linux
First, you will need to [download the libFM C++ source code](http://www.libfm.org/#download) and install it on your computer, for example in `/usr/local/share/libFM/bin`. You can also download the development code from Steffen Rendle's [github repository](https://github.com/srendle/libfm).
Then, it is recommended that you add the directory where you installed libFM to your system path. [This webpage](http://architectryan.com/2012/10/02/add-to-the-path-on-mac-os-x-mountain-lion/) worked for me. If you do not or cannot do that, you can pass the directory as the `exe_loc` argument to the `libFM()` function, for example `libFM(..., exe_loc = "/usr/local/share/libFM/bin")`.
#### Debugging
You can verify that the path contains the libFM directory by running `Sys.getenv("PATH")`. Verify that the program works by running `system("libFM -help")`.
### Installing libFMexe R package
With the `devtools` package, you can install this package by running the following.
```r
# install.packages("devtools")
devtools::install_github("andland/libFMexe")
```
## Using libFMexe
libFM does well with categorical variables that have many levels. The canonical example is collaborative filtering, where there is one categorical variable for the users and one for the items. It also works well with sparse data.

The main advantage of this package is being able to try many different models quickly: for example, different combinations of variables, different values of `dim` for the two-way interactions, or different values of `init_stdev`. In fact, you can use `cv_libFM()` to select `dim` with cross validation.

As with the defaults of libFM, I suggest using `method = "mcmc"` because it automatically integrates over the regularization parameters, so you do not have to tune them by hand. You can also use the `grouping` argument to group the regularization parameters by, for example, users and items.
### Collaborative Filtering Example
Using the MovieLens 100K data, we can predict what ratings users will give to movies.
```{r timing1, echo=FALSE}
ptm <- proc.time()
```
```{r libFM}
library(libFMexe)
data(movie_lens)
set.seed(1)
train_rows = sample.int(nrow(movie_lens), nrow(movie_lens) * 2 / 3)
train = movie_lens[train_rows, ]
test = movie_lens[-train_rows, ]
predFM = libFM(train, test, Rating ~ User + Movie,
               task = "r", dim = 10, iter = 500)
mean((predFM - test$Rating)^2)
```
```{r timing2, echo=FALSE}
elapsed = proc.time() - ptm
```
This gives a mean squared error of `r mean((predFM - test$Rating)^2)` with dimension 10. This is also very quick. It only took `r elapsed[["elapsed"]]` seconds to convert the data to libFM format and run libFM for 500 iterations.
We can compare this to something simpler, such as ridge regression. Ridge regression cannot model interactions of users and movies, because each user-movie interaction is observed at most once.
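The "observed at most once" point can be checked directly on a toy data set (hypothetical mini example, not the MovieLens data): every user-movie pair occurs a single time, so a dummy variable for a specific interaction would be fit from just one observation.

```r
# Toy check: each user-movie pair occurs once, so a User:Movie
# interaction dummy has at most one observation behind it.
ratings <- data.frame(
  User  = c("u1", "u1", "u2", "u3"),
  Movie = c("m1", "m2", "m1", "m2")
)
max(table(interaction(ratings$User, ratings$Movie, drop = TRUE)))
#> [1] 1
```

Factorization machines sidestep this by giving each user and each movie a low-dimensional latent vector, so an interaction is estimated from all of that user's and that movie's ratings rather than from the single pair.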
```{r timing_r1, echo=FALSE}
ptm <- proc.time()
```
```{r glmnet}
suppressPackageStartupMessages(library(glmnet))
spmat = sparse.model.matrix(Rating ~ User + Movie, data = movie_lens)
trainsp = spmat[train_rows, ]
testsp = spmat[-train_rows, ]
mod = cv.glmnet(x = trainsp, y = movie_lens$Rating[train_rows], alpha = 0)
predRR = predict(mod, testsp, s = "lambda.min")
mean((predRR - test$Rating)^2)
```
```{r timing_r2, echo=FALSE}
elapsed = proc.time() - ptm
```
Ridge regression gives a mean squared error of `r mean((predRR - test$Rating)^2)`. To compare timing, `glmnet` took `r elapsed[["elapsed"]]` seconds to run without any factorization.
For comparison, we can run libFM with `dim = 0`, which is basically the same as ridge regression.
```{r timing0_1, echo=FALSE}
ptm <- proc.time()
```
```{r libFMRR}
predFM_RR = libFM(train, test, Rating ~ User + Movie,
                  task = "r", dim = 0, iter = 100)
mean((predFM_RR - test$Rating)^2)
```
```{r timing0_2, echo=FALSE}
elapsed = proc.time() - ptm
```
This gives a mean squared error of `r mean((predFM_RR - test$Rating)^2)`, nearly the same as ridge regression. Also, it only took `r elapsed[["elapsed"]]` seconds to convert the data and run 100 iterations.
In the above, I chose `dim = 10` arbitrarily. We can use cross validation to select the dimension with the lowest mean squared error.
```{r cv_libFM}
mses = cv_libFM(train, Rating ~ User + Movie,
                task = "r", dims = seq(0, 20, by = 5), iter = 500)
mses
```
According to cross validation, we should use a dimension of `r rownames(mses)[which.min(mses)]`.
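The selection step is just picking the row with the smallest MSE. On a mock result matrix (shape assumed from the `rownames(mses)[which.min(mses)]` call above, with illustrative MSE values) it looks like this:

```r
# Mock cross-validation output: one row per candidate dimension
# (illustrative numbers, not real results).
mses <- matrix(c(0.95, 0.89, 0.87, 0.88, 0.90), ncol = 1,
               dimnames = list(c("0", "5", "10", "15", "20"), "MSE"))
best_dim <- rownames(mses)[which.min(mses)]
best_dim
#> [1] "10"
```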
## License
I have licensed this code under GPL-3, the same license as the [source code for libFM](https://github.com/srendle/libfm). Note that if you downloaded the executable or source code from [libfm.org](http://libfm.org/), it is licensed for non-commercial use only.
## Improving this package
The [C++ source code](https://github.com/srendle/libfm) for libFM is available. Let me know if you would like to help build an implementation that calls the C++ code directly rather than the executable.