# Use group_by with freq_table #40
What should happen when two variables are passed to `freq_table()`? Here, `cyl` is the outcome variable of interest.

Now, `cyl` within levels of `am`:

The code above gives the result we want. However, passing both variables directly to `freq_table()` works too.

If it didn't, what would we want it to return instead? A list of one-way tables? The cleanest thing to do for now is to change the current behavior. In the future, we may want multiple vars passed to `freq_table()` (see the n-way table discussion in #22).
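For reference, the calls being compared look roughly like this. This is a sketch using the usual `mtcars` example; the `group_by()` form assumes the `iss-40-group-by` branch behavior, since grouped-tibble support was removed from the CRAN version in #1.

```r
library(dplyr)
library(freqtables)

# One-way table: cyl is the outcome variable of interest
mtcars %>%
  freq_table(cyl)

# cyl within levels of am, using group_by()
mtcars %>%
  group_by(am) %>%
  freq_table(cyl)

# However, this works too: passing both variables to freq_table()
mtcars %>%
  freq_table(am, cyl)
```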
## Overview
Previously, in #1 I removed the ability to use a grouped tibble with `freq_table()`. Now, I'm finding that using `group_by()` might be the most dplyr way to do things. Remember, `freq_table()` is intended to be integrated into a dplyr pipeline. Additionally, using `group_by()` might help with issue #9 in that `group_var_1`, `group_var_2`, etc. would naturally flow from the variables added to `group_by()`.
Adding multiple var names to `group_by()` could result in multiple tables rather than being used as grouping variables. (Nah, I don't think I like this idea.)

Passing one var name to `freq_table()` should still produce a one-way frequency table. In other words, you shouldn't need to use `group_by()` to produce a one-way frequency table.
It turns out that just removing the relevant line from the `freq_table()` code will make it so that `group_by()` works again. All the stats still work too. The only issue that I can see is that the `group_by()` version and the two-variable version now return the exact same result. I'm not sure if that's good or not. I guess one problem is that it makes it harder to rename the output columns as described in #9 (i.e., group and outcome). Does it though? Need to think more about this.
One good thing is that we don't have to worry about previous groupings messing up the groups we expect when using `group_by()` with `freq_table()`. According to the `group_by()` documentation, applying `group_by()` to an already grouped dataset will overwrite the existing grouping variables.
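A quick sketch of that overwrite behavior (dplyr's default is `.add = FALSE`, so a second `group_by()` replaces any existing grouping):

```r
library(dplyr)

df <- mtcars %>% group_by(vs)

# Applying group_by() again overwrites the existing grouping
df %>% group_by(am) %>% group_vars()
#> [1] "am"

# Use .add = TRUE to keep the existing grouping variables instead
df %>% group_by(am, .add = TRUE) %>% group_vars()
#> [1] "vs" "am"
```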
## Left off at

### 2023-03-17
Working through the stuff below in test.Rmd. Decided to create some test files that I can use to compare freqtables to Stata and SAS. The specifics are outlined in #22.
### 2022-07-31
Trying to decide if I want to soft deprecate or hard deprecate the `...` argument in `freq_table()`. In the `iss-40-group-by` branch, I have four different versions of the `freq_table()` function:

- `freq_table()`: The current CRAN version of the function.
- `freq_table_v2()`: In this version, I'm soft deprecating `...`. It still works, but I'm also adding a `.x` argument and an informative warning message for users about deprecating `...`. This is probably the safest route, but it feels like it will slow me down from doing what I actually want to do with freqtables. Also, not being able to use the `.x` argument by position feels wrong.
- `freq_table_v3()`: In this version, I'm hard deprecating `...`. I'm just replacing it with the `.x` argument and an informative warning message for users about deprecating `...`. Of course, there are issues with this approach breaking code.
- `freq_table_v4()`: In this version, I'm also hard deprecating `...`. This is the most extreme version and what I was last working on. It begins from the new `freq_tbl()` function and builds on from there. Not only might this fix the `group_by()` issue, but we might also address "Group and subgroup make more sense than row and col" (#9), "Update @return in freq_table" (#14), "Create the freq_tbl function" (#39), and "Add ability to make n-way tables" (#22). And also modularize the code a little more, which is something I've been wanting to do for a while. Of course, there are lots of issues with the approach breaking code.

## Task list
- [ ] If a data frame is grouped with `group_by()`, then `freq_table()` will calculate the number of times each value of this variable is observed separately for each value of the grouping variable(s).
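As a rough illustration of the `freq_table_v2()` soft-deprecation idea described above (the function body and warning wording here are hypothetical, not the actual branch code):

```r
# Hypothetical sketch: keep `...` working, add `.x`, warn when `...` is used
freq_table_v2 <- function(.data, .x = NULL, ...) {
  dots <- rlang::enquos(...)
  if (length(dots) > 0) {
    # Soft deprecation: `...` still works, but users are nudged toward `.x`
    warning(
      "Passing variables to freq_table() via `...` is deprecated. ",
      "Please use the `.x` argument instead.",
      call. = FALSE
    )
  }
  # ... hand off to the existing freq_table() machinery here ...
}
```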