
[RFC] Causes vs. Associates_with #33

Open · emjun opened this issue Dec 7, 2021 · 2 comments
Labels: rfc (request for comments)

emjun (Owner) commented Dec 7, 2021

Tisane currently provides two types of conceptual relationships: causes and associates_with. This doc covers when and how to use these verbs.

If a user provides associates_with, we walk them through possible association patterns to identify the underlying causal relationships. In other words, an associates_with statement signals a need for disambiguation before it can be compiled into a series of causes statements.
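
For concreteness, here is a minimal sketch of the two verbs as they appear in a Tisane program, based on my reading of the examples in the Tisane README; the unit and variable names are made up:

    import tisane as ts

    # Hypothetical study: does motivation affect pounds lost?
    adult = ts.Unit("member", cardinality=386)
    motivation = adult.numeric("motivation")
    pounds_lost = adult.numeric("pounds_lost")

    # Assert a directed conceptual relationship...
    motivation.causes(pounds_lost)
    # ...or, when unwilling to commit to a direction, an undirected one:
    # motivation.associates_with(pounds_lost)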

To do this well, we need to resolve two competing interests: causal accuracy and usability. Prioritizing causal accuracy, the system should help an analyst distinguish and choose among an exhaustive list of possible causal situations. However, this may be unusable in practice: differentiating among numerous possible causal situations may be unrealistic for analysts unfamiliar with causality. Still, these concerns do not seem insurmountable.

With an infinite number of hidden variables, there are an infinite number of possible causal relationships. We could restrict the number of hidden variables an analyst considers, trading causal accuracy for usability. If we had a justifiable cap on hidden variables, this approach might be worthwhile.

Another perspective: If the goal is to translate each associates_with into a set of causes, why provide associates_with at all?

The primary reason I wanted to provide both is the following:

  • Analysts are sometimes unsure about the causal edges in their conceptual models. This uncertainty can stem from their own lack of knowledge, or the relationships may be hypothesized rather than known, and the analysts want to see whether the data support them.
  • A domain may lack definitive evidence about some causal edges and paths (which may involve multiple variables).

In all these cases, it seems important to acknowledge what is known, what is hypothesized or the focus of inquiry, and what is asserted for the scope of the analysis (accurate documentation, transparency).

In the current version of Tisane, analysts can use causes to express any relationships they know or are probing. If analysts do not want to assert any causal relationships due to a perceived lack of evidence in their field, they should use associates_with. Whenever possible, analysts should use causes instead of associates_with.

Tisane's model inference process makes arguably less useful covariate selection recommendations based on associates_with relationships. Tisane looks for variables that have associates_with relationships with both one of the IVs and the DV. It suggests these variables as covariates with caution, including a warning in the Tisane GUI and a tooltip explaining that associates_with edges may hide additional causal confounders that are not specified or detectable with the current specification.
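
A rough sketch of that lookup (illustrative only, not Tisane's actual implementation), assuming each associates_with edge is stored as an unordered pair:

    def association_covariate_candidates(assoc_pairs, ivs, dv):
        """Suggest variables that associate with both some IV and the DV.

        assoc_pairs: set of frozensets, one per associates_with edge.
        """
        def neighbors(v):
            return {w for pair in assoc_pairs if v in pair for w in pair} - {v}

        dv_neighbors = neighbors(dv)
        candidates = {z for iv in ivs for z in neighbors(iv) & dv_neighbors}
        return candidates - set(ivs) - {dv}

    pairs = {frozenset(p) for p in [("stress", "sleep"), ("stress", "grade"), ("sleep", "grade")]}
    print(association_covariate_candidates(pairs, ivs=["sleep"], dv="grade"))  # {'stress'}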

For causes relationships, Tisane uses the disjunctive cause criterion, developed for settings where researchers may be uncertain about their causal models, to recommend possible confounders as covariates.
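
For reference, the disjunctive cause criterion (VanderWeele and Shpitser) adjusts for every pre-exposure covariate that causes the exposure, the outcome, or both. A minimal sketch over analyst-asserted causes edges, ignoring the pre-exposure/temporal requirement for brevity (variable names are illustrative):

    def disjunctive_cause_criterion(cause_edges, exposure, outcome):
        """cause_edges: set of (cause, effect) tuples from causes() statements."""
        candidates = {c for (c, e) in cause_edges if e in (exposure, outcome)}
        return candidates - {exposure, outcome}

    edges = {("motivation", "hours"), ("motivation", "grade"), ("hours", "grade")}
    print(disjunctive_cause_criterion(edges, exposure="hours", outcome="grade"))  # {'motivation'}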

We assume that the set of IVs an end-user provides in their query are the ones they are most interested in and want to treat as exposures.

What happens if the initial choice of variables could lead to confusion when interpreting results?
We currently treat each IV as a separate exposure and combine all confounders into one model. In some cases, this may cause interpretation confusion. For example, if the model includes two variables on the same causal path, one of them may appear to have no effect on the outcome even when it does (due to d-separation). We currently expect analysts to be aware of this and to interpret their results in light of their variable selection choices; see the simulation sketch below. In their input queries, analysts should include only the variables they care about most.

Moving forward

I would like to see the following (working list, no priority given yet):

  • Tisane: Separate out use cases and provide language constructs for each: lack of knowledge vs. hypothesized causal edges vs. lack of definitive evidence in the domain.
    • Language design: Remove associates_with; require only causes.
    • Provide a "gallery" or "library" of canonical graph shapes/statements that analysts could adapt.
    • Allow for inclusion of hidden variables?
    • Generate multiple linear models to validate the input DAG/mechanisms.
    • Enforce variable selection that guarantees accurate inference, not just DAG/mechanism validation.
  • A question I keep coming back to: How usable is causal modeling for non-experts, and how can we make it more usable for them?
    • Study what makes stating causes relationships difficult for researchers and how to constructively support their skepticism rather than letting them avoid formalizing their knowledge.

Implementation changes:

  • [BIG] Thomas R. had some hesitation about the theoretical soundness of the disjunctive cause criterion. He did not expand much, but I hope to meet with him early in the winter quarter to discuss.
  • I could re-implement Tisane in R so that it uses dagitty under the hood. I would have to see whether dagitty can also be used under the hood from Python.
  • I've never created a widget/plug-in in R, but I can look into how to do that.
  • Both the R and Python versions could use a code-only interface, without having to rely on the GUI.

Follow-up work/Paper ideas:

  • Eval and improve conceptual modeling language
  • Eval Tisane vs. R
audreyseo added the rfc (request for comments) label on Dec 8, 2021

audreyseo (Collaborator) commented:

Semantics of the causes and associates relationships & language design

Based on the user studies, I think it makes sense to clarify what causes stands for by providing several different flavors, such as may_cause, i_hypothesize_this_causes, and definitely_causes. (The names could use some work, but I am going for clarity here, not good naming.) My question is: how would this change the implementation? Is there any way to distinguish between these flavors in statistics?

I can see us applying some basic heuristics. For example, if we have a.may_cause(b) and a is our DV, then we certainly don't want to suggest b as an IV, or at least we should provide a very strong warning against it (i.e., make users jump through hoops: if they check it off in the GUI, they have to click through a "Terms and Conditions"-type dialog and "agree" multiple times). (If I'm remembering my crash course on causal modeling correctly, we were grappling with this issue on the night of the submission or at least a few days before; it was all a blur.) This may keep users who are wary of stating a.causes(b) from instead stating a.associates_with(b) and having Tisane suggest b as an IV.
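
A sketch of that guardrail; vet_iv_candidate is a hypothetical helper, not part of Tisane:

    import warnings

    def vet_iv_candidate(candidate, dv, may_cause_edges):
        """Flag IV candidates that the DV itself may cause (reverse causation).

        may_cause_edges: set of (cause, effect) pairs declared via may_cause.
        """
        if (dv, candidate) in may_cause_edges:
            warnings.warn(
                f"{dv}.may_cause({candidate}): using {candidate!r} as an IV risks "
                "reverse causation; require explicit confirmation in the GUI.",
                UserWarning,
            )
            return False
        return True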

The i_hypothesize_this_causes form seems like it would require no implementation changes; instead, it's more of a note for the researcher that "this is something I am hypothesizing about."

The definitely_causes form would be for things like "drunk driving kills people" or "smoking causes cancer," relationships that everyone can agree upon.

Hidden Variables Confusion

I'm not entirely sure how someone would specify a hidden variable. Would the user hypothesize "there may be n hidden variables out there somewhere"? And what would we do with this information?

On Causal Modeling

I, for one, don't feel I have a good enough grasp on what causal modeling is to make many statements about what would be the right thing to do. My brief impression is that it seems theoretically nice but in practice is kind of impractical?

(I have to wonder if there is a gender difference in how people would use the causes relationship. Perhaps womxn are less likely to want to assert that there is a causal relationship, whereas men are more gung-ho about asserting causal relationships.) (In a similar vein, women tend to use a lot of modifiers in their language and downplay what they're actually doing. It's an interesting linguistics question, and also a PL/usability question, whether renaming causes into several aliases that all basically do the same thing, but with modifiers qualifying how strong a statement they make, would make womxn more likely to actually use causes.)

Implementation Changes

I think these all sound reasonable. I think it would also make sense to provide some feedback to the user if they stipulate an associates_with relationship. In Jupyter notebooks (you know how much of a fan I am of Jupyter notebooks, after all, lol), some pandas functions emit warnings that are displayed differently from other output, and Jupyter notebooks seem to provide all sorts of hooks for this kind of thing, which may be worth looking into. In a REPL setting, we could also print a warning to stderr.
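
For example, Python's warnings module is one hook that both Jupyter (which styles warnings distinctly) and a plain REPL (where they go to stderr) already respect; AssociationWarning is a made-up class name here:

    import warnings

    class AssociationWarning(UserWarning):
        """Emitted when a suggested covariate rests on an associates_with edge."""

    warnings.warn(
        "covariate 'stress' was suggested from associates_with edges only; "
        "unspecified confounders may remain. Prefer causes() where defensible.",
        AssociationWarning,
        stacklevel=2,
    )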

shreyashnigam (Collaborator) commented:

Regarding Semantics

In my opinion, causal definitions should be reserved for universal truths (like drunk driving causes accidents). For all other possible causal relationships, we should "nudge" the user toward something less powerful. This could be done through Audrey's idea of providing varying degrees of confidence for defining causal relationships. On the implementation front, we could multiply edge strengths by (or raise them to the power of) some constant to make certain relationships more or less powerful, though I am not sure how we would determine such a constant.
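
One literal reading of the multiply/exponentiate idea, entirely speculative, with placeholder constants:

    # Map each flavor of "causes" to a confidence weight in (0, 1].
    CONFIDENCE = {
        "definitely_causes": 1.0,
        "may_cause": 0.5,
        "i_hypothesize_this_causes": 0.25,
    }

    def edge_strength(flavor, base_strength=1.0, exponent=1.0):
        # Scale (or exponentiate) a base strength by the flavor's confidence.
        return (base_strength * CONFIDENCE[flavor]) ** exponent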

Regarding Hidden Variables

With an infinite number of hidden variables, there are an infinite number of possible causal relationships

Could Tisane provide a list of viable hidden variables? For example, consider two variables X and Y for which the user has defined a causal relationship. Tisane could list all other variables that have a relationship with X and Y, presenting them as possible options for the hidden variable.

Potential Problem: Asking users to pick a hidden variable might force them to make more assumptions or define more relationships than they're comfortable with.
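
A sketch of that candidate listing over whatever edges the user has declared (illustrative only; it treats causal and associative edges alike):

    def hidden_variable_candidates(edges, x, y):
        """edges: set of (a, b) pairs from any declared relationship."""
        def related(v):
            return {a for (a, b) in edges if b == v} | {b for (a, b) in edges if a == v}
        return (related(x) & related(y)) - {x, y}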

Regarding the Working List

I think everything on the list is a great idea. In addition, do we want to explore having different workflows in Tisane? We could provide users with different options and paths depending on their use case. This would let us add features that benefit one type of user without worrying about the impact on another type of user.

On Next Steps

I could re-implement Tisane in R so that it uses dagitty under the hood

Python alternative to dagitty: https://github.com/pgmpy/pgmpy
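
A quick sketch of the kind of DAG query pgmpy could cover in place of dagitty (API names as I understand them from pgmpy's docs; worth double-checking):

    from pgmpy.base import DAG

    g = DAG([("motivation", "hours"), ("hours", "grade"), ("motivation", "grade")])
    # Which nodes remain d-connected to 'motivation' once 'hours' is observed?
    print(g.active_trail_nodes("motivation", observed="hours"))
    # e.g. {'motivation': {'motivation', 'grade'}}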
