add tracing to evaluate #11911

Open · wants to merge 9 commits into base: master

Conversation

@jessechancy (Collaborator) commented May 6, 2024

Related Issues/PRs

#xxx

What changes are proposed in this pull request?

Add tracing to mlflow.evaluate() with support for regular Python functions, PyFunc models, and LangChain.

When a user passes a chain or function to mlflow.evaluate() that has been instrumented with MLflow tracing, MLflow should log traces during evaluation and link them to the evaluation run using the same tagging approach as https://databricks.atlassian.net/browse/ML-40783.

We should make sure to support LangChain: even if the user hasn't enabled LangChain autologging, mlflow.evaluate() should enable autologging for the duration of the evaluation so that traces are captured.

We should also make sure to enable only the tracing part of LangChain autologging, not the part that logs models.
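
To make the intended behavior concrete, here is a minimal sketch of the user-facing flow (the callable, dataset, and model_type below are illustrative assumptions, not code from this PR):

```python
import mlflow
import pandas as pd


# Hypothetical user code: a helper instrumented with MLflow tracing.
@mlflow.trace
def answer(question: str) -> str:
    return f"Echo: {question}"


def predict(df: pd.DataFrame) -> pd.Series:
    # mlflow.evaluate() calls the function with a pandas DataFrame of inputs.
    return df["question"].apply(answer)


eval_data = pd.DataFrame(
    {"question": ["What is MLflow?"], "ground_truth": ["An open source ML platform"]}
)

with mlflow.start_run() as run:
    # With this change, traces emitted while evaluate() exercises the model
    # should be logged and tagged with run.info.run_id so they are linked
    # to the evaluation run.
    mlflow.evaluate(
        model=predict,
        data=eval_data,
        targets="ground_truth",
        model_type="question-answering",
    )
```

The same flow is intended to work for a logged PyFunc model URI or a LangChain chain; for LangChain, tracing-only autologging is enabled for the duration of the call.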

How is this PR tested?

  • Existing unit/integration tests
  • New unit/integration tests
  • Manual tests

Does this PR require documentation update?

  • No. You can skip the rest of this section.
  • Yes. I've updated:
    • Examples
    • API references
    • Instructions

Release Notes

Is this a user-facing change?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release notes for MLflow users.

What component(s), interfaces, languages, and integrations does this PR affect?

Components

  • area/artifacts: Artifact stores and artifact logging
  • area/build: Build and test infrastructure for MLflow
  • area/deployments: MLflow Deployments client APIs, server, and third-party Deployments integrations
  • area/docs: MLflow documentation pages
  • area/examples: Example code
  • area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
  • area/models: MLmodel format, model serialization/deserialization, flavors
  • area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
  • area/projects: MLproject format, project running backends
  • area/scoring: MLflow Model server, model deployment tools, Spark UDFs
  • area/server-infra: MLflow Tracking server backend
  • area/tracking: Tracking Service, tracking client APIs, autologging

Interface

  • area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
  • area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
  • area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
  • area/windows: Windows support

Language

  • language/r: R APIs and clients
  • language/java: Java APIs and clients
  • language/new: Proposals for new client languages

Integrations

  • integrations/azure: Azure and Azure ML integrations
  • integrations/sagemaker: SageMaker integrations
  • integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:

  • rn/none - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
  • rn/breaking-change - The PR will be mentioned in the "Breaking Changes" section
  • rn/feature - A new user-facing feature worth mentioning in the release notes
  • rn/bug-fix - A user-facing bug fix worth mentioning in the release notes
  • rn/documentation - A user-facing documentation change worth mentioning in the release notes

Should this PR be included in the next patch release?

Yes should be selected for bug fixes, documentation updates, and other small changes. No should be selected for new features and larger changes. If you're unsure about the release classification of this PR, leave this unchecked to let the maintainers decide.

What is a minor/patch release?
  • Minor release: a release that increments the second part of the version number (e.g., 1.2.0 -> 1.3.0).
    Bug fixes, doc updates and new features usually go into minor releases.
  • Patch release: a release that increments the third part of the version number (e.g., 1.2.0 -> 1.2.1).
    Bug fixes and doc updates usually go into patch releases.
  • Yes (this PR will be cherry-picked and included in the next patch release)
  • No (this PR will be included in the next minor release)

@github-actions github-actions bot added the rn/none List under Small Changes in Changelogs. label May 6, 2024

github-actions bot commented May 6, 2024

Documentation preview for df3c6ef will be available when this CircleCI job completes successfully.

@jessechancy jessechancy requested review from BenWilson2, B-Step62, harupy, liangz1 and prithvikannan and removed request for harupy and BenWilson2 May 7, 2024 00:22
@liangz1 (Collaborator) left a comment:

Looks good so far! Shall we add a test?

mlflow/models/evaluation/base.py (outdated review thread, resolved)
@BenWilson2 (Member):

The tracing branch is now closed. Can you rebase with master as the merge target?

@jessechancy jessechancy changed the base branch from tracing to master May 7, 2024 22:54
@jessechancy jessechancy requested a review from liangz1 May 8, 2024 20:10
@liangz1 (Collaborator) left a comment:

Left some questions about clarifying the supported functionality. Thanks!

@@ -361,6 +363,73 @@ def baseline_model_uri(request):
return None


def test_mlflow_evaluate_logs_traces():
Collaborator:

Could you update the docs for mlflow.evaluate() to describe this new functionality? I'm curious to see:

  1. When would traces be generated by an evaluate() call? Are all model types supported?
  2. How can a user find the traces produced by a particular evaluate run?
  3. Is model = function supported? If so, what does it look like? Is there an example for this case?

Collaborator (Author):

  1. All model types should be supported. Anything other than LangChain would just trace the input/output. With LangChain, a span is created for each individual step in the chain.
  2. The traces appear in the Traces section, just as if you had logged a regular trace.
  3. I don't think there is a difference between a PyFunc model and a function here, because mlflow.evaluate does the necessary steps to turn everything into a model with a predict method. We track the input/output of that predict method.

@dbczumar (Collaborator) commented May 25, 2024:

> Anything other than LangChain would just trace the input/output

It's up to the user to decide what level of granularity to use when tracing; if the user instruments each subroutine in their model with @mlflow.trace, then they won't just get the overall input/output. We're not limiting/restricting this granularity in this PR, right @jessechancy?

> The traces appear in the Traces section, just as if you had logged a regular trace

The key point is that we set the run ID as a tag on the trace, which appears in the UI.

> I don't think there is a difference between a PyFunc model and a function here, because mlflow.evaluate does the necessary steps to turn everything into a model with a predict method. We track the input/output of that predict method.

Same as above - the user might instrument at a level that's more granular than just input/output. Can we confirm this works? It looks like your tests already cover that.
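
To illustrate the granularity point (hypothetical helpers, not code from this PR): any subroutine the user decorates with @mlflow.trace becomes its own nested span under the prediction, so the trace can be as fine-grained as the user's instrumentation.

```python
import mlflow


@mlflow.trace
def retrieve_context(question: str) -> str:
    # Child span: shows up nested under predict_one's span.
    return "relevant context"


@mlflow.trace
def generate_answer(question: str, context: str) -> str:
    # Another child span.
    return f"{question} ({context})"


@mlflow.trace
def predict_one(question: str) -> str:
    # Top-level span; a trace captured during evaluate() then contains more
    # than a single input/output pair because of the nested spans above.
    return generate_answer(question, retrieve_context(question))
```

The run linkage mentioned here is what the tests later assert via get_traces()[0].info.request_metadata["mlflow.sourceRun"] matching the evaluation run ID.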

        targets=iris_dataset._constructor_args["targets"],
        evaluators="dummy_evaluator",
    )
    assert len(get_traces()) == 1
Collaborator:

What does the trace look like with a regressor model? In which scenario would the user want to see a trace for a regressor evaluation?

Collaborator:

Hi @jessechancy, is this addressed somewhere? Could you share a pointer if possible?

Besides, I'm wondering if we should provide a config to turn off tracing for evaluate() runs, in case non-GenAI users get confused by the auto-logged traces. cc @BenWilson2

tests/evaluate/test_evaluation.py (review thread resolved)
        k: {**v, "disable": True} for k, v in AUTOLOGGING_INTEGRATIONS.items()
    }
    try:
        yield None
Member:

Suggested change:
-        yield None
+        yield

Collaborator:

@jessechancy can we address this nit?

@BenWilson2 BenWilson2 added the enable-dev-tests Enables cross-version tests for dev versions label May 10, 2024
@jessechancy jessechancy removed the enable-dev-tests Enables cross-version tests for dev versions label May 10, 2024

TRACE_BUFFER.clear()


Member:

We may want to add one final test to this suite:

  1. Call evaluate on a model
  2. Verify traces are recorded for a simple LangChain chain
  3. Call mlflow autolog (specifying no overrides to the defaults)
  4. Invoke the model
  5. Ensure that the default autologging settings apply and that the expected tracked run and artifacts/metrics/params are logged to the run

This is mostly to ensure that modifications to autologging behavior within a single session stay locally scoped to the configuration overrides used in the context handler. It works as expected now, but if this logic needs to change in the future, having the test will guard against a regression in the behavior introduced in this PR.

Collaborator:

Great idea. Can we also test:

  1. Call autolog with some specific settings
  2. Call evaluate on a model
  3. Verify traces are recorded for a simple langchain chain
  4. Invoke the model
  5. Ensure that the settings for autologging in (1) apply and that the expected tracked run and artifacts / metrics / params are logged to the run.

Can we also test these cases when evaluate fails with an exception?
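
A rough pytest sketch of the scoping checks being discussed (simple_chain_model_uri, eval_data, and get_traces are assumed fixtures/helpers from the existing test module; the assertions are indicative only, not the final test):

```python
import mlflow
from mlflow.utils.autologging_utils import get_autologging_config


def test_evaluate_does_not_clobber_user_autolog_config(simple_chain_model_uri, eval_data):
    # (1) The user enables LangChain autologging with explicit settings up front.
    mlflow.langchain.autolog(log_models=True, log_inputs_outputs=True)

    # (2)-(3) evaluate() should still record traces for the chain.
    with mlflow.start_run():
        mlflow.evaluate(
            model=simple_chain_model_uri, data=eval_data, model_type="question-answering"
        )
    assert len(get_traces()) >= 1

    # (4)-(5) The temporary traces-only overrides applied inside evaluate() must
    # not leak: the settings from (1) should still be in effect afterwards, so
    # invoking the model again autologs per the user's configuration.
    assert get_autologging_config("langchain", "log_models", None) is True
    assert get_autologging_config("langchain", "log_inputs_outputs", None) is True
```

An exception-path variant could wrap the evaluate() call in pytest.raises and then repeat the same post-conditions.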

@BenWilson2 (Member) left a comment:

LGTM once the safeguard test is added

@liangz1 (Collaborator) left a comment:

LGTM overall, just a question about the non-GenAI eval run case.
@jessechancy Do we have another JIRA ticket or PR for updating the documentation? If so, could you link it? Thanks!


@liangz1 (Collaborator) left a comment:

I noticed some LangChain dependency changes that might disrupt non-LangChain users. Can you help make sure the experience is still smooth in that case? Thanks!


def monkey_patch_predict(x):
    # Disable all autologging except for traces
    mlflow.langchain.autolog(log_inputs_outputs=False, disable=False)
Collaborator:

Shall we check whether langchain is imported? I see this code block is not langchain-specific, so if the customer only uses non-langchain packages, shall we skip this?

Member:

+1, we might want to apply this enablement only if the langchain core module is available, validated via importlib.

@@ -240,7 +240,7 @@ jobs:
       - name: Install dependencies
         run: |
           source ./dev/install-common-deps.sh
-          pip install pyspark torch transformers
+          pip install pyspark torch transformers langchain
Collaborator:

Do we want to make langchain a common dependency? It seems to me that langchain should not be required for non-GenAI users (related to my comment in mlflow/models/evaluation/base.py).

Can we test that a user can still call mlflow.evaluate() when langchain is not installed?

Member:

It should not be a common dependency - this line change is only for test infra.
That said, +1 to a validation check that the patched evaluate function works if langchain is not installed. Temporarily removing the install from the REPL context would be a good way to validate that.

@BenWilson2 (Member):

Regarding #11911 (comment): this is valid. I feel the non-GenAI use cases (classification, regression) should be opt-in for this behavior, since it doesn't make a great deal of sense there and could potentially generate a very large dataset that doesn't provide any value, even for debugging.

@jessechancy jessechancy requested review from dbczumar and removed request for B-Step62 and prithvikannan May 15, 2024 18:16
@@ -50,6 +51,8 @@

# Flag indicating whether autologging is globally disabled for all integrations.
_AUTOLOGGING_GLOBALLY_DISABLED = False
# Autologging flavors exempted from the flag above
_AUTOLOGGING_GLOBALLY_DISABLED_EXCEPTIONS = []
Collaborator:

Suggested change:
-_AUTOLOGGING_GLOBALLY_DISABLED_EXCEPTIONS = []
+_AUTOLOGGING_GLOBALLY_DISABLED_EXEMPTIONS = []

Comment on lines 53 to 65:
MLFLOW_EVALUATE_LANGCHAIN_AUTOLOG_CONFIG = {
    "log_input_examples": False,
    "log_model_signatures": False,
    "log_models": False,
    "log_datasets": False,
    "log_inputs_outputs": False,
    "disable": False,
    "exclusive": False,
    "disable_for_unsupported_versions": True,
    "silent": False,
    "registered_model_name": None,
    "extra_tags": None,
}
Collaborator:

This seems to be unused. Can we move it to https://github.com/mlflow/mlflow/pull/11911/files#r1602160113?

Comment on lines 519 to 540:
    mlflow.langchain.autolog(log_inputs_outputs=False)
    try:
        yield None
    finally:
        mlflow.langchain.autolog(**prev_langchain_params)
Collaborator:

Suggested change:
-    mlflow.langchain.autolog(log_inputs_outputs=False)
-    try:
-        yield None
-    finally:
-        mlflow.langchain.autolog(**prev_langchain_params)
+    try:
+        mlflow.langchain.autolog(log_inputs_outputs=False)
+        yield
+    finally:
+        mlflow.langchain.autolog(**prev_langchain_params)

@liangz1 (Collaborator) left a comment:

LGTM with the non-LangChain case handled. Thanks @jessechancy!

assert run.info.run_id == get_traces()[0].info.request_metadata["mlflow.sourceRun"]


def test_evaluate_works_with_no_langchain_installed():
Collaborator:

Looks good!

@liangz1 liangz1 requested review from liangz1 and removed request for liangz1 May 15, 2024 20:53
@@ -1265,6 +1265,7 @@ def _validate_dataset_type_supports_predictions(data, supported_predictions_data
def _evaluate(
    *,
    model,
    model_predict_func,
Collaborator:

@jessechancy This will break all existing third-party evaluator plugins, including ones developed by Databricks. Can we construct model_predict_func inside the default evaluator instead?

Collaborator:

+1, it seems naturally scoped to _extract_predict_fn, except in the case where a user passes this explicitly into DefaultEvaluator.evaluate. Is there a clear use case that we are trying to address?

Collaborator (Author):

I think _extract_predict_fn doesn't cover all cases here, since I see direct usage of model.predict.

@dbczumar (Collaborator) left a comment.

@dbczumar dbczumar requested a review from ian-ack-db May 22, 2024 08:57

@contextlib.contextmanager
def restrict_langchain_autologging_to_traces_only():
    if importlib.util.find_spec("langchain") is None:
Collaborator:

Is it sufficient to check sys.modules instead? Are there examples that recommend using find_spec() instead?

Collaborator (Author):

There are a few examples throughout the codebase that use importlib.util.find_spec, but I can change it to sys.modules.
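
For reference, the practical difference (standard library only, nothing here is PR code): importlib.util.find_spec() detects a package that is installed but not yet imported, while sys.modules only sees packages that some code in the process has already imported.

```python
import importlib.util
import sys


def langchain_is_installed() -> bool:
    # True if the package can be imported, even if nothing has imported it yet.
    return importlib.util.find_spec("langchain") is not None


def langchain_is_already_imported() -> bool:
    # True only if langchain has already been imported in this process.
    return "langchain" in sys.modules
```

Checking sys.modules is cheaper, but it would skip enabling trace autologging for a user whose model imports langchain lazily inside predict(); find_spec() covers that case at the cost of a filesystem lookup.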

mlflow/models/evaluation/base.py (outdated review thread, resolved)
mlflow/utils/autologging_utils/__init__.py (outdated review thread, resolved)
Squashed commit messages (all commits signed off by Jesse Chan <jesse.chan@databricks.com>):

  • add tracing to evaluate
  • monkeypatch with langchain
  • recursion fix
  • remove print
  • Autologging langchain config and cleanup
  • comment fixes
  • wip
  • wip
  • fixes
  • test import global
  • test fixes
  • fixes
  • fixes
  • fixes
  • fixes
  • retrigger tests
  • remove copy to prevent retriggers
  • fixes + opt in langchain
  • fixed tests
  • added test for langchain not installed
  • add langchain-experimental package
  • Create copy of model.predict for tracing
  • fixed tests
Comment on lines 569 to 575:
def test_langchain_autolog_parameters_matches_default_parameters():
    # get parameters from mlflow.langchain.autolog
    params = inspect.signature(mlflow.langchain.autolog).parameters
    for name in params:
        assert name in MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG
    for name in MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG:
        assert name in params
Collaborator:

Thanks for adding this test. Can you give guidance to future engineers who will see this test fail without much context? How should they determine the value of any new parameter in MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG so that it does not violate the traces-only requirement?
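
One way to bake that guidance into the test itself would be failure messages on the assertions (a sketch only; the message wording is an assumption, and the imports/constants are the ones already used by the test shown above):

```python
def test_langchain_autolog_parameters_matches_default_parameters():
    params = inspect.signature(mlflow.langchain.autolog).parameters
    for name in params:
        assert name in MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG, (
            f"mlflow.langchain.autolog() gained a new parameter '{name}'. Add it to "
            "MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG with a value "
            "that disables everything except trace collection during mlflow.evaluate()."
        )
    for name in MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG:
        assert name in params, (
            f"'{name}' is no longer a parameter of mlflow.langchain.autolog(); "
            "remove it from MLFLOW_EVALUATE_RESTRICT_LANGCHAIN_AUTOLOG_TO_TRACES_CONFIG."
        )
```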

@ian-ack-db (Collaborator) left a comment:

LGTM, though I'm not familiar enough with the implications to give an official approval.

Comment on lines +142 to +145:
def _extract_predict_fn(model, raw_model, model_predict_fn=None):
    predict_fn = model.predict if model is not None else None
    if model_predict_fn is not None:
        predict_fn = model_predict_fn
Collaborator:

Can we add a docstring that indicates the precedence order and restructure this method so that the logic is if / elif / else? It's currently difficult to reason about what happens when model_predict_fn is specified and raw_model is also specified, etc.

Can we also add type hints to the function to indicate which parameters are optional?
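
A possible shape for the requested restructuring (a sketch only; the docstring and precedence are inferred from the lines shown above, and the raw_model handling from the rest of the function is deliberately elided because it is not visible in this hunk):

```python
from typing import Any, Callable, Optional


def _extract_predict_fn(
    model: Optional[Any],
    raw_model: Optional[Any],  # handling not shown in this hunk; elided here
    model_predict_fn: Optional[Callable[..., Any]] = None,
) -> Optional[Callable[..., Any]]:
    """Resolve the prediction function used by the evaluator.

    Precedence (highest first):
      1. model_predict_fn, when explicitly provided (e.g. the traced wrapper).
      2. model.predict, when a PyFunc model is available.
      3. None, when neither is available.
    """
    if model_predict_fn is not None:
        return model_predict_fn
    elif model is not None:
        return model.predict
    else:
        return None
```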

@dbczumar (Collaborator) left a comment:

LGTM once small comments are addressed. Thanks @jessechancy !

Labels: rn/none (List under Small Changes in Changelogs)
Projects: None yet
Development: Successfully merging this pull request may close these issues: None yet
5 participants