Releases: mlflow/mlflow

v2.13.0

20 May 07:17
1b604e4

MLflow 2.13.0 includes several major features and improvements

With this release, we're happy to introduce several features that enhance the usability of MLflow broadly across a range of use cases.

Major Features and Improvements:

  • Streamable Python Models: The newly introduced predict_stream API for Python Models allows custom model implementations to return a generator object, permitting full customization for GenAI applications (see the sketch following this list).

  • Enhanced Code Dependency Inference: A new feature for automatically inferring code dependencies based on dependencies detected within a model's implementation. As a supplement to the code_paths parameter, the introduced infer_model_code_paths option when logging a model will determine which additional code modules are needed to ensure that your models can be loaded in isolation, deployed, and reliably stored.

  • Standardization of MLflow Deployment Server: Outputs from the Deployment Server's endpoints now conform to OpenAI's interfaces to provide a simpler integration with commonly used services.
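
A minimal sketch of the new streamable Python Model interface (the StreamingEchoModel class, its word-splitting logic, and the raw-string input are purely illustrative):

    import mlflow
    import mlflow.pyfunc


    class StreamingEchoModel(mlflow.pyfunc.PythonModel):
        def predict(self, context, model_input, params=None):
            # Non-streaming path: return the full response at once.
            return " ".join(str(model_input).split())

        def predict_stream(self, context, model_input, params=None):
            # Streaming path: returning a generator lets callers consume chunks as they are produced.
            for word in str(model_input).split():
                yield word


    with mlflow.start_run():
        info = mlflow.pyfunc.log_model(python_model=StreamingEchoModel(), artifact_path="streamer")

    loaded = mlflow.pyfunc.load_model(info.model_uri)
    for chunk in loaded.predict_stream("hello streaming world"):
        print(chunk)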

Features:

  • [Deployments] Update the MLflow Deployment Server interfaces to be OpenAI compatible; see the example following this list (#12003, @harupy)
  • [Deployments] Add Togetherai as a supported provider for the MLflow Deployments Server (#11557, @FotiosBistas)
  • [Models] Add predict_stream API support for Python Models (#11791, @WeichenXu123)
  • [Models] Enhance the capabilities of logging code dependencies for MLflow models (#11806, @WeichenXu123)
  • [Models] Add support for RunnableBinding models in LangChain (#11980, @serena-ruan)
  • [Model Registry / Databricks] Add support for renaming models registered to Unity Catalog (#11988, @artjen)
  • [Model Registry / Databricks] Improve the handling of searching for invalid components from Unity Catalog registered models (#11961, @artjen)
  • [Model Registry] Enhance retry logic and credential refresh to mitigate cloud provider token expiration failures when uploading or downloading artifacts (#11614, @artjen)
  • [Artifacts / Databricks] Add enhanced lineage tracking for models loaded from Unity Catalog (#11305, @shichengzhou-db)
  • [Tracking] Add resourcing metadata to Pyfunc models to aid in model serving environment configuration (#11832, @sunishsheth2009)
  • [Tracking] Enhance LangChain signature inference for models as code (#11855, @sunishsheth2009)
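
For the OpenAI-compatible Deployment Server interfaces above, a hedged sketch of querying a running server from Python (the server URL and the "chat" endpoint name are assumptions about your gateway configuration):

    from mlflow.deployments import get_deploy_client

    # Point the client at a running MLflow Deployments Server.
    client = get_deploy_client("http://localhost:5000")

    response = client.predict(
        endpoint="chat",  # an endpoint defined in your gateway configuration
        inputs={"messages": [{"role": "user", "content": "Hello!"}]},
    )
    # Responses now follow OpenAI-style structures (e.g. choices / message fields).
    print(response)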

Bug fixes:

  • [Artifacts] Prohibit invalid configuration options for multi-part upload on AWS (#11975, @ian-ack-db)
  • [Model Registry] Enforce registered model metadata equality (#12013, @artjen)
  • [Models] Correct an issue with hasattr references in AttrDict usages (#11999, @BenWilson2)

Documentation updates:

  • [Docs] Simplify the main documentation landing page (#12017, @BenWilson2)
  • [Docs] Add documentation for the expanded code path inference feature (#11997, @BenWilson2)
  • [Docs] Add documentation guidelines for the predict_stream API (#11976, @BenWilson2)
  • [Docs] Add support for enhanced Documentation with the JFrog MLflow Plugin (#11426, @yonarbel)

Small bug fixes and documentation updates:

#12052, #12053, #12022, #12029, #12024, #11992, #12004, #11958, #11957, #11850, #11938, #11924, #11922, #11920, #11820, #11822, #11798, @serena-ruan; #12054, #12051, #12045, #12043, #11987, #11888, #11876, #11913, #11868, @sunishsheth2009; #12049, #12046, #12037, #11831, @dbczumar; #12047, #12038, #12020, #12021, #11970, #11968, #11967, #11965, #11963, #11941, #11956, #11953, #11934, #11921, #11454, #11836, #11826, #11793, #11790, #11776, #11765, #11763, #11746, #11748, #11740, #11735, @harupy; #12025, #12034, #12027, #11914, #11899, #11866, @BenWilson2; #12026, #11991, #11979, #11964, #11939, #11894, @daniellok-db; #11951, #11974, #11916, @annzhang-db; #12015, #11931, #11627, @jessechancy; #12014, #11917, @prithvikannan; #12012, @AveshCSingh; #12001, @yunpark93; #11984, #11983, #11977, #11977, #11949, @edwardfeng-db; #11973, @bbqiu; #11902, #11835, #11775, @B-Step62; #11845, @lababidi

MLflow 2.12.2

09 May 18:07

MLflow 2.12.2 is a patch release that includes several bug fixes and integration improvements to existing features. The new features introduced in this patch release are intended to provide a foundation for further major features that will arrive in the next two minor releases.

Features:

  • [Models] Add an environment configuration flag to enable raising an exception instead of a warning for failures in model dependency inference (#11903, @BenWilson2)
  • [Models] Add support for the llm/v1/embeddings task in the Transformers flavor to unify the input and output structures for embedding models; see the sketch following this list (#11795, @B-Step62)
  • [Models] Introduce model streaming return via predict_stream() for custom pyfunc models capable of returning a stream response (#11791, #11895, @WeichenXu123)
  • [Evaluate] Add support for overriding the entire model evaluation judgment prompt within mlflow.evaluate for GenAI models (#11912, @apurva-koti)
  • [Tracking] Add support for defining deployment resource metadata to configure deployment resources within pyfunc models (#11832, #11825, #11804, @sunishsheth2009)
  • [Tracking] Add support for logging LangChain and custom pyfunc models as code (#11855, #11842, @sunishsheth2009)
  • [Tracking] Modify MLflow client's behavior to read from a global asynchronous configuration state (#11778, #11780, @chenmoneygithub)
  • [Tracking] Enhance system metrics data collection to include a GPU power consumption metric (#11747, @chenmoneygithub)
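
A hedged sketch of the new llm/v1/embeddings task in the Transformers flavor (the checkpoint is illustrative, and the OpenAI-style "input" payload reflects the unified schema described above):

    import mlflow
    from transformers import pipeline

    extractor = pipeline("feature-extraction", model="sentence-transformers/all-MiniLM-L6-v2")

    with mlflow.start_run():
        info = mlflow.transformers.log_model(
            transformers_model=extractor,
            artifact_path="embedder",
            task="llm/v1/embeddings",  # unify the input/output structures for embedding models
        )

    loaded = mlflow.pyfunc.load_model(info.model_uri)
    print(loaded.predict({"input": ["MLflow unifies the embeddings interface."]}))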

Bug fixes:

  • [Models] Fix an issue with signature validation when params are specified (#11838, @WeichenXu123)
  • [Databricks] Fix an issue where models cannot be loaded in the Databricks serverless runtime (#11758, @WeichenXu123)
  • [Databricks] Fix an issue with the Databricks serverless runtime where scaled workers do not have authorization to read from the driver NFS mount (#11757, @WeichenXu123)
  • [Databricks] Fix an issue in the Databricks serverless runtime where a model loaded via a spark_udf for inference fails due to a configuration issue (#11752, @WeichenXu123)
  • [Server-infra] Upgrade the gunicorn dependency to version 22 to address a third-party security issue (#11742, @maitreyakv)

Documentation updates:

  • [Docs] Add additional guidance on search syntax restrictions for search APIs (#11892, @BenWilson2)
  • [Docs] Fix an issue with the quickstart guide where the Keras example model is defined incorrectly (#11848, @horw)
  • [Docs] Provide fixes and updates to LangChain tutorials and guides (#11802, @BenWilson2)
  • [Docs] Fix the model registry example within the docs for correct type formatting (#11789, @80rian)

Small bug fixes and documentation updates:

#11928, @apurva-koti; #11910, #11915, #11864, #11893, #11875, #11744, @BenWilson2; #11913, #11918, #11869, #11873, #11867, @sunishsheth2009; #11916, #11879, #11877, #11860, #11843, #11844, #11817, #11841, @annzhang-db; #11822, #11861, @serena-ruan; #11890, #11819, #11794, #11774, @B-Step62; #11880, @prithvikannan; #11833, #11818, #11954, @harupy; #11831, @dbczumar; #11812, #11816, #11800, @daniellok-db; #11788, @smurching; #11756, @IgorMilavec; #11627, @jessechancy

MLflow 2.12.1

17 Apr 13:49
328242e

MLflow 2.12.1 includes several major features and improvements

With this release, we're pleased to introduce several major new features that are focused on enhanced GenAI support, Deep Learning workflows involving images, expanded table logging functionality, and general usability enhancements within the UI and external integrations.

Major Features and Improvements:

  • PromptFlow: Introducing the new PromptFlow flavor, designed to enrich the GenAI landscape within MLflow. This feature simplifies the creation and management of dynamic prompts, enhancing user interaction with AI models and streamlining prompt engineering processes. (#11311, #11385 @brynn-code)

  • Enhanced Metadata Sharing for Unity Catalog: MLflow now supports the ability to share metadata (and not model weights) within Databricks Unity Catalog. When logging a model, this functionality enables the automatic duplication of metadata into a dedicated subdirectory, distinct from the model’s actual storage location, allowing for different sharing permissions and access control limits. (#11357, #11720 @WeichenXu123)

  • Code Paths Unification and Standardization: We have unified and standardized the code_paths parameter across all MLflow flavors to ensure a cohesive and streamlined user experience. This change promotes consistency and reduces complexity in the model deployment lifecycle. (#11688, @BenWilson2)

  • ChatOpenAI and AzureChatOpenAI Support: Support for the ChatOpenAI and AzureChatOpenAI interfaces has been integrated into the LangChain flavor, facilitating seamless deployment of conversational AI models. This development opens new doors for building sophisticated and responsive chat applications leveraging cutting-edge language models. (#11644, @B-Step62)

  • Custom Models in Sentence-Transformers: The sentence-transformers flavor now supports custom models, allowing for greater flexibility in deploying tailored NLP solutions. (#11635, @B-Step62)

  • Native MLflow Image support in the log_image API: Added support for optimized image logging, including step-based iterative logging for images generated as part of a training run. This feature makes it easy to track your image generation, classification, segmentation, enhancement, and object detection deep learning models; see the sketch following this list. (#11243, #11404, @jessechancy)

  • Image Support for Log Table: With the addition of image support in log_table, MLflow enhances its capabilities in handling rich media. This functionality allows for direct logging and visualization of images within the platform, improving the interpretability and analysis of visual data. (#11535, @jessechancy)

  • Streaming Support for LangChain: The newly introduced predict_stream API for LangChain models supports streaming outputs, enabling real-time output for chain invocation via pyfunc. This feature is pivotal for applications requiring continuous data processing and instant feedback. (#11490, #11580 @WeichenXu123)
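
A brief sketch combining the step-based log_image API with image columns in log_table (the random array stands in for a real generated image; key and file names are arbitrary):

    import numpy as np
    import mlflow

    image = np.random.randint(0, 255, (64, 64, 3), dtype=np.uint8)  # stand-in for a generated image

    with mlflow.start_run():
        # Step-based image logging for iterative training runs.
        for step in range(3):
            mlflow.log_image(image, key="generated_sample", step=step)

        # Log an image column directly within a table.
        mlflow.log_table(
            data={"prompt": ["a cat on a skateboard"], "sample": [mlflow.Image(image)]},
            artifact_file="generation_results.json",
        )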

Security Fixes:

  • Security Patch: Addressed a critical Local File Read/Path Traversal vulnerability within the Model Registry, ensuring robust protection against unauthorized access and securing user data integrity. (#11376, @WeichenXu123)

Features:

  • [Models] Add the PromptFlow flavor (#11311, #11385 @brynn-code)
  • [Models] Add a new predict_stream API for streamable output for Langchain models and the DatabricksDeploymentClient (#11490, #11580 @WeichenXu123)
  • [Models] Deprecate code_path in pyfunc and add a code_paths alias to standardize with the other flavor implementations (#11688, @BenWilson2)
  • [Models] Add support for custom models within the sentence-transformers flavor (#11635, @B-Step62)
  • [Models] Enable Spark MapType support within model signatures when used with Spark udf inference (#11265, @WeichenXu123)
  • [Models] Add support for metadata-only sharing within Unity Catalog through the use of a subdirectory (#11357, #11720 @WeichenXu123)
  • [Models] Add support for the ChatOpenAI and AzureChatOpenAI LLM interfaces within the LangChain flavor; see the example following this list (#11644, @B-Step62)
  • [Artifacts] Add support for utilizing presigned URLs when uploading and downloading files when using Unity Catalog (#11534, @artjen)
  • [Artifacts] Add a new Image object for handling the logging and optimized compression of images (#11404, @jessechancy)
  • [Artifacts] Add time and step-based metadata to the logging of images (#11243, @jessechancy)
  • [Artifacts] Add the ability to log a dataset to Unity Catalog by means of UCVolumeDatasetSource (#11301, @chenmoneygithub)
  • [Tracking] Remove the restriction that logging a table in Delta format requires running within a Databricks environment (#11521, @chenmoneygithub)
  • [Tracking] Add support for logging mlflow.Image files within tables (#11535, @jessechancy)
  • [Server-infra] Introduce override configurations for controlling how http retries are handled (#11590, @BenWilson2)
  • [Deployments] Implement chat & chat streaming for Anthropic within the MLflow deployments server (#11195, @gabrielfu)
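
A hedged sketch of logging a ChatOpenAI interface with the LangChain flavor (assumes the langchain-openai package is installed and OPENAI_API_KEY is set; the model name is illustrative):

    import mlflow
    from langchain_openai import ChatOpenAI

    llm = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)

    with mlflow.start_run():
        info = mlflow.langchain.log_model(lc_model=llm, artifact_path="chat_model")

    # The logged model can be reloaded and queried through the pyfunc interface.
    loaded = mlflow.pyfunc.load_model(info.model_uri)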

Security fixes:

  • [Model Registry] Fix Local File Read/Path Traversal (LFI) bypass vulnerability (#11376, @WeichenXu123)

Bug fixes:

  • [Model Registry] Fix a registry configuration error that occurs within Databricks serverless clusters (#11719, @WeichenXu123)
  • [Model Registry] Delete registered model permissions when deleting the underlying models (#11601, @B-Step62)
  • [Model Registry] Disallow % in model names to prevent URL mangling within the UI (#11474, @daniellok-db)
  • [Models] Fix an issue where critically important environment configurations were not being captured as langchain dependencies during model logging (#11679, @serena-ruan)
  • [Models] Patch the LangChain loading functions to handle uncorrectable pickle-related exceptions that are thrown when loading a model in certain versions (#11582, @B-Step62)
  • [Models] Fix a regression in the sklearn flavor to reintroduce support for custom prediction methods (#11577, @B-Step62)
  • [Models] Fix an inconsistent and unreliable implementation for batch support within the langchain flavor (#11485, @WeichenXu123)
  • [Models] Fix loading remote-code-dependent transformers models that contain custom code (#11412, @daniellok-db)
  • [Models] Remove the legacy conversion logic within the transformers flavor that generates an inconsistent input example display within the MLflow UI (#11508, @B-Step62)
  • [Models] Fix an issue with Keras autologging iteration input handling (#11394, @WeichenXu123)
  • [Models] Fix an issue with keras autologging training dataset generator (#11383, @WeichenXu123)
  • [Tracking] Fix an issue where a module would be imported multiple times when logging a langchain model (#11553, @sunishsheth2009)
  • [Tracking] Fix the sampling logic within the GetSampledHistoryBulkInterval API to produce more consistent results when displayed within the UI (#11475, @daniellok-db)
  • [Tracking] Fix import issues and properly resolve dependencies of langchain and langchain_community within langchain models when logging (#11450, @sunishsheth2009)
  • [Tracking] Improve the performance of asynchronous logging (#11346, @chenmoneygithub)
  • [Deployments] Add middle-of-name truncation to excessively long deployment names for Sagemaker image deployment (#11523, @BenWilson2)

Documentation updates:

  • [Docs] Add clarity and consistent documentation for code_paths docstrings in API documentation (#11675, @BenWilson2)
  • [Docs] Add documentation guidance for sentence-transformers OpenAI-compatible API interfaces (#11373, @es94129)

Small bug fixes and documentation updates:

#11723, @freemin7; #11722, #11721, #11690, #11717, #11685, #11689, #11607, #11581, #11516, #11511, #11358, @serena-ruan; #11718, #11673, #11676, #11680, #11671, #11662, #11659, #11654, #11633, #11628, #11620, #11610, #11605, #11604, #11600, #11603, #11598, #11572, #11576, #11555, #11563, #11539, #11532, #11528, #11525, #11514, #11513, #11509, #11457, #11501, #11500, #11459, #11446, #11443, #11442, #11433, #11430, #11420, #11419, #11416, #11418, #11417, #11415, #11408, #11325, #11327, #11313, @harupy; #11707, #11527, #11663, #11529, #11517, #11510, #11489, #11455, #11427, #11389, #11378, #11326, @B-Step62; #11715, #11714, #11665, #11626, #11619, #11437, #11429, @BenWilson2; #11699, #11692, @annzhang-db; #11693, #11533, #11396, #11392, #11386, #11380, #11381, #11343, @WeichenXu123; #11696, #11687, #11683, @chilir; #11387, #11625, #11574, #11441, #11432, #11428, #11355, #11354, #11351, #11349, #11339, #11338, #11307, @daniellok-db; #11653, #11369, #11270, @chenmoneygithub; #11666, #11588, @jessechancy; #11661, @jmjeon94; #11640, @tunjan; #11639, @minkj1992; #11589, @tlm365; #11566, #11410, @brynn-code; #11570, @lababidi; #11542, #11375, #11345, @edwardfeng-db; #11463, @taranarmo; #11506, @ernestwong-db; #11502, @fzyzcjy; #11470, @clemenskol; #11452, @jkfran; #11413, @GuyAglionby; #11438, @victorsun123; #11350, @liangz1; #11370, @sunishsheth2009; #11379, #11304, @zhouyou9505; #11321, #11323, #11322, @michael-berk; #11333, @cdancette; #11228, @TomeHirata

MLflow 2.11.3

22 Mar 01:05

MLflow 2.11.3 is a patch release that addresses a security exploit in the open-source MLflow tracking server and includes miscellaneous Databricks integration fixes.

Bug fixes:

  • [Security] Address an LFI exploit related to misuse of url parameters (#11473, @daniellok-db)
  • [Databricks] Fix an issue with Databricks Runtime version acquisition when deploying a model using Databricks Docker Container Services (#11483, @WeichenXu123)
  • [Databricks] Correct an issue with credential management within Databricks Model Serving (#11468, @victorsun123)
  • [Models] Fix an issue with chat request validation for LangChain flavor (#11478, @BenWilson2)
  • [Models] Fixes for LangChain models that are logged as code (#11494, #11436 @sunishsheth2009)

MLflow 2.11.2

20 Mar 02:14
c4bf4a9

MLflow 2.11.2 is a patch release that introduces corrections for the support of custom transformer models, resolves LangChain integration problems, and includes several fixes to enhance stability.

New features:

  • [Models] Copy model metadata into the "metadata" subdirectory within the artifact store (#11357, @WeichenXu123)

Bug fixes:

  • [Security] Address an LFI exploit (#11376, @WeichenXu123)
  • [Models] Fix the transformers models implementation to allow custom model and component definitions to be loaded properly (#11412, #11428, @daniellok-db)
  • [Models] Fix the LangChain flavor implementation to support defining an MLflow model as code (#11370, @sunishsheth2009)
  • [Models] Fix LangChain VectorSearch parsing errors (#11438, @victorsun123)
  • [Models] Fix a LangChain import issue with the community package (#11450, @sunishsheth2009)
  • [Models] Fix serialization errors with RunnableAssign in the LangChain flavor (#11358, @serena-ruan)
  • [Models] Address import issues with LangChain community for Databricks models (#11350, @liangz1)
  • [Registry] Fix model metadata sharing within Databricks Unity Catalog (#11392, @WeichenXu123)

Small bug fixes and documentation updates:

#11321, #11323, @michael-berk; #11326, #11455, @B-Step62; #11333, @cdancette; #11373, @es94129; #11429, @BenWilson2; #11413, @GuyAglionby; #11338, #11339, #11355, #11432, #11441, @daniellok-db; #11380, #11381, #11383, #11394, @WeichenXu123; #11446, @harupy;

MLflow 2.11.1

06 Mar 08:35
abad05e

MLflow 2.11.1 is a patch release, containing fixes for some Databricks integrations and various other issues.

Small bug fixes and documentation updates:

#11336, #11335, @harupy; #11303, @B-Step62; #11319, @BenWilson2; #11306, @daniellok-db

MLflow 2.11.0

02 Mar 01:21

MLflow 2.11.0 includes several major features and improvements

With the MLflow 2.11.0 release, we're excited to bring a series of large and impactful features that span both GenAI and Deep Learning use cases.

  • The MLflow Tracking UI got an overhaul to better support the review and comparison of training runs for Deep Learning workloads. From grouping to large-scale metric plotting throughout the iterations of a DL model's training cycle, there are a large number of quality-of-life improvements to enhance your Deep Learning MLOps workflow.

  • Support for the popular PEFT library from HuggingFace is now available in the mlflow.transformers flavor. In addition to PEFT support, we've removed the restrictions on Pipeline types that can be logged to MLflow and added the ability to log a transformers pipeline without copying foundational model weights while developing and testing models. These enhancements strive to make the transformers flavor more useful for cutting-edge GenAI models and new pipeline types, to simplify prompt engineering and fine-tuning, and to make iterative development faster and cheaper. Give the updated flavor a try today! (#11240, @B-Step62)

  • We've added support for automatic model weights checkpointing (including resumption from a previous state) to the autologging implementations in both the PyTorch and TensorFlow flavors. This highly requested feature lets you configure long-running Deep Learning training runs to safely store your best epoch, eliminating the risk that a failure late in training loses the state of the model optimization. (#11197, #10935, @WeichenXu123)

  • We've added a new interface to Pyfunc for GenAI workloads. The new ChatModel interface allows for interacting with a deployed GenAI chat model as you would with any other provider. The simplified interface (no longer requiring conformance to a Pandas DataFrame input type) strives to unify the API interface experience. (#10820, @daniellok-db)

  • We now support Keras 3. This large overhaul of the Keras library introduced fundamental changes to how Keras integrates with different DL frameworks, bringing with it a host of new functionality and interoperability. To learn more, see the Keras 3.0 Tutorial and start using the updated model flavor today! (#10830, @chenmoneygithub)

  • Mistral AI has been added as a native provider for the MLflow Deployments Server. You can now create proxied connections to the Mistral AI services for completions and embeddings with their powerful GenAI models. (#11020, @thnguyendn)

  • We've added compatibility support for the OpenAI 1.x SDK. Whether you're using an OpenAI LLM for model evaluation or calling OpenAI within a LangChain model, you'll now be able to utilize the 1.x family of the OpenAI SDK without having to point to deprecated legacy APIs. (#11123, @harupy)

Features:

  • [UI] Revamp the MLflow Tracking UI for Deep Learning workflows, offering a more intuitive and efficient user experience (#11233, @daniellok-db)
  • [Data] Introduce the ability to log datasets without loading them into memory, optimizing resource usage and processing time (#11172, @chenmoneygithub)
  • [Models] Introduce logging frequency controls for TensorFlow, aligning it with Keras for consistent performance monitoring (#11094, @chenmoneygithub)
  • [Models] Add PySpark DataFrame support in mlflow.pyfunc.predict, enhancing data compatibility and analysis options for batch inference (#10939, @ernestwong-db)
  • [Models] Introduce new CLI commands for updating model requirements, facilitating easier maintenance, validation and updating of models without having to re-log (#11061, @daniellok-db)
  • [Models] Update Embedding API for sentence transformers to ensure compatibility with OpenAI format, broadening model application scopes (#11019, @lu-wang-dl)
  • [Models] Improve input and signature support for text-generation models, optimizing for Chat and Completions tasks (#11027, @es94129)
  • [Models] Enable chat and completions task outputs in the text-generation pipeline, expanding interactive capabilities (#10872, @es94129)
  • [Tracking] Add node id to system metrics for enhanced logging in multi-node setups, improving diagnostics and monitoring (#11021, @chenmoneygithub)
  • [Tracking] Implement mlflow.config.enable_async_logging for asynchronous logging, improving log handling and system performance; see the example following this list (#11138, @chenmoneygithub)
  • [Evaluate] Enhance model evaluation with endpoint URL support, streamlining performance assessments and integrations (#11262, @B-Step62)
  • [Deployments] Implement chat & chat streaming support for Cohere, enhancing interactive model deployment capabilities (#10976, @gabrielfu)
  • [Deployments] Enable Cohere streaming support, allowing real-time interaction functionalities for the MLflow Deployments server with the Cohere provider (#10856, @gabrielfu)
  • [Docker / Scoring] Optimize Docker images for model serving, ensuring more efficient deployment and scalability (#10954, @B-Step62)
  • [Scoring] Support completions (prompt) and embeddings (input) format inputs in the scoring server, increasing model interaction flexibility (#10958, @es94129)
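
A minimal example of the asynchronous logging configuration (the metric name and values are illustrative):

    import mlflow

    # Queue metric/param/tag writes asynchronously instead of blocking on each call.
    mlflow.config.enable_async_logging(True)

    with mlflow.start_run():
        for step in range(100):
            mlflow.log_metric("loss", 1.0 / (step + 1), step=step)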

Bug fixes:

  • [Model Registry] Fix an oversight where the default credential file was not utilized in model registry setups (#11261, @B-Step62)
  • [Model Registry] Address the visibility issue of aliases in the model versions table within the registered model detail page (#11223, @smurching)
  • [Models] Ensure load_context() is called when enforcing ChatModel outputs so that all required external references are included in the model object instance (#11150, @daniellok-db)
  • [Models] Rectify the keras output dtype in signature mismatches, ensuring data consistency and accuracy (#11230, @chenmoneygithub)
  • [Models] Resolve spark model loading failures, enhancing model reliability and accessibility (#11227, @WeichenXu123)
  • [Models] Eliminate false warnings for missing signatures in Databricks, improving the user experience and model validation processes (#11181, @B-Step62)
  • [Models] Implement a timeout for signature/requirement inference during Transformer model logging, optimizing the logging process and avoiding delays (#11037, @B-Step62)
  • [Models] Address the missing dtype issue for transformer pipelines, ensuring data integrity and model accuracy (#10979, @B-Step62)
  • [Models] Correct non-idempotent predictions due to in-place updates to model-config, stabilizing model outputs (#11014, @B-Step62)
  • [Models] Fix an issue where specifying torch.dtype as a string was not being applied correctly to the underlying transformers model (#11297, #11295, @harupy)
  • [Tracking] Fix mlflow.evaluate col_mapping bug for non-LLM/custom metrics, ensuring accurate evaluation and metric calculation (#11156, @sunishsheth2009)
  • [Tracking] Resolve the TensorInfo TypeError exception message issue, ensuring clarity and accuracy in error reporting for users (#10953, @leecs0503)
  • [Tracking] Enhance RestException objects to be picklable, improving their usability in distributed computing scenarios where serialization is essential (#10936, @WeichenXu123)
  • [Tracking] Address the handling of unrecognized response error codes, ensuring robust error processing and improved user feedback in edge cases (#10918, @chenmoneygithub)
  • [Spark] Update hardcoded io.delta:delta-spark_2.12:3.0.0 dependency to the correct scala version, aligning dependencies with project requirements (#11149, @WeichenXu123)
  • [Server-infra] Adapt to newer versions of python by avoiding importlib.metadata.entry_points().get, enhancing compatibility and stability (#10752, @raphaelauv)
  • [Server-infra / Tracking] Introduce an environment variable to disable mlflow configuring logging on import, improving configurability and user control (#11137, @jmahlik)
  • [Auth] Enhance auth validation for mlflow.login(), streamlining the authentication process and improving security (#11039, @chenmoneygithub)

Documentation updates:

  • [Docs] Introduce a comprehensive notebook demonstrating the use of ChatModel with Transformers and Pyfunc, providing users with practical insights and guidelines for leveraging these models (#11239, @daniellok-db)
  • [Tracking / Docs] Stabilize the dataset logging APIs, removing the experimental status (#11229, @dbczumar)
  • [Docs] Revise and update the documentation on authentication database configuration, offering clearer instructions and better support for setting up secure authentication mechanisms (#11176, @gabrielfu)
  • [Docs] Publish a new guide and tutorial for MLflow data logging and log_input, enriching the documentation with actionable advice and examples for effective data handling (#10956, @BenWilson2)
  • [Docs] Upgrade the documentation visuals by replacing low-resolution and poorly dithered GIFs with high-quality HTML5 videos, significantly enhancing the learning experience (#11051, @BenWilson2)
  • [Docs / Examples] Correct the compatibility matrix for OpenAI in MLflow Deployments Server documentation, providing users with accurate integration details and supporting smoother deployments (#11015, @BenWilson2)

Small bug fixes and documentation updates:

#11284, #11096, #11285, #11245, #11254, #11252, #11250, #11249, #11234, #11248, #11242, #11244, #11236, #11208, #11220, #11222, #11221, #11219, #1...

MLflow 2.10.2

09 Feb 12:58
d77cc7a

MLflow 2.10.2 is a patch release.

Small bug fixes and documentation updates:

#11065, @WeichenXu123

MLflow 2.10.1

06 Feb 12:13

MLflow 2.10.1 is a patch release, containing fixes for various bugs in the transformers and langchain flavors, the MLflow UI, and the S3 artifact store. More details can be found in the patch notes below.

Bug fixes:

  • [UI] Fixed a bug that prevented datasets from showing up in the MLflow UI (#10992, @daniellok-db)
  • [Artifact Store] Fixed directory bucket region name retrieval (#10967, @kriscon-db)
  • Bug fixes for Transformers flavor
    • [Models] Fix an issue with transformer pipelines not inheriting the torch dtype specified on the model, causing pipeline inference to consume more resources than expected. (#10979, @B-Step62)
    • [Models] Fix non-idempotent prediction due to in-place update to model-config (#11014, @B-Step62)
    • [Models] Fixed a bug affecting prompt templating with Text2TextGeneration pipelines. Previously, calling predict() on a pyfunc-loaded Text2TextGeneration pipeline would fail for string and List[string] inputs. (#10960, @B-Step62)
  • Bug fixes for Langchain flavor
    • Fixed errors that occur when logging inputs and outputs with different lengths (#10952, @serena-ruan)

Small bug fixes and documentation updates:

#10930, #11005, @serena-ruan; #10927, @harupy

MLflow 2.10.0

26 Jan 07:18

In MLflow 2.10, we're introducing a number of significant new features that prepare the way for current and future enhanced support for Deep Learning use cases, broaden support for GenAI applications, and add some quality-of-life improvements for the MLflow Deployments Server (formerly the AI Gateway).

New MLflow Website

We have a new home. The new site landing page is fresh, modern, and contains more content than ever. We're adding new content and blogs all of the time.

Model Signature Supports Objects and Arrays (#9936, @serena-ruan)

Objects and Arrays are now available as configurable input and output schema elements. These new types are particularly useful for GenAI-focused flavors that can have complex input and output types. See the new Signature and Input Example documentation to learn more about how to use these new signature types.
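
A short example of inferring a signature that exercises the new Object and Array types (the chat-style payload and the output string are illustrative):

    from mlflow.models import infer_signature

    input_example = {
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "What is MLflow?"},
        ]
    }

    # "messages" is inferred as an Array of Objects with string properties.
    signature = infer_signature(model_input=input_example, model_output="MLflow is an MLOps platform.")
    print(signature)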

Langchain Autologging (#10801, @serena-ruan)

LangChain has autologging support now! With autologging enabled, invoking a chain automatically logs most chain implementations, recording and storing your configured LLM application for you. See the new Langchain documentation to learn more about how to use this feature.
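
Enabling it is a one-liner; a minimal sketch (the commented chain invocation stands in for whatever chain your application already defines):

    import mlflow

    # Turn on LangChain autologging once per process; subsequent chain invocations
    # are recorded to the active experiment automatically.
    mlflow.langchain.autolog()

    # chain = ...  # build your chain as usual
    # chain.invoke({"question": "What is MLflow?"})  # this call is logged automatically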

Prompt Templating for Transformers Models (#10791, @daniellok-db)

The MLflow transformers flavor now supports prompt templates. You can now specify an application-specific set of instructions to submit to your GenAI pipeline, making it simple to streamline and integrate system prompts supplied with each input request. Check out the updated guide to transformers to learn more and see examples!
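
A hedged sketch of logging a pipeline with a prompt template (assumes the flavor's prompt_template argument with a single {prompt} placeholder, as described in the transformers guide; the checkpoint and template text are illustrative):

    import mlflow
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    with mlflow.start_run():
        info = mlflow.transformers.log_model(
            transformers_model=generator,
            artifact_path="assistant",
            # Applied to every request when the model is loaded and served via pyfunc.
            prompt_template="Answer the question concisely.\nQ: {prompt}\nA:",
        )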

MLflow Deployments Server Enhancement (#10765, @gabrielfu; #10779, @TomeHirata)

The MLflow Deployments Server now supports two new requested features: (1) OpenAI endpoints that support streaming responses. You can now configure an endpoint to return real-time responses for Chat and Completions instead of waiting for the entire text contents to be completed. (2) Rate limits can now be set per endpoint in order to help control cost overruns when using SaaS models.

Further Document Improvements

We've continued the push for enhanced documentation, guides, tutorials, and examples by expanding coverage of core MLflow functionality (Deployments, Signatures, and Model Dependency management) and adding entirely new pages for GenAI flavors. Check them out today!

Other Features:

  • [Models] Enhance the MLflow Models predict API to serve as a pre-logging validator of environment compatibility. (#10759, @B-Step62)
  • [Models] Add support for Image Classification pipelines within the transformers flavor (#10538, @KonakanchiSwathi)
  • [Models] Add support for retrieving and storing license files for transformers models (#10871, @BenWilson2)
  • [Models] Add support for model serialization in the Visual NLP format for JohnSnowLabs flavor (#10603, @C-K-Loan)
  • [Models] Automatically convert OpenAI input messages to LangChain chat messages for pyfunc predict (#10758, @dbczumar)
  • [Tracking] Enhance async logging functionality by ensuring flush is called on Futures objects (#10715, @chenmoneygithub)
  • [Tracking] Add support for a non-interactive mode for the login() API (#10623, @henxing)
  • [Scoring] Allow MLflow model serving to support direct dict inputs with the messages key (#10742, @daniellok-db, @B-Step62)
  • [Deployments] Add streaming support to the MLflow Deployments Server for OpenAI streaming return compatible routes (#10765, @gabrielfu)
  • [Deployments] Add support for directly interfacing with OpenAI via the MLflow Deployments server (#10473, @prithvikannan)
  • [UI] Introduce a number of new features for the MLflow UI (#10864, @daniellok-db)
  • [Server-infra] Add an environment variable that can disallow HTTP redirects (#10655, @daniellok-db)
  • [Artifacts] Add support for Multipart Upload for Azure Blob Storage (#10531, @gabrielfu)

Bug fixes:

  • [Models] Add deduplication logic for pip requirements and extras handling for MLflow models (#10778, @BenWilson2)
  • [Models] Add support for paddle 2.6.0 release (#10757, @WeichenXu123)
  • [Tracking] Fix an issue with an incorrect retry default timeout for urllib3 1.x (#10839, @BenWilson2)
  • [Recipes] Fix an issue with MLflow Recipes card display format (#10893, @WeichenXu123)
  • [Java] Fix an issue with metadata collection when using Streaming Sources on certain versions of Spark where Delta is the source (#10729, @daniellok-db)
  • [Scoring] Fix an issue where SageMaker tags were not propagating correctly (#9310, @clarkh-ncino)
  • [Windows / Databricks] Fix an issue with executing Databricks run commands from within a Windows environment (#10811, @wolpl)
  • [Models / Databricks] Disable mlflowdbfs mounts for JohnSnowLabs flavor due to flakiness (#9872, @C-K-Loan)

Small bug fixes and documentation updates:

#10538, #10901, #10903, #10876, #10833, #10859, #10867, #10843, #10857, #10834, #10814, #10805, #10764, #10771, #10733, #10724, #10703, #10710, #10696, #10691, #10692, @B-Step62; #10882, #10854, #10395, #10725, #10695, #10712, #10707, #10667, #10665, #10654, #10638, #10628, @harupy; #10881, #10875, #10835, #10845, #10844, #10651, #10806, #10786, #10785, #10781, #10741, #10772, #10727, @serena-ruan; #10873, #10755, #10750, #10749, #10619, @WeichenXu123; #10877, @amueller; #10852, @QuentinAmbard; #10822, #10858, @gabrielfu; #10862, @jerrylian-db; #10840, @ernestwong-db; #10841, #10795, #10792, #10774, #10776, #10672, @BenWilson2; #10827, #10826, #10825, #10732, #10481, @michael-berk; #10828, #10680, #10629, @daniellok-db; #10799, #10800, #10578, #10782, #10783, #10723, #10464, @annzhang-db; #10803, #10731, #10708, @kriscon-db; #10797, @dbczumar; #10756, #10751, @Ankit8848; #10784, @AveshCSingh; #10769, #10763, #10717, @chenmoneygithub; #10698, @rmalani-db; #10767, @liangz1; #10682, @cdreetz; #10659, @prithvikannan; #10639, #10609, @TomeHirata