
Add Google Generative AI safety settings #117679

Merged
merged 10 commits into home-assistant:dev from google_ai_safety on May 25, 2024

Conversation

tronikos (Contributor) commented May 18, 2024

Breaking change

Proposed change

Support safety settings. Default to the old behavior of blocking most content.

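To illustrate the change, here is a minimal sketch (not the integration's actual code) of how per-category thresholds are handed to the google-generativeai client; the API key and model name are placeholders, and the shorthand category/threshold strings mirror the values that appear in the test snapshot reviewed below.

# Illustrative sketch only; option handling in the actual integration may differ.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

# "BLOCK_LOW_AND_ABOVE" reproduces the previous default of blocking most content.
safety_settings = {
    "HARASSMENT": "BLOCK_LOW_AND_ABOVE",
    "HATE": "BLOCK_LOW_AND_ABOVE",
    "SEXUAL": "BLOCK_LOW_AND_ABOVE",
    "DANGEROUS": "BLOCK_LOW_AND_ABOVE",
}

model = genai.GenerativeModel(
    model_name="gemini-pro",  # placeholder model name
    safety_settings=safety_settings,
)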

Type of change

  • Dependency upgrade
  • Bugfix (non-breaking change which fixes an issue)
  • New integration (thank you!)
  • New feature (which adds functionality to an existing integration)
  • Deprecation (breaking change to happen in the future)
  • Breaking change (fix/feature causing existing functionality to break)
  • Code quality improvements to existing code or addition of tests

Additional information

Checklist

  • The code change is tested and works locally.
  • Local tests pass. Your PR cannot be merged unless tests pass
  • There is no commented out code in this PR.
  • I have followed the development checklist
  • I have followed the perfect PR recommendations
  • The code has been formatted using Ruff (ruff format homeassistant tests)
  • Tests have been added to verify that the new code works.

If user-exposed functionality or configuration variables are added/changed:

If the code communicates with devices, web services, or third-party tools:

  • The manifest file has all fields filled out correctly.
    Updated and included derived files by running: python3 -m script.hassfest.
  • New or updated dependencies have been added to requirements_all.txt.
    Updated by running python3 -m script.gen_requirements_all.
  • For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
  • Untested files have been added to .coveragerc.

To help with the load of incoming pull requests:

balloob (Member) commented May 25, 2024

@coderabbitai review

coderabbitai bot commented May 25, 2024

Walkthrough

The recent updates to the google_generative_ai_conversation component in Home Assistant introduce new configuration constants for blocking thresholds related to various types of harmful content, such as harassment, hate speech, sexual content, and dangerous content. These constants are integrated into the configuration schema and async processing functions, enhancing the safety settings available to users. Additionally, corresponding updates have been made to the localization strings and test cases to support these new features.

Changes

  • homeassistant/components/google_generative_ai_conversation/config_flow.py: Added new configuration constants for block thresholds and updated the schema to include these new options.
  • homeassistant/components/google_generative_ai_conversation/const.py: Introduced new configuration constants for harassment, hate, sexual, and dangerous block thresholds.
  • homeassistant/components/google_generative_ai_conversation/conversation.py: Integrated the new safety settings for harassment, hate, sexual, and dangerous block thresholds into async processing.
  • homeassistant/components/google_generative_ai_conversation/strings.json: Updated localization strings to include descriptions for the new block thresholds.
  • tests/components/google_generative_ai_conversation/snapshots/test_conversation.ambr: Added safety settings for various block thresholds in conversation test cases.
  • tests/components/google_generative_ai_conversation/test_config_flow.py: Added constants and recommended values for block thresholds in test configurations.
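To make the const.py summary concrete, the new options plausibly look something like the sketch below; the CONF_* names are assumptions, while the RECOMMENDED_HARM_BLOCK_THRESHOLD value of "BLOCK_LOW_AND_ABOVE" is confirmed later in this review.

# Sketch of the new constants; exact CONF_* names are assumptions.
CONF_HARASSMENT_BLOCK_THRESHOLD = "harassment_block_threshold"
CONF_HATE_BLOCK_THRESHOLD = "hate_block_threshold"
CONF_SEXUAL_BLOCK_THRESHOLD = "sexual_block_threshold"
CONF_DANGEROUS_BLOCK_THRESHOLD = "dangerous_block_threshold"

# Default that preserves the previous behavior of blocking most content.
RECOMMENDED_HARM_BLOCK_THRESHOLD = "BLOCK_LOW_AND_ABOVE"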

Tip

New Features and Improvements

Review Settings

Introduced new personality profiles for code reviews. Users can now select between "Chill" and "Assertive" review tones to tailor feedback styles according to their preferences. The "Assertive" profile posts more comments and nitpicks the code more aggressively, while the "Chill" profile is more relaxed and posts fewer comments.

AST-based Instructions

CodeRabbit supports customizing reviews based on Abstract Syntax Tree (AST) pattern matching. Read more about AST-based instructions in the documentation.

Community-driven AST-based Rules

We are kicking off a community-driven initiative to create and share AST-based rules. Users can now contribute their AST-based rules to detect security vulnerabilities, code smells, and anti-patterns. Please see the ast-grep-essentials repository for more information.

New Static Analysis Tools

We are continually expanding our support for static analysis tools. We have added support for biome, hadolint, and ast-grep. Update the settings in your .coderabbit.yaml file or head over to the settings page to enable or disable the tools you want to use.

Tone Settings

Users can now customize CodeRabbit to review code in the style of their favorite characters or personalities. Here are some of our favorite examples:

  • Mr. T: "You must talk like Mr. T in all your code reviews. I pity the fool who doesn't!"
  • Pirate: "Arr, matey! Ye must talk like a pirate in all yer code reviews. Yarrr!"
  • Snarky: "You must be snarky in all your code reviews. Snark, snark, snark!"

Revamped Settings Page

We have redesigned the settings page for a more intuitive layout, enabling users to find and adjust settings quickly. This change was long overdue; it not only improves the user experience but also allows our development team to add more settings in the future with ease. Going forward, the changes to .coderabbit.yaml will be reflected in the settings page, and vice versa.

Miscellaneous

  • Turn off free summarization: You can switch off free summarization of PRs opened by users not on a paid plan using the enable_free_tier setting.
  • Knowledge-base scope: You can now set the scope of the knowledge base to either the repository (local) or the organization (global) level using the knowledge_base setting. In addition, you can specify Jira project keys and Linear team keys to limit the knowledge base scope for those integrations.
  • High-level summary placement: You can now customize the location of the high-level summary in the PR description using the high_level_summary_placeholder setting (default @coderabbitai summary).
  • Revamped request changes workflow: You can now configure CodeRabbit to auto-approve or request changes on PRs based on the review feedback using the request_changes_workflow setting.

Recent Review Details

Configuration used: CodeRabbit UI
Review profile: CHILL

Commits: files that changed from the base of the PR and between cf73a47 and 36d24b1.
Files selected for processing (6)
  • homeassistant/components/google_generative_ai_conversation/config_flow.py (3 hunks)
  • homeassistant/components/google_generative_ai_conversation/const.py (1 hunks)
  • homeassistant/components/google_generative_ai_conversation/conversation.py (2 hunks)
  • homeassistant/components/google_generative_ai_conversation/strings.json (1 hunks)
  • tests/components/google_generative_ai_conversation/snapshots/test_conversation.ambr (4 hunks)
  • tests/components/google_generative_ai_conversation/test_config_flow.py (2 hunks)
Additional comments not posted (5)
homeassistant/components/google_generative_ai_conversation/const.py (1)

21-25: Constants for block thresholds are well-defined and follow naming conventions.

Verification successful

The RECOMMENDED_HARM_BLOCK_THRESHOLD constant is consistently used across the codebase in various files, including tests and configuration flows. This indicates that the value "BLOCK_LOW_AND_ABOVE" is appropriately integrated and utilized.

  • tests/components/google_generative_ai_conversation/test_config_flow.py
  • homeassistant/components/google_generative_ai_conversation/const.py
  • homeassistant/components/google_generative_ai_conversation/conversation.py
  • homeassistant/components/google_generative_ai_conversation/config_flow.py
Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Verify the appropriateness of the RECOMMENDED_HARM_BLOCK_THRESHOLD value.

# Test: Search for the usage of RECOMMENDED_HARM_BLOCK_THRESHOLD. Expect: Consistent usage across the codebase.
rg --type python $'RECOMMENDED_HARM_BLOCK_THRESHOLD'
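# Note: this invocation fails, since ripgrep's file-type name for Python is
# "py", not "python"; hence the short output below and the corrected rerun.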

Length of output: 82



Script:

#!/bin/bash
# Description: Verify the appropriateness of the RECOMMENDED_HARM_BLOCK_THRESHOLD value.

# Test: Search for the usage of RECOMMENDED_HARM_BLOCK_THRESHOLD. Expect: Consistent usage across the codebase.
rg --type py $'RECOMMENDED_HARM_BLOCK_THRESHOLD'

Length of output: 2272

homeassistant/components/google_generative_ai_conversation/strings.json (1)

28-32: The descriptions for the new block thresholds are clear and informative.

tests/components/google_generative_ai_conversation/test_config_flow.py (1)

12-18: The constants are correctly used in the tests, ensuring that the new configuration options are properly tested.

Also applies to: 24-24, 124-127

homeassistant/components/google_generative_ai_conversation/config_flow.py (1)

35-41: The integration of block thresholds into the configuration schema is well-implemented, providing clear options for users.

Also applies to: 48-48, 215-293
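As a rough illustration of what such a schema can look like in a Home Assistant options flow (a sketch under assumed option keys, not the PR's exact code), each threshold can be exposed as a dropdown built from the Gemini HarmBlockThreshold values:

import voluptuous as vol
from homeassistant.helpers.selector import (
    SelectSelector,
    SelectSelectorConfig,
    SelectSelectorMode,
)

# Gemini HarmBlockThreshold values offered to the user.
HARM_BLOCK_THRESHOLDS = [
    "BLOCK_NONE",
    "BLOCK_ONLY_HIGH",
    "BLOCK_MEDIUM_AND_ABOVE",
    "BLOCK_LOW_AND_ABOVE",
]

threshold_selector = SelectSelector(
    SelectSelectorConfig(mode=SelectSelectorMode.DROPDOWN, options=HARM_BLOCK_THRESHOLDS)
)

# Option keys below are assumptions; the PR's actual constants may differ.
SAFETY_OPTIONS_SCHEMA = vol.Schema(
    {
        vol.Optional("harassment_block_threshold", default="BLOCK_LOW_AND_ABOVE"): threshold_selector,
        vol.Optional("hate_block_threshold", default="BLOCK_LOW_AND_ABOVE"): threshold_selector,
        vol.Optional("sexual_block_threshold", default="BLOCK_LOW_AND_ABOVE"): threshold_selector,
        vol.Optional("dangerous_block_threshold", default="BLOCK_LOW_AND_ABOVE"): threshold_selector,
    }
)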

homeassistant/components/google_generative_ai_conversation/conversation.py (1)

25-30: The integration of block thresholds into the conversation processing is correctly implemented, ensuring that the conversation respects the specified safety settings.

Also applies to: 38-38, 176-189
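In outline, the options are folded into the safety_settings mapping handed to the client. This is a sketch under the assumed option keys used above; only the category keys and the default value are taken from this PR's snapshot and constants.

# Sketch of mapping config entry options to the client's safety_settings.
def build_safety_settings(options: dict) -> dict[str, str]:
    default = "BLOCK_LOW_AND_ABOVE"  # RECOMMENDED_HARM_BLOCK_THRESHOLD
    return {
        "HARASSMENT": options.get("harassment_block_threshold", default),
        "HATE": options.get("hate_block_threshold", default),
        "SEXUAL": options.get("sexual_block_threshold", default),
        "DANGEROUS": options.get("dangerous_block_threshold", default),
    }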


Thank you for using CodeRabbit. We offer it for free to the OSS community and would appreciate your support in helping us grow. If you find it useful, would you consider giving us a shout-out on your favorite social media?

Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>.
    • Generate unit testing code for this file.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai generate unit testing code for this file.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai generate interesting stats about this repository and render them as a table.
    • @coderabbitai show all the console.log statements in this repository.
    • @coderabbitai read src/utils.ts and generate unit testing code.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (invoked as PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger a review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai help to get help.

Additionally, you can add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

balloob (Member) commented May 25, 2024

@coderabbitai review

coderabbitai bot left a comment

Actionable comments posted: 1

Comment on lines +16 to +21
'safety_settings': dict({
  'DANGEROUS': 'BLOCK_LOW_AND_ABOVE',
  'HARASSMENT': 'BLOCK_LOW_AND_ABOVE',
  'HATE': 'BLOCK_LOW_AND_ABOVE',
  'SEXUAL': 'BLOCK_LOW_AND_ABOVE',
}),

Tip

Codebase Verification

The search results indicate that the snapshot tests only use the BLOCK_LOW_AND_ABOVE threshold. However, there are other thresholds like BLOCK_MEDIUM_AND_ABOVE defined in the codebase but not utilized in the tests.

  • Consider adding more varied test cases to cover different block thresholds for comprehensive testing.
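For instance, a small self-contained sketch of such a parametrized test (hypothetical, not part of this PR; build_safety_settings stands in for the integration's option-to-settings mapping) could cover every threshold:

import pytest

THRESHOLDS = [
    "BLOCK_NONE",
    "BLOCK_ONLY_HIGH",
    "BLOCK_MEDIUM_AND_ABOVE",
    "BLOCK_LOW_AND_ABOVE",
]

def build_safety_settings(threshold: str) -> dict[str, str]:
    """Stand-in for the integration's option-to-settings mapping."""
    return {c: threshold for c in ("DANGEROUS", "HARASSMENT", "HATE", "SEXUAL")}

@pytest.mark.parametrize("threshold", THRESHOLDS)
def test_every_threshold_is_forwarded(threshold: str) -> None:
    settings = build_safety_settings(threshold)
    assert set(settings.values()) == {threshold}
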
Analysis chain

The safety settings in the snapshot tests are consistent with the new block thresholds.

Also applies to: 66-71, 116-121, 169-174

Scripts Executed

The following scripts were executed for the analysis:


Script:

#!/bin/bash
# Description: Suggest adding more varied test cases for different block thresholds.

# Test: Search for the usage of different block thresholds in tests. Expect: Varied usage for comprehensive testing.
rg --type yaml $'safety_settings'

Length of output: 32



Script:

#!/bin/bash
# Description: Search for the usage of different block thresholds in the codebase to ensure comprehensive testing.

# Test: Search for the term 'BLOCK_LOW_AND_ABOVE' and other potential block thresholds in the codebase.
rg 'BLOCK_LOW_AND_ABOVE|BLOCK_MEDIUM_AND_ABOVE|BLOCK_HIGH_AND_ABOVE'

Length of output: 2480

balloob merged commit 620487f into home-assistant:dev on May 25, 2024
24 checks passed
github-actions bot locked and limited conversation to collaborators on May 26, 2024
tronikos deleted the google_ai_safety branch on May 28, 2024 at 23:40