Improve "Defaults to" by putting to end of arg in docstring and ensuring backticks have proper spacing #18945

4 changes: 2 additions & 2 deletions keras/src/callbacks/backup_and_restore.py
@@ -63,11 +63,11 @@ class BackupAndRestore(Callback):
When set to an integer, the callback saves the checkpoint every
`save_freq` batches. Set `save_freq=False` only if using
preemption checkpointing (i.e. with `save_before_preemption=True`).
-    delete_checkpoint: Boolean, defaults to `True`. This `BackupAndRestore`
+    delete_checkpoint: Boolean. This `BackupAndRestore`
callback works by saving a checkpoint to back up the training state.
If `delete_checkpoint=True`, the checkpoint will be deleted after
training is finished. Use `False` if you'd like to keep the checkpoint
-        for future usage.
+        for future usage. Defaults to `True`.
"""

def __init__(
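For context, a minimal sketch of the behavior this docstring describes (the backup path is illustrative, not part of this PR):

    import keras

    # delete_checkpoint defaults to True: the backup checkpoint is
    # deleted once training finishes successfully.
    backup = keras.callbacks.BackupAndRestore(backup_dir="/tmp/backup")

    # Pass delete_checkpoint=False to keep the checkpoint for future runs.
    backup = keras.callbacks.BackupAndRestore(
        backup_dir="/tmp/backup", delete_checkpoint=False
    )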
10 changes: 5 additions & 5 deletions keras/src/ops/math.py
@@ -40,7 +40,7 @@ def segment_sum(data, segment_ids, num_segments=None, sorted=False):
segments. If not specified, it is inferred from the maximum
value in `segment_ids`.
sorted: A boolean indicating whether `segment_ids` is sorted.
-        Defaults to`False`.
+        Defaults to `False`.

Returns:
A tensor containing the sum of segments, where each element
@@ -93,7 +93,7 @@ def segment_max(data, segment_ids, num_segments=None, sorted=False):
segments. If not specified, it is inferred from the maximum
value in `segment_ids`.
sorted: A boolean indicating whether `segment_ids` is sorted.
-        Defaults to`False`.
+        Defaults to `False`.

Returns:
A tensor containing the max of segments, where each element
@@ -141,7 +141,7 @@ def top_k(x, k, sorted=True):
x: Input tensor.
k: An integer representing the number of top elements to retrieve.
sorted: A boolean indicating whether to sort the output in
-        descending order. Defaults to`True`.
+        descending order. Defaults to `True`.

Returns:
A tuple containing two tensors. The first tensor contains the
@@ -225,9 +225,9 @@ def logsumexp(x, axis=None, keepdims=False):
x: Input tensor.
axis: An integer or a tuple of integers specifying the axis/axes
along which to compute the sum. If `None`, the sum is computed
-        over all elements. Defaults to`None`.
+        over all elements. Defaults to `None`.
keepdims: A boolean indicating whether to keep the dimensions of
-        the input tensor when computing the sum. Defaults to`False`.
+        the input tensor when computing the sum. Defaults to `False`.

Returns:
A tensor containing the logarithm of the sum of exponentials of
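To make the defaults in these hunks concrete, a quick sketch against the standard `keras.ops` API (outputs in comments are illustrative):

    import numpy as np
    from keras import ops

    data = np.array([1.0, 2.0, 3.0, 4.0])
    segment_ids = np.array([0, 0, 1, 1])

    # sorted defaults to False, so segment_ids need not be pre-sorted.
    ops.segment_sum(data, segment_ids, num_segments=2)  # [3., 7.]

    # sorted defaults to True: top_k returns values in descending order.
    values, indices = ops.top_k(np.array([1.0, 4.0, 2.0]), k=2)  # values: [4., 2.]

    # axis defaults to None: logsumexp reduces over all elements.
    ops.logsumexp(data)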
10 changes: 5 additions & 5 deletions keras/src/ops/nn.py
@@ -1298,8 +1298,8 @@ def one_hot(x, num_classes, axis=-1, dtype=None, sparse=False):
x: Integer tensor to be encoded. The shape can be
arbitrary, but the dtype should be integer.
num_classes: Number of classes for the one-hot encoding.
-        axis: Axis along which the encoding is performed. Defaults to
-            `-1`, which represents the last axis.
+        axis: Axis along which the encoding is performed.
+            `-1` represents the last axis. Defaults to `-1`.
dtype: (Optional) Data type of the output tensor. If not
provided, it defaults to the default data type of the backend.
sparse: Whether to return a sparse tensor; for backends that support
@@ -1377,7 +1377,7 @@ def binary_crossentropy(target, output, from_logits=False):
probabilities.
Set it to `True` if `output` represents logits; otherwise,
set it to `False` if `output` represents probabilities.
-            Defaults to`False`.
+            Defaults to `False`.

Returns:
Integer tensor: The computed binary cross-entropy loss between
@@ -1452,7 +1452,7 @@ def categorical_crossentropy(target, output, from_logits=False, axis=-1):
probabilities.
Set it to `True` if `output` represents logits; otherwise,
set it to `False` if `output` represents probabilities.
-            Defaults to`False`.
+            Defaults to `False`.
axis: (optional) The axis along which the categorical cross-entropy
is computed.
Defaults to `-1`, which corresponds to the last dimension of
@@ -1540,7 +1540,7 @@ class labels instead of one-hot encoded vectors. It measures the
or probabilities.
Set it to `True` if `output` represents logits; otherwise,
set it to `False` if `output` represents probabilities.
-            Defaults to`False`.
+            Defaults to `False`.
axis: (optional) The axis along which the sparse categorical
cross-entropy is computed.
Defaults to `-1`, which corresponds to the last dimension
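Likewise for the `keras/src/ops/nn.py` docstrings touched here, a brief illustrative sketch of the defaults:

    import numpy as np
    from keras import ops

    # axis defaults to -1: the one-hot dimension is appended last.
    ops.one_hot(np.array([0, 2]), num_classes=3)  # shape (2, 3)

    # from_logits defaults to False: `output` is read as probabilities.
    ops.binary_crossentropy(
        target=np.array([0.0, 1.0]),
        output=np.array([0.1, 0.9]),
    )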
4 changes: 2 additions & 2 deletions keras/src/ops/numpy.py
@@ -940,7 +940,7 @@ def argsort(x, axis=-1):

Args:
x: Input tensor.
-        axis: Axis along which to sort. Defaults to`-1` (the last axis). If
+        axis: Axis along which to sort. Defaults to `-1` (the last axis). If
`None`, the flattened tensor is used.

Returns:
@@ -4098,7 +4098,7 @@ def pad(x, pad_width, mode="constant", constant_values=None):
mode: One of `"constant"`, `"edge"`, `"linear_ramp"`,
`"maximum"`, `"mean"`, `"median"`, `"minimum"`,
`"reflect"`, `"symmetric"`, `"wrap"`, `"empty"`,
`"circular"`. Defaults to`"constant"`.
`"circular"`. Defaults to `"constant"`.
constant_values: value to pad with if `mode == "constant"`.
Defaults to `0`. A `ValueError` is raised if not None and
`mode != "constant"`.
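A sketch of the two `keras/src/ops/numpy.py` defaults mentioned above (outputs in comments are illustrative):

    import numpy as np
    from keras import ops

    x = np.array([[3, 1, 2], [6, 5, 4]])

    # axis defaults to -1, so each row is sorted independently.
    ops.argsort(x)  # [[1, 2, 0], [2, 1, 0]]

    # mode defaults to "constant", with constant_values defaulting to 0.
    ops.pad(x, pad_width=((0, 0), (1, 1)))  # zero-pads each row on both sides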
3 changes: 2 additions & 1 deletion keras/src/utils/audio_dataset_utils.py
@@ -82,8 +82,9 @@ def audio_dataset_from_directory(
length of the longest sequence in the batch.
ragged: Whether to return a Ragged dataset (where each sequence has its
own length). Defaults to `False`.
-    shuffle: Whether to shuffle the data. Defaults to `True`.
+    shuffle: Whether to shuffle the data.
If set to `False`, sorts the data in alphanumeric order.
+        Defaults to `True`.
seed: Optional random seed for shuffling and transformations.
validation_split: Optional float between 0 and 1, fraction of data to
reserve for validation.
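To see the `shuffle` default in use, a hedged usage sketch ("audio_dir" is a hypothetical directory with one subfolder per class):

    import keras

    # shuffle defaults to True; setting it to False yields files in
    # alphanumeric order, e.g. for reproducible evaluation.
    ds = keras.utils.audio_dataset_from_directory(
        "audio_dir",
        shuffle=False,
    )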
11 changes: 6 additions & 5 deletions keras/src/utils/image_dataset_utils.py
@@ -83,15 +83,15 @@ def image_dataset_from_directory(
(must match names of subdirectories). Used to control the order
of the classes (otherwise alphanumerical order is used).
color_mode: One of `"grayscale"`, `"rgb"`, `"rgba"`.
Defaults to `"rgb"`. Whether the images will be converted to
have 1, 3, or 4 channels.
Whether the images will be converted to
have 1, 3, or 4 channels. Defaults to `"rgb"`.
batch_size: Size of the batches of data. Defaults to 32.
If `None`, the data will not be batched
(the dataset will yield individual samples).
image_size: Size to resize images to after they are read from disk,
-        specified as `(height, width)`. Defaults to `(256, 256)`.
+        specified as `(height, width)`.
Since the pipeline processes batches of images that must all have
-        the same size, this must be provided.
+        the same size, this must be provided. Defaults to `(256, 256)`.
shuffle: Whether to shuffle the data. Defaults to `True`.
If set to `False`, sorts the data in alphanumeric order.
seed: Optional random seed for shuffling and transformations.
@@ -103,9 +103,10 @@
When `subset="both"`, the utility returns a tuple of two datasets
(the training and validation datasets respectively).
interpolation: String, the interpolation method used when
-        resizing images. Defaults to `"bilinear"`.
+        resizing images.
Supports `"bilinear"`, `"nearest"`, `"bicubic"`, `"area"`,
`"lanczos3"`, `"lanczos5"`, `"gaussian"`, `"mitchellcubic"`.
Defaults to `"bilinear"`.
follow_links: Whether to visit subdirectories pointed to by symlinks.
Defaults to `False`.
crop_to_aspect_ratio: If `True`, resize the images without aspect
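The defaults being reshuffled in this docstring, shown as an illustrative call ("images_dir" is hypothetical):

    import keras

    ds = keras.utils.image_dataset_from_directory(
        "images_dir",              # hypothetical: one subdirectory per class
        color_mode="rgb",          # default: images converted to 3 channels
        image_size=(256, 256),     # default: resizing is always applied
        interpolation="bilinear",  # default resize method
    )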
6 changes: 3 additions & 3 deletions keras/src/utils/image_utils.py
@@ -350,9 +350,9 @@ def smart_resize(
or `(batch_size, height, width, channels)`.
size: Tuple of `(height, width)` integer. Target size.
interpolation: String, interpolation to use for resizing.
-            Defaults to `'bilinear'`.
-            Supports `bilinear`, `nearest`, `bicubic`,
-            `lanczos3`, `lanczos5`.
+            Supports `"bilinear"`, `"nearest"`, `"bicubic"`,
+            `"lanczos3"`, `"lanczos5"`.
+            Defaults to `"bilinear"`.
data_format: `"channels_last"` or `"channels_first"`.
backend_module: Backend module to use (if different from the default
backend).
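For `smart_resize`, an illustrative sketch (assuming the `keras.preprocessing.image.smart_resize` export; verify the import path for your Keras version):

    import numpy as np
    from keras.preprocessing.image import smart_resize

    image = np.random.rand(300, 400, 3)

    # interpolation defaults to "bilinear"; the image is cropped to the
    # target aspect ratio before resizing, so it is never distorted.
    resized = smart_resize(image, size=(224, 224))  # shape (224, 224, 3)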
2 changes: 1 addition & 1 deletion keras/src/utils/sequence_utils.py
@@ -69,7 +69,7 @@ def pad_sequences(
truncating: String, "pre" or "post" (optional, defaults to `"pre"`):
remove values from sequences larger than
`maxlen`, either at the beginning or at the end of the sequences.
-        value: Float or String, padding value. (Optional, defaults to 0.)
+        value: Float or String, padding value. (Optional, defaults to `0.`)

Returns:
NumPy array with shape `(len(sequences), maxlen)`
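The `value` default in context, as a small sketch (the output shown in comments is illustrative):

    import keras

    sequences = [[1, 2, 3], [4, 5]]

    # padding and truncating default to "pre"; value defaults to 0.
    keras.utils.pad_sequences(sequences, maxlen=4)
    # -> [[0, 1, 2, 3],
    #     [0, 0, 4, 5]]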
6 changes: 4 additions & 2 deletions keras/src/utils/text_dataset_utils.py
@@ -72,13 +72,15 @@ def text_dataset_from_directory(
This is the explicit list of class names
(must match names of subdirectories). Used to control the order
of the classes (otherwise alphanumerical order is used).
-    batch_size: Size of the batches of data. Defaults to 32.
+    batch_size: Size of the batches of data.
If `None`, the data will not be batched
(the dataset will yield individual samples).
+        Defaults to `32`.
max_length: Maximum size of a text string. Texts longer than this will
be truncated to `max_length`.
-    shuffle: Whether to shuffle the data. Defaults to `True`.
+    shuffle: Whether to shuffle the data.
If set to `False`, sorts the data in alphanumeric order.
+        Defaults to `True`.
seed: Optional random seed for shuffling and transformations.
validation_split: Optional float between 0 and 1,
fraction of data to reserve for validation.
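Finally, the `batch_size` and `shuffle` defaults for text datasets, sketched with a hypothetical "texts_dir" (one subdirectory per class):

    import keras

    ds = keras.utils.text_dataset_from_directory(
        "texts_dir",
        batch_size=32,  # default; pass batch_size=None for unbatched samples
        shuffle=True,   # default; False sorts files in alphanumeric order
    )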