retry on get_object_size #12231

GitHub Actions / JUnit Test Report failed Apr 27, 2024 in 0s

2998 tests run, 1678 passed, 1319 skipped, 1 failed.

Annotations

Check failure on line 222 in deeplake/api/tests/test_json.py

github-actions / JUnit Test Report

test_json.test_json_transform[lz4-gdrive_ds]

deeplake.util.exceptions.TransformError: Transform failed. See traceback for more details.
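To re-run just this failing parameterization locally, something like the following works (a sketch; it assumes a deeplake repo checkout and the Google Drive credentials backing the ``gdrive_ds`` fixture, and extra flags may be needed for slow-marked tests depending on the repo's pytest configuration):

    # Re-run only the failing case through pytest's Python API.
    # The node id below is taken from this report; credentials/fixture
    # setup for the gdrive backend is assumed to exist locally.
    import pytest

    pytest.main([
        "deeplake/api/tests/test_json.py::test_json_transform[lz4-gdrive_ds]",
        "-x",          # stop at the first failure
        "--tb=long",   # full chained traceback, as shown in this report
    ])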
Raw output
self = <deeplake.core.transform.transform.Pipeline object at 0x7f07abd123d0>
data_in = [{'x': [1, 2, 3], 'y': [4, [5, 6]]}, {'x': [1, 2, 3], 'y': [4, {'z': [0.1, 0.2, []]}]}, ['a', ['b', 'c'], {'d': 1.0}], [1.0, 2.0, 3.0, 4.0], ['a', 'b', 'c', 'd'], 1, ...]
ds_out = Dataset(path='gdrive://hubtest/tmpf4bb', tensors=['json'])
num_workers = 2, scheduler = 'threaded', progressbar = True, skip_ok = False
check_lengths = True, pad_data_in = False, read_only_ok = False, cache_size = 16
checkpoint_interval = 0, ignore_errors = False, verbose = True, kwargs = {}
overwrite = False
original_data_in = [{'x': [1, 2, 3], 'y': [4, [5, 6]]}, {'x': [1, 2, 3], 'y': [4, {'z': [0.1, 0.2, []]}]}, ['a', ['b', 'c'], {'d': 1.0}], [1.0, 2.0, 3.0, 4.0], ['a', 'b', 'c', 'd'], 1, ...]
initial_padding_state = None
target_ds = Dataset(path='gdrive://hubtest/tmpf4bb', tensors=['json'])
compute_provider = <deeplake.core.compute.thread.ThreadProvider object at 0x7f06b390bb90>
compute_id = '98f86175fc61433bb144e16be1bc6904', initial_autoflush = True

    def eval(
        self,
        data_in,
        ds_out: Optional[deeplake.Dataset] = None,
        num_workers: int = 0,
        scheduler: str = "threaded",
        progressbar: bool = True,
        skip_ok: bool = False,
        check_lengths: bool = True,
        pad_data_in: bool = False,
        read_only_ok: bool = False,
        cache_size: int = DEFAULT_TRANSFORM_SAMPLE_CACHE_SIZE,
        checkpoint_interval: int = 0,
        ignore_errors: bool = False,
        verbose: bool = True,
        **kwargs,
    ):
        """Evaluates the pipeline on ``data_in`` to produce an output dataset ``ds_out``.
    
        Args:
            data_in: Input passed to the transform to generate output dataset. Should support \__getitem__ and \__len__. Can be a Deep Lake dataset.
            ds_out (Dataset, optional): - The dataset object to which the transform output will be written. If this is not provided, ``data_in`` will be overwritten if it is a Deep Lake dataset; otherwise an error will be raised.
                - It should already have all keys generated in the output present as tensors. Its initial state should be either:
                - **Empty**, i.e., all tensors have no samples. In this case all samples are added to the dataset.
                - **All tensors are populated and have the same length.** In this case new samples are appended to the dataset.
            num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler.
            scheduler (str): The scheduler to be used to compute the transformation. Supported values include: 'serial', 'threaded', 'processed' and 'ray'.
                Defaults to 'threaded'.
            progressbar (bool): Displays a progress bar if ``True`` (default).
            skip_ok (bool): If ``True``, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition.
                This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to ``False``.
            check_lengths (bool): If ``True``, checks whether ``ds_out`` has tensors of same lengths initially.
            pad_data_in (bool): If ``True``, pads tensors of ``data_in`` to match the length of the largest tensor in ``data_in``.
                Defaults to ``False``.
            read_only_ok (bool): If ``True`` and output dataset is same as input dataset, the read-only check is skipped.
                Defaults to False.
            cache_size (int): Cache size to be used by transform per worker.
            checkpoint_interval (int): If > 0, the transform will be checkpointed with a commit every ``checkpoint_interval`` input samples to avoid restarting the full transform due to intermittent failures. If the transform is interrupted, the intermediate data is deleted and the dataset is reset to the last commit.
                If <= 0, no checkpointing is done. Checkpoint interval should be a multiple of num_workers if num_workers > 0. Defaults to 0.
            ignore_errors (bool): If ``True``, input samples that cause the transform to fail will be skipped and the errors will be ignored **if possible**.
            verbose (bool): If ``True``, prints additional information about the transform.
            **kwargs: Additional arguments.
    
        Raises:
            InvalidInputDataError: If ``data_in`` passed to the transform is invalid. It should support \__getitem__ and \__len__ operations. Using a scheduler other than "threaded" when ``data_in`` is a Deep Lake dataset whose base storage is memory will also raise this.
            InvalidOutputDatasetError: If the tensors of ``ds_out`` passed to the transform don't all have the same length. Using a scheduler other than "threaded" when ``ds_out`` is a Deep Lake dataset whose base storage is memory will also raise this.
            TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in 'ds_out' provided to transform.
            UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: 'serial', 'threaded', 'processed' and 'ray'.
            TransformError: All other exceptions raised if there are problems while running the pipeline.
            ValueError: If ``num_workers`` > 0 and ``checkpoint_interval`` is not a multiple of ``num_workers`` or if ``checkpoint_interval`` > 0 and ds_out is None.
    
    
        # noqa: DAR401
    
        Example::
    
            @deeplake.compute
            def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0):
                samples_out.my_tensor.append(my_arg0 * my_arg1)
    
            # This transform can be used via the eval method in one of these two ways:
    
            # Directly evaluating the method
            # here arg0 and arg1 correspond to the 3rd and 4th arguments of my_fn
            my_fn(arg0, arg1).eval(data_in, ds_out, scheduler="threaded", num_workers=5)
    
            # As a part of a Transform pipeline containing other functions
            pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)])
            pipeline.eval(data_in, ds_out, scheduler="processed", num_workers=2)
    
        Note:
            ``pad_data_in`` is only applicable if ``data_in`` is a Deep Lake dataset.
    
        """
        num_workers, scheduler = sanitize_workers_scheduler(num_workers, scheduler)
        overwrite = ds_out is None
        deeplake_reporter.feature_report(
            feature_name="eval",
            parameters={"Num_Workers": str(num_workers), "Scheduler": scheduler},
        )
        check_transform_data_in(data_in, scheduler)
    
        data_in, original_data_in, initial_padding_state = prepare_data_in(
            data_in, pad_data_in, overwrite
        )
        target_ds = data_in if overwrite else ds_out
    
        check_transform_ds_out(
            target_ds, scheduler, check_lengths, read_only_ok and overwrite
        )
    
        # if overwrite then we've already flushed and auto-checked-out data_in, which is now target_ds
        if not overwrite:
            target_ds.flush()
            auto_checkout(target_ds)
    
        compute_provider = get_compute_provider(scheduler, num_workers)
        compute_id = str(uuid4().hex)
        target_ds._send_compute_progress(compute_id=compute_id, start=True, progress=0)
    
        initial_autoflush = target_ds.storage.autoflush
        target_ds.storage.autoflush = False
    
        if not check_lengths or read_only_ok:
            skip_ok = True
    
        checkpointing_enabled = checkpoint_interval > 0
        total_samples = len_data_in(data_in)
        if checkpointing_enabled:
            check_checkpoint_interval(
                data_in,
                checkpoint_interval,
                num_workers,
                overwrite,
                verbose,
            )
            datas_in = [
                data_in[i : i + checkpoint_interval]
                for i in range(0, len_data_in(data_in), checkpoint_interval)
            ]
    
        else:
            datas_in = [data_in]
    
        samples_processed = 0
        desc = get_pbar_description(self.functions)
        if progressbar:
            pbar = get_progress_bar(len_data_in(data_in), desc)
            pqueue = compute_provider.create_queue()
        else:
            pbar, pqueue = None, None
        try:
            desc = desc.split()[1]
            completed = False
            progress = 0.0
            for data_in in datas_in:
                if checkpointing_enabled:
                    target_ds._commit(
                        f"Auto-commit during deeplake.compute of {desc} after {progress}% progress",
                        None,
                        False,
                        is_checkpoint=True,
                        total_samples_processed=samples_processed,
                    )
                progress = round(
                    (samples_processed + len_data_in(data_in)) / total_samples * 100, 2
                )
                end = progress == 100
                progress_args = {
                    "compute_id": compute_id,
                    "progress": progress,
                    "end": end,
                }
    
                try:
>                   self.run(
                        data_in,
                        target_ds,
                        compute_provider,
                        num_workers,
                        scheduler,
                        progressbar,
                        overwrite,
                        skip_ok,
                        read_only_ok and overwrite,
                        cache_size,
                        pbar,
                        pqueue,
                        ignore_errors,
                        **kwargs,
                    )

deeplake/core/transform/transform.py:288: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
deeplake/core/transform/transform.py:461: in run
    merge_all_meta_info(
deeplake/util/encoder.py:43: in merge_all_meta_info
    merge_all_tensor_metas(
deeplake/util/encoder.py:87: in merge_all_tensor_metas
    storage[meta_key] = tensor_meta.tobytes()  # type: ignore
deeplake/core/storage/google_drive.py:330: in __setitem__
    self._write_to_file(id, content)
deeplake/core/storage/google_drive.py:258: in _write_to_file
    file = self.drive.files().update(media_body=content, fileId=id).execute()
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/googleapiclient/_helpers.py:131: in positional_wrapper
    return wrapped(*args, **kwargs)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/googleapiclient/http.py:922: in execute
    resp, content = _retry_request(
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/googleapiclient/http.py:221: in _retry_request
    raise exception
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/googleapiclient/http.py:190: in _retry_request
    resp, content = http.request(uri, method, *args, **kwargs)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/google_auth_httplib2.py:218: in request
    response, content = self.http.request(
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/httplib2/__init__.py:1724: in request
    (response, content) = self._request(
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/httplib2/__init__.py:1444: in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/site-packages/httplib2/__init__.py:1396: in _conn_request
    response = conn.getresponse()
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/http/client.py:1395: in getresponse
    response.begin()
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/http/client.py:325: in begin
    version, status, reason = self._read_status()
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/http/client.py:286: in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/socket.py:706: in readinto
    return self._sock.recv_into(b)
/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/ssl.py:1314: in recv_into
    return self.read(nbytes, buffer)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <ssl.SSLSocket [closed] fd=-1, family=2, type=1, proto=6>, len = 8192
buffer = <memory at 0x7f07abb48280>

    def read(self, len=1024, buffer=None):
        """Read up to LEN bytes and return them.
        Return zero-length string on EOF."""
    
        self._checkClosed()
        if self._sslobj is None:
            raise ValueError("Read on closed or unwrapped SSL socket.")
        try:
            if buffer is not None:
>               return self._sslobj.read(len, buffer)
E               TimeoutError: The read operation timed out

/opt/hostedtoolcache/Python/3.11.9/x64/lib/python3.11/ssl.py:1166: TimeoutError

The above exception was the direct cause of the following exception:

ds = Dataset(path='gdrive://hubtest/tmpf4bb', tensors=['json'])
compression = 'lz4', scheduler = 'threaded'

    @enabled_non_gcs_datasets
    @pytest.mark.parametrize("compression", ["lz4", None])
    @pytest.mark.slow
    def test_json_transform(ds, compression, scheduler="threaded"):
        ds.create_tensor("json", htype="json", sample_compression=compression)
    
        items = [
            {"x": [1, 2, 3], "y": [4, [5, 6]]},
            {"x": [1, 2, 3], "y": [4, {"z": [0.1, 0.2, []]}]},
            ["a", ["b", "c"], {"d": 1.0}],
            [1.0, 2.0, 3.0, 4.0],
            ["a", "b", "c", "d"],
            1,
            5.0,
            True,
            False,
            None,
        ] * 5
    
        expected = [*items[:9], {}] * 5
    
        @deeplake.compute
        def upload(stuff, ds):
            ds.json.append(stuff)
            return ds
    
>       upload().eval(items, ds, num_workers=2, scheduler=scheduler)

deeplake/api/tests/test_json.py:222: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
deeplake/core/transform/transform.py:105: in eval
    pipeline.eval(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <deeplake.core.transform.transform.Pipeline object at 0x7f07abd123d0>
data_in = [{'x': [1, 2, 3], 'y': [4, [5, 6]]}, {'x': [1, 2, 3], 'y': [4, {'z': [0.1, 0.2, []]}]}, ['a', ['b', 'c'], {'d': 1.0}], [1.0, 2.0, 3.0, 4.0], ['a', 'b', 'c', 'd'], 1, ...]
ds_out = Dataset(path='gdrive://hubtest/tmpf4bb', tensors=['json'])
num_workers = 2, scheduler = 'threaded', progressbar = True, skip_ok = False
check_lengths = True, pad_data_in = False, read_only_ok = False, cache_size = 16
checkpoint_interval = 0, ignore_errors = False, verbose = True, kwargs = {}
overwrite = False
original_data_in = [{'x': [1, 2, 3], 'y': [4, [5, 6]]}, {'x': [1, 2, 3], 'y': [4, {'z': [0.1, 0.2, []]}]}, ['a', ['b', 'c'], {'d': 1.0}], [1.0, 2.0, 3.0, 4.0], ['a', 'b', 'c', 'd'], 1, ...]
initial_padding_state = None
target_ds = Dataset(path='gdrive://hubtest/tmpf4bb', tensors=['json'])
compute_provider = <deeplake.core.compute.thread.ThreadProvider object at 0x7f06b390bb90>
compute_id = '98f86175fc61433bb144e16be1bc6904', initial_autoflush = True

    def eval(
        self,
        data_in,
        ds_out: Optional[deeplake.Dataset] = None,
        num_workers: int = 0,
        scheduler: str = "threaded",
        progressbar: bool = True,
        skip_ok: bool = False,
        check_lengths: bool = True,
        pad_data_in: bool = False,
        read_only_ok: bool = False,
        cache_size: int = DEFAULT_TRANSFORM_SAMPLE_CACHE_SIZE,
        checkpoint_interval: int = 0,
        ignore_errors: bool = False,
        verbose: bool = True,
        **kwargs,
    ):
        """Evaluates the pipeline on ``data_in`` to produce an output dataset ``ds_out``.
    
        Args:
            data_in: Input passed to the transform to generate output dataset. Should support \__getitem__ and \__len__. Can be a Deep Lake dataset.
            ds_out (Dataset, optional): - The dataset object to which the transform output will be written. If this is not provided, ``data_in`` will be overwritten if it is a Deep Lake dataset; otherwise an error will be raised.
                - It should already have all keys generated in the output present as tensors. Its initial state should be either:
                - **Empty**, i.e., all tensors have no samples. In this case all samples are added to the dataset.
                - **All tensors are populated and have the same length.** In this case new samples are appended to the dataset.
            num_workers (int): The number of workers to use for performing the transform. Defaults to 0. When set to 0, it will always use serial processing, irrespective of the scheduler.
            scheduler (str): The scheduler to be used to compute the transformation. Supported values include: 'serial', 'threaded', 'processed' and 'ray'.
                Defaults to 'threaded'.
            progressbar (bool): Displays a progress bar if ``True`` (default).
            skip_ok (bool): If ``True``, skips the check for output tensors generated. This allows the user to skip certain tensors in the function definition.
                This is especially useful for inplace transformations in which certain tensors are not modified. Defaults to ``False``.
            check_lengths (bool): If ``True``, checks whether ``ds_out`` has tensors of same lengths initially.
            pad_data_in (bool): If ``True``, pads tensors of ``data_in`` to match the length of the largest tensor in ``data_in``.
                Defaults to ``False``.
            read_only_ok (bool): If ``True`` and output dataset is same as input dataset, the read-only check is skipped.
                Defaults to False.
            cache_size (int): Cache size to be used by transform per worker.
            checkpoint_interval (int): If > 0, the transform will be checkpointed with a commit every ``checkpoint_interval`` input samples to avoid restarting the full transform due to intermittent failures. If the transform is interrupted, the intermediate data is deleted and the dataset is reset to the last commit.
                If <= 0, no checkpointing is done. Checkpoint interval should be a multiple of num_workers if num_workers > 0. Defaults to 0.
            ignore_errors (bool): If ``True``, input samples that cause the transform to fail will be skipped and the errors will be ignored **if possible**.
            verbose (bool): If ``True``, prints additional information about the transform.
            **kwargs: Additional arguments.
    
        Raises:
            InvalidInputDataError: If ``data_in`` passed to the transform is invalid. It should support \__getitem__ and \__len__ operations. Using a scheduler other than "threaded" when ``data_in`` is a Deep Lake dataset whose base storage is memory will also raise this.
            InvalidOutputDatasetError: If the tensors of ``ds_out`` passed to the transform don't all have the same length. Using a scheduler other than "threaded" when ``ds_out`` is a Deep Lake dataset whose base storage is memory will also raise this.
            TensorMismatchError: If one or more of the outputs generated during transform contain different tensors than the ones present in 'ds_out' provided to transform.
            UnsupportedSchedulerError: If the scheduler passed is not recognized. Supported values include: 'serial', 'threaded', 'processed' and 'ray'.
            TransformError: All other exceptions raised if there are problems while running the pipeline.
            ValueError: If ``num_workers`` > 0 and ``checkpoint_interval`` is not a multiple of ``num_workers`` or if ``checkpoint_interval`` > 0 and ds_out is None.
    
    
        # noqa: DAR401
    
        Example::
    
            @deeplake.compute
            def my_fn(sample_in: Any, samples_out, my_arg0, my_arg1=0):
                samples_out.my_tensor.append(my_arg0 * my_arg1)
    
            # This transform can be used via the eval method in one of these two ways:
    
            # Directly evaluating the method
            # here arg0 and arg1 correspond to the 3rd and 4th arguments of my_fn
            my_fn(arg0, arg1).eval(data_in, ds_out, scheduler="threaded", num_workers=5)
    
            # As a part of a Transform pipeline containing other functions
            pipeline = deeplake.compose([my_fn(a, b), another_function(x=2)])
            pipeline.eval(data_in, ds_out, scheduler="processed", num_workers=2)
    
        Note:
            ``pad_data_in`` is only applicable if ``data_in`` is a Deep Lake dataset.
    
        """
        num_workers, scheduler = sanitize_workers_scheduler(num_workers, scheduler)
        overwrite = ds_out is None
        deeplake_reporter.feature_report(
            feature_name="eval",
            parameters={"Num_Workers": str(num_workers), "Scheduler": scheduler},
        )
        check_transform_data_in(data_in, scheduler)
    
        data_in, original_data_in, initial_padding_state = prepare_data_in(
            data_in, pad_data_in, overwrite
        )
        target_ds = data_in if overwrite else ds_out
    
        check_transform_ds_out(
            target_ds, scheduler, check_lengths, read_only_ok and overwrite
        )
    
        # if overwrite then we've already flushed and auto-checked-out data_in, which is now target_ds
        if not overwrite:
            target_ds.flush()
            auto_checkout(target_ds)
    
        compute_provider = get_compute_provider(scheduler, num_workers)
        compute_id = str(uuid4().hex)
        target_ds._send_compute_progress(compute_id=compute_id, start=True, progress=0)
    
        initial_autoflush = target_ds.storage.autoflush
        target_ds.storage.autoflush = False
    
        if not check_lengths or read_only_ok:
            skip_ok = True
    
        checkpointing_enabled = checkpoint_interval > 0
        total_samples = len_data_in(data_in)
        if checkpointing_enabled:
            check_checkpoint_interval(
                data_in,
                checkpoint_interval,
                num_workers,
                overwrite,
                verbose,
            )
            datas_in = [
                data_in[i : i + checkpoint_interval]
                for i in range(0, len_data_in(data_in), checkpoint_interval)
            ]
    
        else:
            datas_in = [data_in]
    
        samples_processed = 0
        desc = get_pbar_description(self.functions)
        if progressbar:
            pbar = get_progress_bar(len_data_in(data_in), desc)
            pqueue = compute_provider.create_queue()
        else:
            pbar, pqueue = None, None
        try:
            desc = desc.split()[1]
            completed = False
            progress = 0.0
            for data_in in datas_in:
                if checkpointing_enabled:
                    target_ds._commit(
                        f"Auto-commit during deeplake.compute of {desc} after {progress}% progress",
                        None,
                        False,
                        is_checkpoint=True,
                        total_samples_processed=samples_processed,
                    )
                progress = round(
                    (samples_processed + len_data_in(data_in)) / total_samples * 100, 2
                )
                end = progress == 100
                progress_args = {
                    "compute_id": compute_id,
                    "progress": progress,
                    "end": end,
                }
    
                try:
                    self.run(
                        data_in,
                        target_ds,
                        compute_provider,
                        num_workers,
                        scheduler,
                        progressbar,
                        overwrite,
                        skip_ok,
                        read_only_ok and overwrite,
                        cache_size,
                        pbar,
                        pqueue,
                        ignore_errors,
                        **kwargs,
                    )
                    target_ds._send_compute_progress(**progress_args, status="success")
                    samples_processed += len_data_in(data_in)
                    completed = end
                except Exception as e:
                    if checkpointing_enabled:
                        print(
                            "Transform failed. Resetting back to last committed checkpoint."
                        )
                        target_ds.reset(force=True)
                    target_ds._send_compute_progress(**progress_args, status="failed")
                    index, sample, suggest = None, None, False
                    if isinstance(e, TransformError):
                        index, sample, suggest = e.index, e.sample, e.suggest
                        if checkpointing_enabled and isinstance(index, int):
                            index = samples_processed + index
                        e = e.__cause__  # type: ignore
                    if isinstance(e, AllSamplesSkippedError):
                        raise e
>                   raise TransformError(
                        index=index,
                        sample=sample,
                        samples_processed=samples_processed,
                        suggest=suggest,
                    ) from e
E                   deeplake.util.exceptions.TransformError: Transform failed. See traceback for more details.

deeplake/core/transform/transform.py:322: TransformError
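
The underlying failure is a transient ``TimeoutError`` raised while ``_write_to_file`` pushes tensor metadata to the Google Drive backend; the transform then surfaces it as a ``TransformError``. The PR title suggests adding a retry on ``get_object_size``, which targets the same class of transient storage errors. As a minimal sketch of the retry-with-backoff pattern involved (not the deeplake implementation; ``TRANSIENT_ERRORS``, ``with_retries``, and the wrapped call are illustrative assumptions):

    import time
    from typing import Callable, Tuple, Type, TypeVar

    T = TypeVar("T")

    # Hypothetical set of exceptions worth retrying; a real implementation
    # would pick these based on the storage backend's transient failure modes.
    TRANSIENT_ERRORS: Tuple[Type[BaseException], ...] = (TimeoutError, ConnectionError)


    def with_retries(fn: Callable[[], T], attempts: int = 3, backoff: float = 1.0) -> T:
        """Call ``fn``, retrying with linear backoff on transient errors (sketch only)."""
        for attempt in range(1, attempts + 1):
            try:
                return fn()
            except TRANSIENT_ERRORS:
                if attempt == attempts:
                    raise  # out of retries; surface the error to the caller
                time.sleep(backoff * attempt)
        raise AssertionError("unreachable")


    # Hypothetical usage: wrap a flaky Google Drive metadata read or write.
    # size = with_retries(lambda: storage.get_object_size(key))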