
Calls to create_dataset and get_table are timing out #34

Closed
tswast opened this issue Feb 7, 2020 · 12 comments
Assignees
Labels
api: bigquery (Issues related to the googleapis/python-bigquery API) · priority: p2 (Moderately-important priority; fix may not be included in next release) · status: awaiting information · type: bug (Error or flaw in code with unintended results or allowing sub-optimal usage patterns)

Comments

@tswast
Contributor

tswast commented Feb 7, 2020

We're getting frequent test failures in the pandas integration tests due to timed-out requests, possibly because googleapis/google-cloud-python#10219 added a non-None default timeout.

It seems 60 seconds is not enough for these calls (though it is strange that so many API requests are timing out).
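
As a possible workaround, an explicit, longer timeout can be passed per call. A minimal sketch (the dataset/table names and the 300-second value are placeholders, not recommendations):

from google.cloud import bigquery

client = bigquery.Client()
dataset_ref = bigquery.DatasetReference(client.project, "pydata_pandas_bq_testing_example")

# Passing timeout= (seconds) overrides the library's new non-None default
# for the underlying HTTP request.
dataset = client.create_dataset(bigquery.Dataset(dataset_ref), timeout=300)
table = client.get_table(dataset_ref.table("some_table"), timeout=300)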

Environment details

Context: googleapis/python-bigquery-pandas#309

Steps to reproduce

  1. Make frequent calls to create_dataset.
  2. Observe flaky tests.

Stack trace

        if new_retry.is_exhausted():
>           raise MaxRetryError(_pool, url, error or ResponseError(cause))
E           urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='bigquery.googleapis.com', port=443): Max retries exceeded with url: /bigquery/v2/projects/pandas-travis/datasets/pydata_pandas_bq_testing_dddqwbgrjz/tables/hrsctwrmva (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7ff87946e898>: Failed to establish a new connection: [Errno 110] Connection timed out',))
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/urllib3/util/retry.py:436: MaxRetryError
During handling of the above exception, another exception occurred:
self = <pandas.tests.io.test_gbq.TestToGBQIntegrationWithServiceAccountKeyPath object at 0x7ff879725160>
gbq_dataset = 'pydata_pandas_bq_testing_dddqwbgrjz.hrsctwrmva'
    def test_roundtrip(self, gbq_dataset):
        destination_table = gbq_dataset
    
        test_size = 20001
        df = make_mixed_dataframe_v2(test_size)
    
        df.to_gbq(
            destination_table,
            _get_project_id(),
            chunksize=None,
>           credentials=_get_credentials(),
        )
pandas/tests/io/test_gbq.py:185: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
pandas/core/frame.py:1551: in to_gbq
    credentials=credentials,
pandas/io/gbq.py:219: in to_gbq
    private_key=private_key,
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas_gbq/gbq.py:1202: in to_gbq
    if table.exists(table_id):
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/pandas_gbq/gbq.py:1312: in exists
    self.client.get_table(table_ref)
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/cloud/bigquery/client.py:679: in get_table
    retry, method="GET", path=table_ref.path, timeout=timeout
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/cloud/bigquery/client.py:556: in _call_api
    return call()
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/api_core/retry.py:286: in retry_wrapped_func
    on_error=on_error,
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/api_core/retry.py:184: in retry_target
    return target()
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/cloud/_http.py:419: in api_request
    timeout=timeout,
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/cloud/_http.py:277: in _make_request
    method, url, headers, data, target_object, timeout=timeout
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/cloud/_http.py:315: in _do_request
    url=url, method=method, headers=headers, data=data, timeout=timeout
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/google/auth/transport/requests.py:317: in request
    **kwargs
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/requests/sessions.py:533: in request
    resp = self.send(prep, **send_kwargs)
../../../miniconda3/envs/pandas-dev/lib/python3.6/site-packages/requests/sessions.py:646: in send
    r = adapter.send(request, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
product-auto-label bot added the "api: bigquery" label on Feb 7, 2020
@shollyman
Contributor

Do the tests favor a particular region? Are they always running in the US multi-region?

@tswast
Contributor Author

tswast commented Feb 7, 2020

Just the default US region. None of the to_gbq tests at https://github.com/pandas-dev/pandas/blob/8105a7e8282edba5c98138ac09ed2c3bec7823e6/pandas/tests/io/test_gbq.py explicitly set a region.
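
If we ever wanted to pin the tests to a specific location explicitly rather than relying on the default, it would look roughly like this (a sketch; the dataset name is a placeholder):

from google.cloud import bigquery

client = bigquery.Client()
dataset = bigquery.Dataset(
    bigquery.DatasetReference(client.project, "pydata_pandas_bq_testing_example")
)
# Set the location explicitly instead of relying on the default US multi-region.
dataset.location = "US"
client.create_dataset(dataset)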

yoshi-automation added the "triage me" and "🚨 This issue needs some love." labels on Feb 8, 2020
@meredithslota
Contributor

Assigning Seth, but if this is something @plamut could look at, let's discuss.

@plamut
Contributor

plamut commented Feb 25, 2020

I'm currently working on #40, which might be similar. Once that is done, we can re-check whether it also helped with this issue.

@meredithslota
Contributor

#40 seems closed now; @plamut can you take a look?

meredithslota assigned plamut and unassigned shollyman on Mar 13, 2020
meredithslota added the "type: bug" label and removed the "triage me" label on Mar 13, 2020
yoshi-automation added the "triage me" label on Mar 13, 2020
@plamut
Contributor

plamut commented Mar 13, 2020

@meredithslota Sure, will take a look at it early next week when I come back.

plamut added the "priority: p2" and "status: awaiting information" labels and removed the "🚨 This issue needs some love." and "triage me" labels on Mar 20, 2020
@plamut
Contributor

plamut commented Mar 20, 2020

Currently awaiting a response to a question posted under the related pandas-gbq ticket.

yoshi-automation added and then removed the "🚨 This issue needs some love." label on Jun 18, 2020
yoshi-automation added the "🚨 This issue needs some love." label on Aug 5, 2020
@meredithslota
Contributor

googleapis/python-bigquery-pandas#309 links to Travis failures, but I see a recent build passing (https://travis-ci.org/github/pandas-dev/pandas/builds/705824144) after re-enabling the pandas-gbq tests. My guess is that this work spanned three different repos and has been fixed by the changes @plamut made earlier. I'm cautiously closing this, but please reopen if you are still seeing issues.

meredithslota removed the "🚨 This issue needs some love." label on Aug 19, 2020
@drmario-gh

Hi,

Since January 17th, 2022, we have started experiencing ReadTimeout errors in the get_table method. They represent a very small portion of our streaming attempts: around 190 timeouts in 11.1 million messages. This does not create a real problem for us, but we are a bit worried by their sudden increase; they were extremely rare in the past (below one per week).

For context, this happens in a component that streams Pub/Sub messages into BigQuery and has been live for quite a long time (years). We are also concerned because we are using a rather old version of the BigQuery library (we are currently working towards updating it), and we are wondering whether there is some looming deprecation we don't know about.

We have seen that there has been some work going on around timeout values (#889), and we might be able to bump our google-cloud-bigquery version fairly easily: is there any nearby version we should prefer or avoid?

Thank you!

Environment details

  • OS type and version: Debian GNU/Linux 10 (buster)
  • Python version: Python 3.8.4
  • pip version: pip 20.1.1
  • google-cloud-bigquery version: 1.17.0

Full traceback of one of the timeouts:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 449, in _make_request
    six.raise_from(e, None)
  File "<string>", line 3, in raise_from
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 444, in _make_request
    httplib_response = conn.getresponse()
  File "/usr/local/lib/python3.8/site-packages/sentry_sdk/integrations/stdlib.py", line 102, in getresponse
    rv = real_getresponse(self, *args, **kwargs)
  File "/usr/local/lib/python3.8/http/client.py", line 1332, in getresponse
    response.begin()
  File "/usr/local/lib/python3.8/http/client.py", line 303, in begin
    version, status, reason = self._read_status()
  File "/usr/local/lib/python3.8/http/client.py", line 264, in _read_status
    line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
  File "/usr/local/lib/python3.8/socket.py", line 669, in readinto
    return self._sock.recv_into(b)
  File "/usr/local/lib/python3.8/ssl.py", line 1241, in recv_into
    return self.read(nbytes, buffer)
  File "/usr/local/lib/python3.8/ssl.py", line 1099, in read
    return self._sslobj.read(len, buffer)
socket.timeout: The read operation timed out

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 440, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 550, in increment
    raise six.reraise(type(error), error, _stacktrace)
  File "/usr/local/lib/python3.8/site-packages/urllib3/packages/six.py", line 770, in reraise
    raise value
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 451, in _make_request
    self._raise_timeout(err=e, url=url, timeout_value=read_timeout)
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 340, in _raise_timeout
    raise ReadTimeoutError(
urllib3.exceptions.ReadTimeoutError: HTTPSConnectionPool(host='www.googleapis.com', port=443): Read timed out. (read timeout=60)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/rele/subscription.py", line 112, in __call__
    res = self._subscription(data, **dict(message.attributes))
  File "/usr/local/lib/python3.8/site-packages/rele/subscription.py", line 76, in __call__
    return self._func(data, **kwargs)
  File "/app/src/data_collector/bigquery/subscribers.py", line 28, in write_data
    write(self.dataset_id, self.table, **write_kwargs)
  File "/app/src/data_collector/bigquery/writing.py", line 19, in write
    _writer.write(dataset_id, table_name, data, raise_exception, create_extra_columns,
  File "/app/src/data_collector/bigquery/client.py", line 23, in write
    table = self._get_table(dataset_id, table_name)
  File "/app/src/data_collector/bigquery/client.py", line 39, in _get_table
    return self._client.get_table(table_ref)
  File "/usr/local/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 561, in get_table
    api_response = self._call_api(retry, method="GET", path=table_ref.path)
  File "/usr/local/lib/python3.8/site-packages/google/cloud/bigquery/client.py", line 456, in _call_api
    return call()
  File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 281, in retry_wrapped_func
    return retry_target(
  File "/usr/local/lib/python3.8/site-packages/google/api_core/retry.py", line 184, in retry_target
    return target()
  File "/usr/local/lib/python3.8/site-packages/google/cloud/_http.py", line 427, in api_request
    response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/google/cloud/_http.py", line 291, in _make_request
    return self._do_request(
  File "/usr/local/lib/python3.8/site-packages/google/cloud/_http.py", line 329, in _do_request
    return self.http.request(
  File "/usr/local/lib/python3.8/site-packages/google/auth/transport/requests.py", line 448, in request
    response = super(AuthorizedSession, self).request(
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 532, in send
    raise ReadTimeout(e, request=request)
requests.exceptions.ReadTimeout: HTTPSConnectionPool(host='www.googleapis.com', port=443): Read timed out. (read timeout=60)

@tswast
Contributor Author

tswast commented Jan 20, 2022

@drmario-gh Retries for requests timeout errors weren't added until #896.

Even though this was added in 2.25.1, I recommend version 2.27.0 or later. #896 came with a default timeout that caused problems for some long-running queries.

https://github.com/googleapis/python-bigquery/blob/main/CHANGELOG.md#2270-2021-09-24
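
For anyone stuck on an older release in the meantime, here is a rough sketch of passing a custom retry that also treats transport-level timeouts as transient (this approximates, rather than reproduces, the newer default behavior; all names below are placeholders):

import requests
from google.api_core import exceptions, retry
from google.cloud import bigquery

# Retry the usual transient API errors plus requests-level connection/read timeouts.
_is_transient = retry.if_exception_type(
    exceptions.TooManyRequests,
    exceptions.InternalServerError,
    exceptions.ServiceUnavailable,
    requests.exceptions.ConnectionError,
    requests.exceptions.Timeout,
)

client = bigquery.Client()
custom_retry = retry.Retry(predicate=_is_transient, deadline=120.0)

table_ref = bigquery.TableReference.from_string("some-project.some_dataset.some_table")
table = client.get_table(table_ref, retry=custom_retry)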

@tswast
Contributor Author

tswast commented Jan 20, 2022

we are wondering if there is some looming deprecation we don't know about.

There is a v3 library coming very soon, but it only affects the "models" API along with some changes to default dtypes in the pandas connector. #1021
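
If the v3 changes are a concern while upgrading, a conservative requirements pin (just a sketch, not an official recommendation) could be:

# pick up the timeout/retry fixes but stay below the v3 line for now
google-cloud-bigquery>=2.27.0,<3.0.0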

@FBosler

FBosler commented Oct 18, 2022

Hi,

> Since January 17th, 2022, we have started experiencing ReadTimeout errors in the get_table method. They represent a very small portion of our streaming attempts: around 190 timeouts in 11.1 million messages. This does not create a real problem for us, but we are a bit worried by their sudden increase; they were extremely rare in the past (below one per week).
>
> For context, this happens in a component that streams Pub/Sub messages into BigQuery and has been live for quite a long time (years). We are also concerned because we are using a rather old version of the BigQuery library (we are currently working towards updating it), and we are wondering whether there is some looming deprecation we don't know about.

@drmario-gh We're seeing the exact same behaviour right now. We have an ETL project that has been live for years and relies on get_table to verify whether a table has been created. As I am writing this, the get_table calls are just not finishing anymore (we have been waiting for minutes); typically these calls would only take seconds.

We are currently using google-cloud-bigquery==2.12.0.

Did you ultimately figure out what was happening?
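
For reference, a minimal sketch of bounding such a call so it raises quickly instead of hanging indefinitely (all names and the 30/120-second values are placeholders):

from google.cloud import bigquery
from google.cloud.bigquery.retry import DEFAULT_RETRY

client = bigquery.Client()
table_ref = bigquery.TableReference.from_string("some-project.some_dataset.some_table")

# timeout= bounds each underlying HTTP request; the retry deadline bounds
# the total time spent retrying, so a stuck call fails instead of hanging.
table = client.get_table(
    table_ref,
    timeout=30,
    retry=DEFAULT_RETRY.with_deadline(120),
)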
