
Reindex error: Error: Request Timeout after 30000ms (elasticsearch 6.0.1) #36

Open
Walidzm1 opened this issue Dec 15, 2017 · 3 comments

@Walidzm1

I need help! I'm using Elasticsearch 6.0.1 and I'm trying to run this command:

elasticsearch-reindex -f http://****-****-****:9200/datapatient/patient -t http://localhost:9200/datapatientnew/patient

But I get an error: Reindex error: Error: Request Timeout after 30000ms

I also tried to run the following request from the head plugin on my local machine:

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://****-****-****:9200"
    },
    "index": "datapatient",
    "type": "patient"
  },
  "dest": {
    "index": "datapatientnew"
  }
}

And I got an error too:

{
  "error": {
    "root_cause": [
      {
        "type": "illegal_argument_exception",
        "reason": "Remote responded with a chunk that was too large. Use a smaller batch size."
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "Remote responded with a chunk that was too large. Use a smaller batch size.",
    "caused_by": {
      "type": "content_too_long_exception",
      "reason": "entity content is too long [523680994] for the configured buffer limit [104857600]"
    }
  },
  "status": 400
}

But when I try the same command with an index containing fewer documents, it works well.

Is there any way to increase the timeout value?
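
For the _reindex API, the remote block does accept socket_timeout and connect_timeout settings, so one thing to try is the same request with longer timeouts (a sketch, not verified against this cluster):

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://****-****-****:9200",
      "socket_timeout": "5m",
      "connect_timeout": "30s"
    },
    "index": "datapatient",
    "type": "patient"
  },
  "dest": {
    "index": "datapatientnew"
  }
}

Note that the second error (content_too_long_exception) comes from the fixed 100 MB remote buffer, so a longer timeout alone won't help there; reducing the batch size does (see the example further below).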

@MiguelAngel82

Hello.

I have a similar issue trying to reindex from ES 1.7 to ES 6.3.1. My index is very large (many GB), but I tried to split it into manageable chunks (by date), with roughly 15,000,000 documents per script execution.

After a few hours, the same Reindex error: Error: Request Timeout after 30000ms occurs.

Kind regards,
Miguel.

@hacfi

hacfi commented Aug 6, 2018

@OOMMYY

OOMMYY commented Mar 17, 2020

I solved this problem by setting the size to 100:

elastic/elasticsearch#21185
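
In the request body, the batch size is the size field inside source; with the index names from the original report, that looks roughly like this (an illustrative sketch, not the exact request the commenter ran):

POST _reindex
{
  "source": {
    "remote": {
      "host": "http://****-****-****:9200"
    },
    "index": "datapatient",
    "type": "patient",
    "size": 100
  },
  "dest": {
    "index": "datapatientnew"
  }
}

Each batch of 100 documents then has to fit inside the 104857600-byte buffer mentioned in the error above, which is why smaller batch sizes help when documents are large.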

This memory limit really needs to be configurable. The limit that's currently in place makes remote reindexing a nightmare. I have two options:
Option 1:
Reindex all the indexes with a size of 1 to ensure I don't hit this limit. This will take an immense amount of time because of how slow it will be.

Option 2:
Run the reindex with a size of 3000. If it fails, try again with 1000. If that fails, try again with 500. If that fails, try again with 100. If that fails, try again with 50. If that fails, try again with 25, and so on (the numbers here don't matter; the point is that with an index of dozens of gigabytes, I can't know the maximum size I can use without actually running the reindex).

If the limit can't be changed due to other issues that it would cause, then is it possible to add the ability to resume a reindex instead of having to start over every time? Thank you!
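
One partial workaround for the resume problem: _reindex accepts "conflicts": "proceed" together with "op_type": "create" on the destination, so a re-run skips documents that already exist in the target index and a failed reindex can be retried without recopying everything. A sketch with placeholder host and index names:

POST _reindex
{
  "conflicts": "proceed",
  "source": {
    "remote": {
      "host": "http://remote-host:9200"
    },
    "index": "source_index",
    "size": 100
  },
  "dest": {
    "index": "dest_index",
    "op_type": "create"
  }
}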
