
Undolog Packet for query is too large when table longblob field more than 40MB #6451

Open

jsbxyyx opened this issue Mar 29, 2024 · 3 comments · May be fixed by #6483
Labels
type: bug Category issues or prs related to bug.

Comments

@jsbxyyx
Member

jsbxyyx commented Mar 29, 2024

  • I have searched the issues of this repository and believe that this is not a duplicate.

Ⅰ. Issue Description

Packet for query is too large (83,886,137 > 67,108,864). You can change this value on the server by setting the 'max_allowed_packet' variable.

create table test (
  id int primary key,
  name varchar(45),
  file longblob
);
insert into test(id, name, file) values(1, 'name', '<binary 40MB>');
update test set file = '<binary 40MB>', name = 'name2' where id = 1;
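
As the error message suggests, one workaround on the MySQL side is to raise max_allowed_packet. A minimal sketch (the 128MB value is an arbitrary example, not a recommendation from this thread):

-- Check the current limit (the MySQL default is 67,108,864 bytes = 64MB)
SHOW VARIABLES LIKE 'max_allowed_packet';
-- Raise it to 128MB; requires SUPER or SYSTEM_VARIABLES_ADMIN,
-- and only takes effect for new connections
SET GLOBAL max_allowed_packet = 134217728;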

Ⅱ. Describe what happened

If there is an exception, please attach the exception trace:

Just paste your stack trace here!

Ⅲ. Describe what you expected to happen

Ⅳ. How to reproduce it (as minimally and precisely as possible)

  1. xxx
  2. xxx
  3. xxx

Minimal yet complete reproducer code (or URL to code):

Ⅴ. Anything else we need to know?

Ⅵ. Environment:

  • JDK version(e.g. java -version):
  • Seata client/server version:
  • Database version:
  • OS(e.g. uname -a):
  • Others:
jsbxyyx added the type: bug label Mar 29, 2024
@slievrly
Member

slievrly commented Apr 6, 2024

From what I can see, what this issue describes is a solution. The default value of max_allowed_packet is 64MB, yet you describe storing 40MB, so there seems to be a gap between the two. Additionally, is the oversized transaction mentioned in this issue something you encountered in a real-world scenario?

@jsbxyyx
Member Author

jsbxyyx commented Apr 6, 2024

Scenario: attachments are saved in the database.

For an update statement, the before image + after image exceeds 64MB.
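
For reference, the numbers in the error message are consistent with this: assuming the undo log packet carries both images, two 40MB (41,943,040-byte) values come to 83,886,080 bytes, and with the remaining few dozen bytes of row data that matches the reported 83,886,137, which exceeds the 67,108,864-byte (64MB) default.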

@leizhiyuan
Contributor

Storing attachments in a database is not good practice. It is recommended to use object storage instead, with the database storing only the address of the storage location.
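
A minimal sketch of that layout, replacing the longblob with a hypothetical file_url column (the s3:// address is only an illustrative placeholder):

-- store only the object-storage address, not the bytes themselves
create table test (
  id int primary key,
  name varchar(45),
  file_url varchar(512)
);
insert into test(id, name, file_url) values(1, 'name', 's3://bucket/attachments/1');
update test set file_url = 's3://bucket/attachments/1-v2', name = 'name2' where id = 1;

With this shape, the undo log's before and after images contain only short strings, so they stay far below max_allowed_packet.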

jsbxyyx linked a pull request (#6483) Apr 17, 2024 that will close this issue