release v2.0.0-alpha-1, some problems encountered during testing #110

Open
dwwang1992 opened this issue Aug 13, 2018 · 6 comments

@dwwang1992

1. When syncing multiple source tables into a single target table, an error occurs while checking the source and target tables. The table mapping is:
test_nvwi_0000.common = public.common
test_nvwi_0001.common = public.common
test_nvwi_0002.common = public.common
test_nvwi_0003.common = public.common
test_nvwi_0004.common = public.common
test_nvwi_0005.common = public.common
test_nvwi_0006.common = public.common
test_nvwi_0007.common = public.common
Error message: Greenplum table and MySQL table size are inconsistent!

Bireme can only start after manually commenting out the following check code. (With a many-to-one mapping, the number of distinct Greenplum target tables is smaller than the number of MySQL source tables, so the strict size-equality check always fails.)
  if (table_map.size() != tableMap.size()) {
    String message = "Greenplum table and MySQL table size are inconsistent!";
    throw new BiremeException(message);
  } else {
    logger.info("MySQL、Greenplum table check completed, the state is okay!");
  }

  if (table_map.size() != tableMap.size()) {
    String message = "some tables do not have primary keys!";
    throw new BiremeException(message);
  } else {
    logger.info("Greenplum table primary key check is completed, the state is okay!");
  }

2. When syncing many tables, i.e. when there are too many pipelines, table data still fails to sync.

@wangzw wangzw self-assigned this Aug 15, 2018
@wangzw wangzw added the bug label Aug 15, 2018
@wangzw wangzw added this to the 2.0.0 Release milestone Aug 15, 2018

wangzw commented Aug 25, 2018

Bireme "hang" if many table is configured:

  1. Greenplum is slow to delete/copy. At lease 16 segments is required in our test case. And if the workload is heavy, using resource queue to ensure bireme workload has enough resource.
  2. I assume heap table is used, right?
  3. Configure more database connection for Bireme by loader.conn_pool.size, default is 10.
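For reference, a minimal sketch of raising the connection pool size in bireme's configuration. The file name etc/config.properties and the value 20 are assumptions for illustration; the only key taken from this thread is loader.conn_pool.size.

  # etc/config.properties (excerpt) -- illustrative values only
  # loader.conn_pool.size defaults to 10; raise it when many tables/pipelines
  # compete for Greenplum connections.
  loader.conn_pool.size = 20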

@chengshao1987

When I tested bireme 1.0 with the maxwell + kafka setup, as soon as the config file had more than 10 tables it failed with PANIC: ERRORDATA_STACK_SIZE exceeded. Changing loader.conn_pool.size to 20 did not help. How should I handle this? Also, what does "using resource queue to ensure bireme workload has enough resource" mean?


wangzw commented Nov 15, 2018

PANIC: ERRORDATA_STACK_SIZE exceeded is an error thrown by Greenplum; please ask in the Greenplum community.

Resource queue is a feature of Greenplum; please refer to its documentation.
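As a rough illustration of the resource-queue suggestion, a minimal sketch in Greenplum SQL. The queue name, role name, and limits below are hypothetical placeholders, not values from this thread; consult the Greenplum resource queue documentation for parameters that fit your cluster.

  -- Create a dedicated resource queue for the bireme loader role
  -- (names and limits are placeholders).
  CREATE RESOURCE QUEUE bireme_queue WITH (ACTIVE_STATEMENTS=20, MEMORY_LIMIT='2GB');

  -- Route bireme's database role through that queue so its DELETE/COPY
  -- statements are not starved by other workloads.
  ALTER ROLE bireme_user RESOURCE QUEUE bireme_queue;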

@chengshao1987

PANIC: ERRORDATA_STACK_SIZE exceeded is an error thrown by Greenplum; please ask in the Greenplum community.

Resource queue is a feature of Greenplum; please refer to its documentation.

Which version of Greenplum did you use during development? Or which versions of PostgreSQL or Greenplum are supported? I'm really out of options... what a headache...


wangzw commented Nov 15, 2018

PANIC: ERRORDATA_STACK_SIZE exceeded usually means a bug in Greenplum. Please report an issue to https://github.com/greenplum-db/gpdb. Currently Greenplum 4.3.x and 5.x are stable.

@chengshao1987

PANIC: ERRORDATA_STACK_SIZE exceeded usually means a bug in Greenplum. Please report an issue to https://github.com/greenplum-db/gpdb. Currently Greenplum 4.3.x and 5.x are stable.

We are currently using Greenplum 5.10.2 (PostgreSQL 8.3.23). I'll go ask on the Greenplum side then, thanks~
