
e.displayText() = DB::Exception: Table xxxxxx already exists. #1667

Closed
skyoct opened this issue May 10, 2024 · 4 comments
Labels: bug (Something isn't working), not an issue

Comments

@skyoct (Collaborator) commented May 10, 2024

Bug Report

Briefly describe the bug

2024.05.10 23:16:00.937589 [ 439 ] {} <Debug> acquireNamedCnchSession: Trying to acquire session for 449669800861630472
2024.05.10 23:16:00.939753 [ 439 ] {} <Error> CnchWorkerService: auto DB::CnchWorkerServiceImpl::sendResources(google::protobuf::RpcController *, const Protos::SendResourcesReq *, Protos::SendResourcesResp *, google::protobuf::Closure *)::(anonymous class)::operator()() const: Code: 57, e.displayText() = DB::Exception: Table ods.app_10001_ad_default_449669800861630472 already exists. SQLSTATE: 42P07, Stack trace (when copying this message, always include the lines below):
0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x26955472 in /opt/byconity/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1037d980 in /opt/byconity/bin/clickhouse
2. DB::CnchWorkerResource::executeCreateQuery(std::__1::shared_ptr<DB::Context>, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool, DB::ColumnsDescription const&) @ 0x1f8e0433 in /opt/byconity/bin/clickhouse
3. DB::CnchWorkerServiceImpl::sendResources(google::protobuf::RpcController*, DB::Protos::SendResourcesReq const*, DB::Protos::SendResourcesResp*, google::protobuf::Closure*)::$_13::operator()() const @ 0x1f8ce040 in /opt/byconity/bin/clickhouse
4. ThreadPoolImpl<ThreadFromGlobalPool>::worker(std::__1::__list_iterator<ThreadFromGlobalPool, void*>) @ 0x103bd8d1 in /opt/byconity/bin/clickhouse
5. ThreadFromGlobalPool::ThreadFromGlobalPool<void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()>(void&&, void ThreadPoolImpl<ThreadFromGlobalPool>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()&&...)::'lambda'()::operator()() @ 0x103bf3e8 in /opt/byconity/bin/clickhouse
6. ThreadPoolImpl<std::__1::thread>::worker(std::__1::__list_iterator<std::__1::thread, void*>) @ 0x103b9fe0 in /opt/byconity/bin/clickhouse
7. void* std::__1::__thread_proxy<std::__1::tuple<std::__1::unique_ptr<std::__1::__thread_struct, std::__1::default_delete<std::__1::__thread_struct> >, void ThreadPoolImpl<std::__1::thread>::scheduleImpl<void>(std::__1::function<void ()>, int, std::__1::optional<unsigned long>)::'lambda0'()> >(void*) @ 0x103be45a in /opt/byconity/bin/clickhouse
8. start_thread @ 0x7ea7 in /lib/x86_64-linux-gnu/libpthread-2.31.so
9. clone @ 0xfca2f in /lib/x86_64-linux-gnu/libc-2.31.so
 (version 21.8.7.1)

The result you expected

How to Reproduce

Version

0.4.0

@skyoct skyoct added the bug Something isn't working label May 10, 2024
@FayeSpica commented May 11, 2024

Same bug here, using S3-based storage. For example:

select timestamp,count() from mv_ods_aggregated where timestamp > now() - INTERVAL 5 hour group by timestamp order by timestamp asc;

60 rows in set. Elapsed: 2.562 sec. Processed 79.22 thousand rows, 316.87 KB (30.92 thousand rows/s., 123.68 KB/s.)
select timestamp,count() from mv_ods_aggregated where timestamp > now() - INTERVAL 6 hour group by timestamp order by timestamp asc;

Received exception from server (version 21.8.7):
Code: 57. DB::Exception: Received from localhost:52145. DB::Exception: DB::Exception: Table mv_ods_aggregated _449679721510731780 already exists. SQLSTATE: 42P07.: While executing Remote SQLSTATE: 42P07.

@skyoct (Collaborator, Author) commented May 11, 2024

Perhaps it was caused by a part missing from S3?

The following content is from the system.query_log table.

Exception:

Code: 57, e.displayText() = DB::Exception: DB::Exception: Table ods.app_10001_ad_default_449679739095613441 already exists. SQLSTATE: 42P07.: While executing Remote SQLSTATE: 42P07 (version 21.8.7.1)

Stack:

0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x26955472 in /opt/byconity/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1037d980 in /opt/byconity/bin/clickhouse
2. DB::readException(DB::ReadBuffer&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0x103ef251 in /opt/byconity/bin/clickhouse
3. DB::RPCHelpers::checkException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1fabdb2b in /opt/byconity/bin/clickhouse
4. void DB::RPCHelpers::onAsyncCallDoneWithFailedInfo<DB::Protos::SendResourcesResp>(DB::Protos::SendResourcesResp*, brpc::Controller*, std::__1::shared_ptr<DB::ExceptionHandlerWithFailedInfo>, DB::WorkerId) @ 0x1f8b8aea in /opt/byconity/bin/clickhouse
5. brpc::internal::FunctionClosure4<DB::Protos::SendResourcesResp*, brpc::Controller*, std::__1::shared_ptr<DB::ExceptionHandlerWithFailedInfo>, DB::WorkerId>::Run() @ 0x1f8ba861 in /opt/byconity/bin/clickhouse
6. /build/build_docker/../contrib/incubator-brpc/src/brpc/controller.h:704: brpc::Controller::EndRPC(brpc::Controller::CompletionInfo const&) @ 0x266b1829 in /opt/byconity/bin/clickhouse
7. /build/build_docker/../contrib/incubator-brpc/src/brpc/controller.cpp:0: brpc::Controller::OnVersionedRPCReturned(brpc::Controller::CompletionInfo const&, bool, int) @ 0x266b0386 in /opt/byconity/bin/clickhouse
8. /build/build_docker/../contrib/incubator-brpc/src/brpc/details/controller_private_accessor.h:0: brpc::policy::ProcessRpcResponse(brpc::InputMessageBase*) @ 0x266d98b0 in /opt/byconity/bin/clickhouse
9. /build/build_docker/../contrib/incubator-brpc/src/brpc/input_messenger.cpp:386: brpc::InputMessenger::OnNewMessages(brpc::Socket*) @ 0x266d014e in /opt/byconity/bin/clickhouse
10. /build/build_docker/../contrib/libcxx/include/memory:1655: brpc::Socket::ProcessEvent(void*) @ 0x268002ad in /opt/byconity/bin/clickhouse
11. /build/build_docker/../contrib/incubator-brpc/src/bthread/task_group.cpp:304: bthread::TaskGroup::task_runner(long) @ 0x2669ddad in /opt/byconity/bin/clickhouse

Failed info:

Code: 499, e.displayText() = DB::Exception: DB::Exception: Encounter exception when request s3, HTTP Code: 404, RemoteHost: , RequestID: , ExceptionName: , ErrorMessage: No response body., Extra:  SQLSTATE: HY000. SQLSTATE: HY000, Stack trace (when copying this message, always include the lines below):

0. Poco::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int) @ 0x26955472 in /opt/byconity/bin/clickhouse
1. DB::Exception::Exception(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, int, bool) @ 0x1037d980 in /opt/byconity/bin/clickhouse
2. DB::readException(DB::ReadBuffer&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, bool) @ 0x103ef251 in /opt/byconity/bin/clickhouse
3. DB::RPCHelpers::checkException(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) @ 0x1fabdb2b in /opt/byconity/bin/clickhouse
4. void DB::RPCHelpers::onAsyncCallDoneWithFailedInfo<DB::Protos::SendResourcesResp>(DB::Protos::SendResourcesResp*, brpc::Controller*, std::__1::shared_ptr<DB::ExceptionHandlerWithFailedInfo>, DB::WorkerId) @ 0x1f8b8aea in /opt/byconity/bin/clickhouse
5. brpc::internal::FunctionClosure4<DB::Protos::SendResourcesResp*, brpc::Controller*, std::__1::shared_ptr<DB::ExceptionHandlerWithFailedInfo>, DB::WorkerId>::Run() @ 0x1f8ba861 in /opt/byconity/bin/clickhouse
6. /build/build_docker/../contrib/incubator-brpc/src/brpc/controller.h:704: brpc::Controller::EndRPC(brpc::Controller::CompletionInfo const&) @ 0x266b1829 in /opt/byconity/bin/clickhouse
7. /build/build_docker/../contrib/incubator-brpc/src/brpc/controller.cpp:0: brpc::Controller::OnVersionedRPCReturned(brpc::Controller::CompletionInfo const&, bool, int) @ 0x266b0386 in /opt/byconity/bin/clickhouse
8. /build/build_docker/../contrib/incubator-brpc/src/brpc/details/controller_private_accessor.h:0: brpc::policy::ProcessRpcResponse(brpc::InputMessageBase*) @ 0x266d98b0 in /opt/byconity/bin/clickhouse
9. /build/build_docker/../contrib/incubator-brpc/src/brpc/input_messenger.cpp:386: brpc::InputMessenger::OnNewMessages(brpc::Socket*) @ 0x266d014e in /opt/byconity/bin/clickhouse
10. /build/build_docker/../contrib/libcxx/include/memory:1655: brpc::Socket::ProcessEvent(void*) @ 0x268002ad in /opt/byconity/bin/clickhouse
11. /build/build_docker/../contrib/incubator-brpc/src/bthread/task_group.cpp:304: bthread::TaskGroup::task_runner(long) @ 0x2669ddad in /opt/byconity/bin/clickhouse
 (version 21.8.7.1)
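For anyone else chasing this, the exceptions quoted above can be pulled straight from system.query_log. A minimal sketch (standard ClickHouse system-table columns; adjust the filter and time window to your setup):

```sql
-- Most recent "table already exists" failures (error code 57), newest first.
SELECT event_time, query, exception
FROM system.query_log
WHERE exception_code = 57
  AND type = 'ExceptionWhileProcessing'
ORDER BY event_time DESC
LIMIT 10;
```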

@FayeSpica

> Perhaps it was due to the loss of the part in S3? […]

I tried restarting all components and reducing the number of byconity-server instances from 5 to 2. Now it works fine:

select timestamp,count() from mv_ods_aggregated where timestamp >= '2024-05-11 00:00:00' and timestamp < '2024-05-11 12:00:00' group by timestamp order by timestamp asc;

144 rows in set. Elapsed: 0.081 sec. Processed 282.19 thousand rows, 1.13 MB (3.47 million rows/s., 13.89 MB/s.)

I'm not sure how it recovered.

@skyoct (Collaborator, Author) commented May 13, 2024

Confirming that the missing part caused the issue.

3 participants