When the OPC server loses power and is then powered on, the OPC client's reconnect feature is not activated. #1318
Please update to a more recent version of node-opcua. Also, describe which system you are trying to connect to, as the behavior might be induced by some specific behavior of the OPCUA server.
Consider using our professional service for deeper investigation (https://support.sterfive.com).
It is possible that the server you are connecting to does not implement the ResendRequest method. Could you test again with this version and confirm that the issue is fixed?
The node-opcua version has been updated to 2.117.0. When the OPC server is powered off or disconnected and then recovers, the client reconnects successfully, but the problem persists.
Description of the other OPCUA system I am trying to connect to: Name: Beijer Box2 pro
During reconnection with a new run of the server, NodeOPCUA tries to automatically restore the MonitoredItems that were created in the previous instance. The log shows that the CreateMonitoredItems request takes too much time to process on the server side (it exceeds the default timeout of 15 seconds). What is the MaxMonitoredItemsPerCall exhibited by your server? How many monitored items were active in the client before the disconnection happened?
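If the server does enforce a small MaxMonitoredItemsPerCall, one common workaround is to re-create monitored items in several small CreateMonitoredItems calls instead of one large one. A minimal sketch, assuming node-opcua's `ClientSubscription.monitorItems` is used for each batch (the usage comment is illustrative, not taken from this issue):

```javascript
// Pure helper: split an array into batches of at most `size` elements,
// so each CreateMonitoredItems request stays below the server's
// MaxMonitoredItemsPerCall limit and finishes within the transaction timeout.
function chunk(items, size) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

// Usage sketch with node-opcua (itemsToCreate, requestedParameters and
// maxMonitoredItemsPerCall are assumptions for illustration):
// for (const batch of chunk(itemsToCreate, maxMonitoredItemsPerCall)) {
//   await subscription.monitorItems(batch, requestedParameters, TimestampsToReturn.Both);
// }
```

Batching also makes a single slow server response cost one small request instead of the whole restore operation.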
You can investigate MaxMonitoredItemsPerCall by using an OPCUA client and visiting the corresponding node in the server's address space.
I suspect that MaxMonitoredItemsPerCall is zero (0), meaning there are no limits.
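In the standard OPC UA address space this limit lives under Server > ServerCapabilities > OperationLimits > MaxMonitoredItemsPerCall, whose well-known NodeId is ns=0;i=11714. A hedged sketch of reading it with node-opcua (assuming a recent 2.x API; `readMaxMonitoredItemsPerCall` is a hypothetical helper name):

```javascript
// Sketch: read the server's MaxMonitoredItemsPerCall operation limit.
// A returned value of 0 conventionally means "no limit".
async function readMaxMonitoredItemsPerCall(endpointUrl) {
  // require() is inside the function so this sketch stays self-contained
  // and does nothing until actually called against a live server.
  const { OPCUAClient, AttributeIds } = require("node-opcua");
  const client = OPCUAClient.create({ endpointMustExist: false });
  return client.withSessionAsync(endpointUrl, async (session) => {
    const dataValue = await session.read({
      nodeId: "ns=0;i=11714", // Server_ServerCapabilities_OperationLimits_MaxMonitoredItemsPerCall
      attributeId: AttributeIds.Value,
    });
    return dataValue.value.value; // UInt32 limit, 0 = unlimited
  });
}
```

Generic OPC UA browsers such as UaExpert can also display this node directly.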
@erossignon
@erossignon Here's the detailed log: {"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 0, next attempt in 1.043 seconds","timestamp":"2024-03-07T09:12:18.151Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 1, next attempt in 2.149 seconds","timestamp":"2024-03-07T09:12:22.274Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 2, next attempt in 4.336 seconds","timestamp":"2024-03-07T09:12:27.490Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 3, next attempt in 8.697 seconds","timestamp":"2024-03-07T09:12:34.882Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 4, next attempt in 16.372 seconds","timestamp":"2024-03-07T09:12:46.658Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 5, next attempt in 21.664 seconds","timestamp":"2024-03-07T09:13:06.082Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 6, next attempt in 20.049 seconds","timestamp":"2024-03-07T09:13:30.818Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 7, next attempt in 21.979 seconds","timestamp":"2024-03-07T09:13:53.922Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 8, next attempt in 20.238 seconds","timestamp":"2024-03-07T09:14:18.978Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 9, next attempt in 20.764 seconds","timestamp":"2024-03-07T09:14:42.274Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 10, next attempt in 20.945 seconds","timestamp":"2024-03-07T09:15:06.114Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 11, next attempt in 20.185 seconds","timestamp":"2024-03-07T09:15:30.114Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 12, next attempt in 20.151 seconds","timestamp":"2024-03-07T09:15:53.378Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 13, next attempt in 21.267 seconds","timestamp":"2024-03-07T09:16:16.610Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 14, next attempt in 21.293 seconds","timestamp":"2024-03-07T09:16:40.930Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 15, next attempt in 21.944 seconds","timestamp":"2024-03-07T09:17:05.281Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 16, next attempt in 20.171 seconds","timestamp":"2024-03-07T09:17:30.306Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 17, next attempt in 21.351 seconds","timestamp":"2024-03-07T09:17:53.538Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 18, next attempt in 21.278 seconds","timestamp":"2024-03-07T09:18:17.953Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 19, next attempt in 21.54 seconds","timestamp":"2024-03-07T09:18:42.306Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 20, next attempt in 21.737 seconds","timestamp":"2024-03-07T09:19:06.913Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 21, next attempt in 21.421 seconds","timestamp":"2024-03-07T09:19:31.714Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 22, next attempt in 21.48 seconds","timestamp":"2024-03-07T09:19:56.194Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 23, next attempt in 20.529 seconds","timestamp":"2024-03-07T09:20:20.738Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 24, next attempt in 21.887 seconds","timestamp":"2024-03-07T09:20:41.271Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 25, next attempt in 21.678 seconds","timestamp":"2024-03-07T09:21:03.160Z"}
{"level":"warn","message":"opc-job still trying to connect to opc.tcp://192.168.50.88:49320: retry = 26, next attempt in 21.862 seconds","timestamp":"2024-03-07T09:21:24.840Z"}
{"level":"info","message":"opc-job connect to opc.tcp://192.168.50.88:49320 success!","timestamp":"2024-03-07T09:21:47.194Z"}
{"level":"warn","message":"monitor error: Transaction has timed out ( timeout = 15000 ms , request = CreateMonitoredItemsRequest)","timestamp":"2024-03-07T09:22:02.850Z"}
{"level":"warn","message":"monitor terminated: Transaction has timed out ( timeout = 15000 ms , request = CreateMonitoredItemsRequest)","timestamp":"2024-03-07T09:22:02.851Z"}
{"level":"warn","message":"opc-job connect to opc.tcp://192.168.50.88:49320 timeout","timestamp":"2024-03-07T09:22:02.853Z"}
09:25:07.398Z :client_secure_channel_layer :558 xxxxx <<<<<< _on_message_received for unknown or timeout request 11 CreateMonitoredItemsResponse Good (0x00000000) 2626130994
09:25:07.399Z :message_builder :226 MessageBuilder : ERROR DETECTED IN 'message' event handler
09:25:07.399Z :message_builder :228 => invalid requestId =11
This requires further investigation with a physical connection to the device, which cannot be conducted on a public forum. I suggest you contact Sterfive Support through your NodeOPCUA Support membership for professional investigation.
Current behavior
When the OPC server loses power and is subsequently powered on, the OPC client displays an error message, and the originally configured reconnect functionality becomes ineffective, making it unable to reconnect to the OPC server.
Describe the bug
When the OPC server loses power, the OPC clients originally connected to the OPC server will encounter the following error message.
Even when I restore power to the OPC server, I am unable to reconnect to the OPC server, despite having configured the reconnect settings.
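The "reconnect settings" mentioned here only survive as a screenshot. For context, a minimal sketch of the reconnect-related client options in a recent node-opcua 2.x, with values shaped after the retry pattern visible in the log above (the commented-out transaction-timeout option name is an assumption and should be checked against your node-opcua version):

```javascript
// Hedged sketch of node-opcua client options relevant to reconnection.
const clientOptions = {
  endpointMustExist: false, // keep trying even if the endpoint vanished
  keepSessionAlive: true,
  connectionStrategy: {
    initialDelay: 1000, // first retry after ~1 s, as seen in the log
    maxDelay: 20000,    // exponential back-off capped around 20 s
    maxRetry: -1,       // retry forever
  },
  // If CreateMonitoredItems times out on reconnection (default 15000 ms),
  // a larger transaction timeout may help; exact option name is an
  // assumption here, verify it for your version:
  // defaultTransactionTimeout: 60 * 1000,
};

// Usage sketch: const client = OPCUAClient.create(clientOptions);
```

With `maxRetry: -1` the client itself should keep retrying indefinitely, which matches the retry log shown in this thread.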
Step by step instruction to reproduce
Steps to reproduce the behavior:
Expected behavior
The expected behavior is that the OPC client can reconnect to the original OPC server after the OPC server's power cycle.
Screenshots
opc client config
error message
Gist
Context
(x) my request is related to node-opcua acting as an OPCUA CLIENT
( ) my request is related to node-opcua acting as an OPCUA SERVER
( ) I have installed node-opcua from source ( using git clone)
(x) I have installed node-opcua as a package ( using npm install )
( ) I am using an application that uses node-opcua
Device: ______
OS version: ______
Description of the other OPCUA system I am trying to connect to:
node-opcua version:
Node: node --version = v2.77.0