
Reported twin does not reflect running container #7268

Open

CharleeSF opened this issue Apr 18, 2024 · 1 comment
Expected Behavior

Reported twin accurately reflects settings of current running container

Current Behavior

Twin reports updated configuration while container is not updated

Steps to Reproduce

I cannot reproduce this; it happened by accident.
For some reason Docker appeared to be hanging on my device.
I pushed a new modules configuration (adding an environment variable), noticed something was wrong, and rebooted the device.
On boot the old container started again, but when I then checked the device twin in the Azure portal, it reported that the new configuration had been applied. Re-pushing the same configuration did nothing.
The only way to get the configuration actually applied was to change it (add a dummy environment variable).

I pushed the configuration with az iot edge set-modules from the command line.
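For reference, a rough sketch of the commands involved, per the Azure CLI azure-iot extension (the hub, device, module, and manifest names are placeholders, not from the original report):

```shell
# Push the modules configuration, as described above.
# Placeholders: my-hub, my-device, ./deployment.json
az iot edge set-modules \
  --hub-name my-hub \
  --device-id my-device \
  --content ./deployment.json

# Then inspect what the twin *reports*, to compare against
# the container that is actually running on the device.
az iot hub module-twin show \
  --hub-name my-hub \
  --device-id my-device \
  --module-id my-module \
  --query "properties.reported"
```

In the failure described here, the second command would show the new configuration even though the device was still running the old container.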

Context (Environment)

Output of iotedge check

$ sudo iotedge check

Configuration checks (aziot-identity-service)
---------------------------------------------
√ keyd configuration is well-formed - OK
√ certd configuration is well-formed - OK
√ tpmd configuration is well-formed - OK
√ identityd configuration is well-formed - OK
√ daemon configurations up-to-date with config.toml - OK
√ identityd config toml file specifies a valid hostname - OK
√ aziot-identity-service package is up-to-date - OK
√ host time is close to reference time - OK
√ preloaded certificates are valid - OK
√ keyd is running - OK
√ certd is running - OK
√ identityd is running - OK
√ read all preloaded certificates from the Certificates Service - OK
√ read all preloaded key pairs from the Keys Service - OK
√ check all EST server URLs utilize HTTPS - OK
√ ensure all preloaded certificates match preloaded private keys with the same ID - OK

Connectivity checks (aziot-identity-service)
--------------------------------------------
√ host can connect to and perform TLS handshake with iothub AMQP port - OK
√ host can connect to and perform TLS handshake with iothub HTTPS / WebSockets port - OK
√ host can connect to and perform TLS handshake with iothub MQTT port - OK

Configuration checks
--------------------
√ aziot-edged configuration is well-formed - OK
√ configuration up-to-date with config.toml - OK
√ container engine is installed and functional - OK
√ configuration has correct URIs for daemon mgmt endpoint - OK
√ aziot-edge package is up-to-date - OK
× container time is close to host time - Error
    Could not parse container output
‼ DNS server - Warning
    Container engine is not configured with DNS server setting, which may impact connectivity to IoT Hub.
    Please see https://aka.ms/iotedge-prod-checklist-dns for best practices.
    You can ignore this warning if you are setting DNS server per module in the Edge deployment.
‼ production readiness: logs policy - Warning
    Container engine is not configured to rotate module logs which may cause it run out of disk space.
    Please see https://aka.ms/iotedge-prod-checklist-logs for best practices.
    You can ignore this warning if you are setting log policy per module in the Edge deployment.
‼ production readiness: Edge Agent's storage directory is persisted on the host filesystem - Warning
    The edgeAgent module is not configured to persist its /tmp/edgeAgent directory on the host filesystem.
    Data might be lost if the module is deleted or updated.
    Please see https://aka.ms/iotedge-storage-host for best practices.
‼ production readiness: Edge Hub's storage directory is persisted on the host filesystem - Warning
    The edgeHub module is not configured to persist its /tmp/edgeHub directory on the host filesystem.
    Data might be lost if the module is deleted or updated.
    Please see https://aka.ms/iotedge-storage-host for best practices.
√ Agent image is valid and can be pulled from upstream - OK
√ proxy settings are consistent in aziot-edged, aziot-identityd, moby daemon and config.toml - OK

Connectivity checks
-------------------
√ container on the default network can connect to upstream AMQP port - OK
√ container on the default network can connect to upstream HTTPS / WebSockets port - OK
√ container on the IoT Edge module network can connect to upstream AMQP port - OK
√ container on the IoT Edge module network can connect to upstream HTTPS / WebSockets port - OK
30 check(s) succeeded.
4 check(s) raised warnings. Re-run with --verbose for more details.
1 check(s) raised errors. Re-run with --verbose for more details.
2 check(s) were skipped due to errors from other checks. Re-run with --verbose for more details.

Device Information

  • Host OS: Ubuntu Core 22
  • Architecture: amd64
  • Container OS: Ubuntu 20

Runtime Versions

  • aziot-edged: iotedge 1.4.33
  • Edge Agent: 1.4
  • Edge Hub: 1.4
  • Docker/Moby: 24.0.5 (docker snap, revision 2915)

Logs

I have no logs because I rebooted the device; I couldn't talk to Docker anymore, and all CLI requests seemed to hang.

Comment

I'm mainly posting this because it worries me that the reported twin does not reflect the actual settings of the running container. How do you decide which twin settings to report?

@nlcamp nlcamp self-assigned this Apr 23, 2024

nlcamp (Contributor) commented Apr 23, 2024

@CharleeSF - Thanks for reporting this issue. Unfortunately, without logs and repro steps, there's not much we can do to investigate it.

As a sanity check I issued a module twin update from portal while the iotedge runtime on my device was stopped. I confirmed that the reported properties did not update during this period. As expected, they only updated after I started the runtime and re-applied the twin update.
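The sanity check above can be sketched roughly as follows (hub, device, and module names are placeholders; command names per the Azure CLI azure-iot extension and the iotedge 1.4 CLI):

```shell
# 1. On the device: stop the IoT Edge runtime.
sudo iotedge system stop

# 2. From an operator machine: push a desired-property change
#    to the module twin while the runtime is down.
az iot hub module-twin update \
  --hub-name my-hub \
  --device-id my-device \
  --module-id my-module \
  --desired '{"testProp": "testValue"}'

# 3. Reported properties should NOT change during this period.
az iot hub module-twin show \
  --hub-name my-hub \
  --device-id my-device \
  --module-id my-module \
  --query "properties.reported"

# 4. On the device: restart the runtime. Only after this, and after
#    re-applying the twin update, do the reported properties update.
sudo iotedge system restart
```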

Please let us know if you run into this issue again and can provide logs and repro steps.
