
Conpot stops functioning after nmap scans #564

Open
bestrocker221 opened this issue Jan 3, 2022 · 12 comments

@bestrocker221

Conpot stops working after an nmap scan, both when run in the container and when run directly.
The process keeps running, but the ports no longer respond and logging stops.

This unfortunately makes the honeypot pretty much useless, since it cannot withstand a basic nmap scan.

To Reproduce

  1. docker-compose up
  2. nmap -sV -v 127.0.0.1
  3. Observe only a few log entries; the scan may complete.
  4. The scan may also hang; either way, Conpot becomes unresponsive, all subsequent port probes go unanswered, and no further logs are written.

Expected behavior
Conpot keeps running, with the logs saved of each connection attempt.

Any help?

Thank you!

@Vingaard
Contributor

Vingaard commented Jan 4, 2022

Interesting, a quick question: what happens if you scan Conpot from anywhere other than 127.0.0.1 (localhost)?
For example:
nmap -sV -v {insert conpot_ip}

@bestrocker221
Author

bestrocker221 commented Jan 4, 2022 via email

@bestrocker221
Author

bestrocker221 commented Jan 25, 2022

Anybody confirming the same behavior or found an alternative way to solve it?

@glaslos
Member

glaslos commented Feb 20, 2022

I've just checked out the latest code from GitHub, run docker-compose up and then nmap -sV -v 127.0.0.1 twice without issues. CPU usage was pretty much at 100% during the nmap scan. Scan took about 150 seconds.

@bestrocker221
Author

Strange. I tried both docker-compose and a virtualenv, on fresh Debian and Arch systems, and I got the issue described above.

I found several places in the code that would explain the issue: sockets are not closed and errors are not handled inside loops. I will open a PR when finished. With those changes it now works deterministically.
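To illustrate the kind of fix described (this is a hypothetical sketch, not Conpot's actual code; `handle_client` and the newline terminator are chosen for illustration): the receive loop must treat an empty `recv()` result as a disconnect, handle socket errors, and always close the socket.

```python
# Hypothetical sketch of a connection handler that closes its socket
# and exits its receive loop cleanly. Not Conpot's actual code.
import socket

def handle_client(sock: socket.socket) -> bytes:
    request = b""
    try:
        while True:
            chunk = sock.recv(4096)
            if not chunk:            # peer disconnected (EOF)
                break                # without this, the loop can spin forever
            request += chunk
            if request.endswith(b"\n"):  # illustrative message terminator
                break
    except OSError:
        pass                         # treat socket errors as a disconnect
    finally:
        sock.close()                 # always release the socket
    return request
```

The `finally` clause guarantees the file descriptor is released even when the peer resets the connection mid-read, which matters under a port scan that opens and drops many connections quickly.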

@t3chn0m4g3
Contributor

t3chn0m4g3 commented Mar 2, 2022

This may be unrelated to the issue's title, but I have noticed that conpot guardian_ast and iec104 keep running at 100% CPU usage after some connection attempts are made.


Example guardian_ast:
{"timestamp": "2022-03-02T04:45:06.688727", "sensorid": "conpot", "id": "ce4b97df-bc04-416f-b5af-c7654d1d7d2b", "src_ip": "23.106.58.171", "src_port": 58678, "dst_ip": "172.23.0.2", "dst_port": 10001, "public_ip": "xx.xx.46.201", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "NEW_CONNECTION"}

Example iec104:
{"timestamp": "2022-03-02T09:49:38.909285", "sensorid": "conpot", "id": "efb38210-1ddc-4ff9-8468-0eaffed92b4b", "src_ip": "164.52.24.169", "src_port": 53032, "dst_ip": "172.22.0.2", "dst_port": 2404, "public_ip": "xx.xx.46.201", "data_type": "IEC104", "request": null, "response": null, "event_type": "NEW_CONNECTION"}

Conpot behavior changes afterwards. While connections still can be made there is no logging and no functionality.

This is based on git commit 1c2382e. Still have to test latest master.

Update: I can reproduce this behavior with nmap -sV -v 127.0.0.1; I still have to test with latest master. Since @glaslos confirmed it works fine, I am guessing that some library was updated. I also found one other honeypot showing the same behavior.

@t3chn0m4g3
Contributor

t3chn0m4g3 commented Mar 9, 2022

@glaslos

Built conpot from latest master and can now confirm that the problems with iec104 are fixed, while guardian_ast repeatedly shows 100% CPU usage after nmap and is left non-functional.

It is reproducible with nmap -p 10001 -sV -v 127.0.0.1.

Console log:

2022-03-09 14:02:37,741 Config file found!
2022-03-09 14:02:37,743 Starting Conpot using template: /usr/lib/python3.9/site-packages/conpot/templates/guardian_ast
2022-03-09 14:02:37,743 Starting Conpot using configuration found in: /etc/conpot/conpot.cfg
2022-03-09 14:02:37,750 Serving tar:///usr/lib/python3.9/site-packages/conpot/data.tar as file system. File uploads will be kept at : /tmp/conpot
2022-03-09 14:02:37,750 Opening path /tmp/conpot for persistent storage of files.
2022-03-09 14:02:37,751 Initializing Virtual File System at /tmp/conpot/__conpot__hi0s02wj. Source specified : tar:///usr/lib/python3.9/site-packages/conpot/data.tar
 Please wait while the system copies all specified files
2022-03-09 14:02:37,818 Fetched 1.2.3.4 as external ip.
2022-03-09 14:02:37,819 Conpot GuardianAST initialized
2022-03-09 14:02:37,819 Found and enabled guardian_ast protocol.
2022-03-09 14:02:37,820 No proxy template found. Service will remain unconfigured/stopped.
2022-03-09 14:02:37,820 GuardianAST server started on: ('0.0.0.0', 10001)
2022-03-09 14:04:14,067 New guardian_ast session from 192.168.208.1 (7f0c9159-271d-432c-9e01-2c5eb1d75110)
2022-03-09 14:04:14,067 New GuardianAST connection from 192.168.208.1:60070. (7f0c9159-271d-432c-9e01-2c5eb1d75110)
2022-03-09 14:04:20,071 Non ^A command attempt 192.168.208.1:60070. (7f0c9159-271d-432c-9e01-2c5eb1d75110)
2022-03-09 14:04:20,071 GuardianAST client disconnected 192.168.208.1:60070. (7f0c9159-271d-432c-9e01-2c5eb1d75110)
2022-03-09 14:04:20,073 New GuardianAST connection from 192.168.208.1:60074. (7f0c9159-271d-432c-9e01-2c5eb1d75110)

JSON log:

{"timestamp": "2022-03-09T12:03:49.935986", "sensorid": "conpot", "id": "c1d7e4be-0a27-46fa-88f6-18d65f2deb1b", "src_ip": "192.168.128.1", "src_port": 52392, "dst_ip": "192.168.128.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "NEW_CONNECTION"}
{"timestamp": "2022-03-09T12:03:49.935986", "sensorid": "conpot", "id": "c1d7e4be-0a27-46fa-88f6-18d65f2deb1b", "src_ip": "192.168.128.1", "src_port": 52392, "dst_ip": "192.168.128.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "CONNECTION_LOST"}
{"timestamp": "2022-03-09T12:03:49.935986", "sensorid": "conpot", "id": "c1d7e4be-0a27-46fa-88f6-18d65f2deb1b", "src_ip": "192.168.128.1", "src_port": 52392, "dst_ip": "192.168.128.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "NEW_CONNECTION"}
{"timestamp": "2022-03-09T14:04:14.067252", "sensorid": "conpot", "id": "7f0c9159-271d-432c-9e01-2c5eb1d75110", "src_ip": "192.168.208.1", "src_port": 60070, "dst_ip": "192.168.208.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "NEW_CONNECTION"}
{"timestamp": "2022-03-09T14:04:14.067252", "sensorid": "conpot", "id": "7f0c9159-271d-432c-9e01-2c5eb1d75110", "src_ip": "192.168.208.1", "src_port": 60070, "dst_ip": "192.168.208.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "CONNECTION_LOST"}
{"timestamp": "2022-03-09T14:04:14.067252", "sensorid": "conpot", "id": "7f0c9159-271d-432c-9e01-2c5eb1d75110", "src_ip": "192.168.208.1", "src_port": 60070, "dst_ip": "192.168.208.2", "dst_port": 10001, "public_ip": "1.2.3.4", "data_type": "guardian_ast", "request": null, "response": null, "event_type": "NEW_CONNECTION"}

Happy to provide debug logs, just let me know what you need.

The only thing I noticed is that every time I stop conpot_guardian_ast while it is stuck at 100% CPU, I get the following error, which does not occur when it is stopped otherwise:

Traceback (most recent call last):
  File "src/gevent/greenlet.py", line 906, in gevent._gevent_cgreenlet.Greenlet.run
  File "/usr/lib/python3.9/site-packages/gevent/baseserver.py", line 34, in _handle_and_close_when_done
    return handle(*args_tuple)
  File "/usr/lib/python3.9/site-packages/conpot/protocols/guardian_ast/guardian_ast_server.py", line 327, in handle
    request += sock.recv(4096)
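The traceback points at a receive loop in the guardian_ast handler. One plausible cause of the 100% CPU symptom (an assumption based on the traceback, not confirmed against Conpot's code): once the peer disconnects, `recv()` returns `b""` immediately instead of blocking, so a loop that only checks for a protocol terminator never exits and spins forever. A minimal sketch of the bug and its fix (the `\x03` terminator and function names are illustrative):

```python
# Illustrative sketch, not Conpot's actual code.
import socket

def buggy_read(sock: socket.socket) -> bytes:
    request = b""
    while not request.endswith(b"\x03"):  # wait for an (illustrative) terminator
        request += sock.recv(4096)         # returns b"" forever after EOF: busy loop
    return request

def fixed_read(sock: socket.socket) -> bytes:
    request = b""
    while not request.endswith(b"\x03"):
        chunk = sock.recv(4096)
        if not chunk:                      # EOF: stop instead of spinning
            break
        request += chunk
    return request
```

If this is the mechanism, a scanner like nmap that opens the port, sends a partial or empty probe, and disconnects would leave one greenlet per probe spinning in `buggy_read`-style loops, which matches the observed behavior.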

@bestrocker221
Author

In my latest local version, all the protocols in the default template work correctly after multiple -sV scans. I just need to find a proper way to make a PR, since I am making modifications directly in my virtualenv dev environment.

@t3chn0m4g3
Contributor

@bestrocker221 Interesting, the default template shows no sign of 100% CPU usage (after nmap is finished) on my end with latest master.

@t3chn0m4g3
Contributor

t3chn0m4g3 commented Mar 11, 2022

Behavior is the same with Py 3.7/3.8/3.9 with Alpine and Ubuntu as Docker images.

Dirty, but works fine if docker-compose uses restart=always:

STOPSIGNAL SIGINT
HEALTHCHECK CMD if [ $(ps -C mpv -p 1 -o %cpu | tail -n 1 | cut -f 1 -d ".") -gt 75 ]; then kill -2 1; else exit 0; fi
  • -p 1 monitors PID 1 only; the process is killed (SIGINT, per the STOPSIGNAL) if its CPU usage exceeds 75%.
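The same watchdog idea can be sketched outside Docker. This is a hypothetical helper, not part of Conpot; it assumes a procps-style `ps` that supports `-o %cpu=`, and the function names are made up for illustration:

```python
# Hypothetical watchdog sketch of the HEALTHCHECK logic above:
# poll a process's CPU usage and SIGINT it past a threshold.
import os
import signal
import subprocess

def parse_cpu(ps_output: bytes) -> float:
    """Parse the output of `ps -p PID -o %cpu=` into a float."""
    return float(ps_output.strip())

def is_hot(cpu: float, threshold: float = 75.0) -> bool:
    """True when CPU usage exceeds the restart threshold."""
    return cpu > threshold

def watchdog_tick(pid: int, threshold: float = 75.0) -> bool:
    """One polling step: SIGINT the process if it is running hot."""
    out = subprocess.check_output(["ps", "-p", str(pid), "-o", "%cpu="])
    if is_hot(parse_cpu(out), threshold):
        os.kill(pid, signal.SIGINT)  # same signal as STOPSIGNAL SIGINT
        return True
    return False
```

Combined with restart=always in docker-compose, SIGINT-ing the hot process effectively restarts the container, which is exactly the workaround described.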

Keeping 🤞 for a solution, though 😄

@bestrocker221
Author

@t3chn0m4g3 Hi, that might work, but it is just a workaround that does not actually fix the underlying code.

I left conpot with the default template running for a week and it eventually stopped functioning; when I checked, it was using 100% CPU. Since it is a single process, I cannot tell what is going on without a debugger. More testing is needed.

Restarting the process as you suggest can work, but it is indeed "dirty".

@t3chn0m4g3
Contributor

@glaslos @xandfury Can you reproduce this? Can I provide logs to help solve it?
