How do I get rid of Pihole? I tried everything. #243

Open
BlueWings172 opened this issue Aug 18, 2021 · 5 comments

@BlueWings172

Hi

Pihole hasn't been working properly, so I decided to remove it with the intention of reinstalling it on a clean slate.

This proved much harder than I thought. I am new to this, so I started with the menu, where I deselected Pihole and rebuilt the stack, but Pihole remains running and accessible from the browser. I tried [Delete all stopped containers and docker volumes] in the menu, as well as [Delete all images not associated with a container], but neither does anything when I press "y", even after 10 minutes; when I press Enter it says the process terminated. I also tried the delete-unassociated-images option.

I tried a restore from yesterday's backup, but today's data still shows in Pihole!

I tried to delete the persistent data with the below commands:

cd ~/IOTstack
docker-compose stop pihole
docker-compose rm -f pihole
sudo rm -rf ./volumes/pihole
docker-compose up -d pihole

I tried to stop all containers from the menu and tried all 3 options above, and Pihole still survived all that. Then I tried what was described as the nuclear option in the wikis by running sudo git clean -d -x -f, which seemed to delete most folders and stop the other containers, but Pihole still lives on and is accessible from the browser with all the query data!

I also tried running docker system prune, and it didn't do anything (0B space reclaimed).

When I run docker ps I get the below:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fa04311b6d01 raymondmm/tasmoadmin "/init" 59 minutes ago Up 59 minutes 443/tcp, 0.0.0.0:8088->80/tcp, :::8088->80/tcp tasmoadmin
fb6be53e7864 pihole/pihole:latest "/s6-init" 59 minutes ago Up 59 minutes (healthy) 0.0.0.0:53->53/udp, :::53->53/udp, 0.0.0.0:53->53/tcp, 0.0.0.0:67->67/udp, :::53->53/tcp, :::67->67/udp, 0.0.0.0:8089->80/tcp, :::8089->80/tcp pihole
ba683b816800 ghcr.io/linuxserver/heimdall "/init" 59 minutes ago Up 59 minutes 0.0.0.0:8880->80/tcp, :::8880->80/tcp, 0.0.0.0:8883->443/tcp, :::8883->443/tcp heimdall
95c0d2c74a64 portainer/portainer-ce "/portainer" 59 minutes ago Up 59 minutes 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp, 0.0.0.0:9000->9000/tcp, :::9000->9000/tcp portainer-ce
b5cef6fe83db octoprint/octoprint "/init" 59 minutes ago Up 59 minutes 0.0.0.0:9985->80/tcp, :::9985->80/tcp octoprint
58d2ff63180a portainer/agent "./agent" 59 minutes ago Up 59 minutes 0.0.0.0:9001->9001/tcp, :::9001->9001/tcp portainer-agent

So all my containers are still on my system, and Pihole is even still accessible from the browser.

What can I do to completely eradicate Pihole and the other containers and do a reinstall or restore?

Isn't the point of docker to make it easier to add and remove containers?

I would appreciate any help.

Thanks

@Paraphraser
Contributor

Paraphraser commented Aug 18, 2021

Hi - first off, please read "this project is dormant".

If I had this problem and wanted to clobber everything, here is what I would do.

I'd start by taking down my stack:

$ cd ~/IOTstack
$ docker-compose down

Then I would run docker ps. If it gave me anything more than the titles line, I would interpret that as telling me that containers were somehow running outside of what was defined in my docker-compose.yml. I would then use:

$ docker stop ID
$ docker rm -f ID

where "ID" is the value from the "CONTAINER ID" column in the docker ps output. I would keep going until docker ps only returned the titles row.

Then I would move on to docker images. For anything that mentioned PiHole, I'd use:

$ docker rmi ID

where "ID" is the "IMAGE ID". If that command produced an error like this:

Error response from daemon: conflict: unable to delete 5d2168efbaca (cannot be forced) - image is being used by running container 37770d033386

I would do a docker rm ID where the ID is the one at the end of that message (in this case "37770d033386") then retry the rmi. If the rm produced an error of its own, I'd try a docker stop ID with the ID in that message. See the pattern? Keep following the errors and trying commands until something works and then retrace your steps.
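
Applied to PiHole specifically, the whole chase usually condenses to something like this (a sketch; the ancestor filter assumes the image really is pihole/pihole):

$ docker ps -aq --filter "ancestor=pihole/pihole" | xargs -r docker rm -f
$ docker images -q pihole/pihole | xargs -r docker rmi -f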

Once you are sure you have removed all traces of PiHole from docker images, run your test again to see whether PiHole still seems to be alive. If it does, that suggests you may once have installed PiHole "natively", outside of IOTstack and Docker. I'd go looking for signs of that with:

$ ps -ax | grep -i pihole

If I got hits, I'd start researching how to blow away a native install. If I didn't get hits but my tests suggested PiHole was still running, I'd have a logical contradiction, and I'd start wondering whether my tests really were getting DNS services from the RPi at all and, if they truly were, whether it was actually PiHole answering or maybe BIND.
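
Another cross-check is to ask which process actually owns port 53 (a sketch; ss ships with Raspberry Pi OS). If the owner is docker-proxy, the container is answering; if it's pihole-FTL, you have a native install; if it's named, that's BIND. The second command will simply report that the unit doesn't exist if there's no native PiHole:

$ sudo ss -tulpn | grep ':53 '
$ systemctl status pihole-FTL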

I hope some of this helps.

For the record, I have zero trouble with PiHole - since I started running it as a container, that is. It was a right pest when I ran it natively (before Andreas' video about IOTstack). All I have ever done since I started with gcgarner/IOTstack, now SensorsIot/IOTstack, is a docker-compose pull plus docker-compose up -d. That always results in the latest and greatest.

BUT, I only run PiHole as an ad-blocker. I do not run it as a general DNS server or DHCP server. I have a local upstream DNS (BIND running on a Mac). In essence, downstream devices needing blocking services use the PiHole RPi for DNS, the PiHole either blocks or forwards. If a domain name to be resolved is either unqualified or fully qualified in my local domain, PiHole relays to the local upstream DNS, otherwise to 8.8.8.8 and its friends.
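
In dnsmasq terms (dnsmasq being what PiHole runs under the bonnet), that local-domain relay boils down to a one-line directive dropped into PiHole's config volume; the catch-all upstream (8.8.8.8 and its friends) is set in PiHole's web UI. A sketch, with an illustrative domain and address:

$ cat ~/IOTstack/volumes/pihole/etc-dnsmasq.d/02-local-forward.conf
server=/my.home.domain/192.168.1.2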

@BlueWings172
Author

@Paraphraser

Thanks so much for your great help and for pointing out the status of the project. I really hope Graham Garner is well.

I have tried the above, in addition to the below which I found online, and it did finally eradicate Pihole.

docker rm -vf $(docker ps -aq)
docker rmi -f $(docker images -aq)
docker system prune

When I previously ran the "nuclear option" from the menu, the volumes folder disappeared and I did not understand where Pihole was running from. Then, after running the commands, the volumes folder appeared again and I deleted its contents.

I was able to run a restore and everything worked out!

Thanks again.

@Paraphraser
Contributor

If you look through the docker-compose.yml you will see that almost every service definition has a volumes directive, and that most of those have left-hand sides beginning with ./volumes/.
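
For example, the PiHole service definition contains something like this (a representative fragment - check your own docker-compose.yml for the exact paths):

pihole:
  image: pihole/pihole:latest
  volumes:
    - ./volumes/pihole/etc-pihole:/etc/pihole
    - ./volumes/pihole/etc-dnsmasq.d:/etc/dnsmasq.d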

Under gcgarner there were several that started with ./services, but I think those have all now moved to ./volumes under SensorsIot.

When you run docker-compose up -d, one of the things that happens is that the left-hand side of each of those volumes directives is checked and, if neither a file nor a folder exists at that path, a folder is created with root ownership. The same thing happens inside the containers with the right-hand sides. Generally, you're mapping an existing internal folder to a potentially non-existing external folder but, sometimes, you also want an internal folder created too.
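
You can watch this happen. Something like the following (the ls output is illustrative, and obviously don't do this to a container whose data you care about) shows the folder coming straight back, owned by root:

$ cd ~/IOTstack
$ docker-compose stop pihole
$ docker-compose rm -f pihole
$ sudo rm -rf ./volumes/pihole
$ docker-compose up -d pihole
$ ls -ld ./volumes/pihole
drwxr-xr-x 4 root root 4096 Aug 18 12:00 ./volumes/pihole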

Under gcgarner there were several cases where a volumes mapping was a file to a file. That worked if the external file was present but if the external file ever went walkabout, docker-compose would promptly create a folder in its place. Mapping a folder to a file is, err, "problematic" so containers would generally barf. Then the user would come along and find a root-owned folder where a file should be and that's where more than one user would declare defeat. I think all of those have gone under SensorsIot.
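
One way to spot that failure mode is to search for directories whose names look like files (a sketch, assuming the usual extensions):

$ find ~/IOTstack/volumes -type d \( -name '*.conf' -o -name '*.yml' -o -name '*.db' \) -ls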

Anyway, that's why the volumes folder and a bunch of sub-folders re-appeared. It was docker-compose behaving normally.

On this topic, there are two classes of container: those that are well-behaved and those that aren't. See this issue for a discussion. The general idea is that you (the user) should always be able to start from a clean slate by deleting volumes or one or more named sub-folders (eg volumes/mosquitto) then "up" the stack, and each container facing a clean-slate situation should initialise with sensible defaults such that the user has a basis for moving forward. We're getting closer to that goal in SensorsIot too.

Sometimes it simply isn't feasible to force a container to be well-behaved. A good example is Zigbee2mqtt where Docker affords no mechanism for dynamic discovery of the attached dongle. You just have to figure it out. And then there are some like rtl_433 where, even when I know I'm giving the silly thing the correct devices, it still refuses to play the game... 🤬
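
For what it's worth, the manual way to figure the dongle out is to list the stable device paths and copy the relevant symlink into the container's devices: clause, since the by-id path survives reboots and re-enumeration (a sketch):

$ ls -l /dev/serial/by-id/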

@BlueWings172
Author

Very interesting. That explains the resurrection of the volumes folder. What is strange is that Pihole had my historical data when I couldn't see the volumes folder, and also after it was recreated. I wonder where it is keeping its data.

thanks for the explanation.

@Paraphraser
Contributor

In that case, I'd go back to what I said in my original reply about checking whether PiHole really was running under IOTstack or had, perhaps, been installed natively at one point.

I began my own Raspberry Pi journey with a 3B+ and a native install. When I started to experiment with IOTstack I used a clean image on a different SD card. I only had a handful of custom black/whitelist entries so it was easy enough to re-enter those. But, if I had installed IOTstack on top of the original SD, I might well have found something like this. It's just a guess. Instinct suggests the native install would probably have started before the containerised version, that Docker would've moaned about the ports being in use and sent the container into a restart loop, and that I would've noticed that moaning and figured it out. But I don't know. Maybe Docker would grab the ports first and the native install would then moan into the system log where I might not have noticed it...
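
If you ever wanted to test that theory, both failure modes leave fingerprints (a sketch; the log location varies with the OS release). A container stuck in a restart loop shows "Restarting" in the Status column, while a native daemon that lost the race for port 53 complains into the system log:

$ docker ps --format 'table {{.Names}}\t{{.Status}}'
$ sudo grep -i 'address already in use' /var/log/syslog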

Can't think of any other reason. I agree - a bit of a mystery.
