
NT hash lookup #161

Closed
wants to merge 5 commits into from

Conversation

@m4lwhere commented Jan 7, 2024

This module parses the nxc database for NT hashes and queries them against the https://ntlm.pw database. It works by connecting to the local database, collecting all NT hashes, and sending them as a POST request to ntlm.pw. Any matches are then written back to the database and associated with the accounts that share the NT hash.

Note that this module is NOT OPSEC safe: any time hashes are sent outside of our control, there is a potential loss of confidentiality.

[image]

This module can be invoked with the name nt_lookup for the smb protocol.

nxc smb 192.168.56.11 -M nt_lookup
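For anyone reviewing, the overall flow is roughly the sketch below. This is illustrative only: the endpoint path, payload format, and helper name are assumptions rather than the actual module code or the documented ntlm.pw API (see https://ntlm.pw/docs for the real interface).

```python
import requests

# Placeholder endpoint and payload format; the real bulk lookup API is documented at https://ntlm.pw/docs
NTLM_PW_BULK_URL = "https://ntlm.pw/api/lookup"


def lookup_nt_hashes(nt_hashes, timeout=30):
    """Send a batch of NT hashes to ntlm.pw and return {nt_hash: plaintext} for any matches.

    Module flow: read NT hashes from the local nxc database, POST them in bulk,
    then write any recovered plaintexts back to the accounts sharing each hash.
    """
    resp = requests.post(NTLM_PW_BULK_URL, data="\n".join(nt_hashes), timeout=timeout)
    resp.raise_for_status()
    matches = {}
    for line in resp.text.splitlines():
        # Assumed "hash:plaintext" response lines; hashes without a plaintext are skipped.
        nt_hash, _, plaintext = line.partition(":")
        if plaintext:
            matches[nt_hash.strip().lower()] = plaintext.strip()
    return matches
```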

@Marshall-Hallenbeck (Collaborator)

Whoa this is sick!

@ILightThings (Contributor)

It's not really my place to say, but I am a bit wary of this as a service.
https://ntlm.pw/about states that it will not open source the service or its dataset. To its credit, it is entirely free, with rate limits in place and no pay-to-play options. But the fact that it's closed source means they can change that on a dime, and there is not much we can do about it.

However, ignoring that: reading their API docs, they offer 5000 points every 15 minutes, and via the API each hash request costs 4 points. It would be best to use a method that abides by the rate limit or, at the very least, warns that requests will be limited based on usage. That way, you are not losing all the hash results once you hit the rate limit.

5 requests of 250 hashes = 1250 hashes 
1250 hashes * 4 credits each = 5000 credits
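Put differently, a tiny Python helper for the budget math above (the constants just restate the numbers from the API docs as quoted here):

```python
POINTS_PER_WINDOW = 5000  # points granted every 15 minutes
BULK_COST_PER_HASH = 4    # points per hash via the bulk API

def hashes_per_window():
    # 5000 // 4 == 1250 hashes resolvable per 15-minute window
    return POINTS_PER_WINDOW // BULK_COST_PER_HASH
```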

@lkarlslund

I'm running the https://ntlm.pw/ service, and thought I'd chip in here.

First of all thanks for coding this up, @m4lwhere - by making the service even more accessible you're giving more users the ability to resolve common passwords.

Secondly, the objections @ILightThings points out are true and valid, except for the open source part. Not open sourcing the backend has nothing to do with the usage restrictions and the policy on the site. The service is run and operated not as something that should undermine users' security but rather to help them get rid of weak passwords. Yes, I keep the unresolved hashes, and yes I crack them, and yes I use them for research - none are merged into the database yet, and I haven't decided whether I'll be doing that or not, but I reserve the right to do so.

From a service usage perspective, you get 5000 points every 15 minutes, and can spend them just as fast as you wish. Single lookups cost 5 points and bulk ones 4 points each, but everything might change whenever it makes sense.

You need to have enough points to cover the whole bulk submission for it to process - so if you have 100 points left and bulk submit 25 hashes it will process, but if you submit 26 you'll get a 204 code back. Right now there is a limit of 100 hashes per bulk lookup, so you need to batch them accordingly.

If you get a 204 code, just wait a minute and retry, when you get more quota points the request will resolve as it should.

I'm open to returning JSON or similar if anyone needs it.

@ILightThings (Contributor) commented Jan 9, 2024

Thank you @lkarlslund. I appreciate you taking my words as constructive points rather than derogatory ones. Your corrections to my assumptions provide the necessary insight. I've seen your name pop up on LinkedIn and GitHub regarding your products, and it carries a good reputation.

So, I would bring forth a few points regarding the plugin itself to keep focus on NetExec:

  1. As far as I am aware, this is the first plugin of its kind to rely entirely on an externally hosted service with no option for a locally hosted alternative (bloodhound, C2, etc.). This decision can/will dictate NetExec guidelines on the policy of third-party controlled services and third-party integration into NetExec.

  2. If this plugin should be approved, I feel a warning prompt would be necessary before executing the first time, stating that the data will be going to a third party and that confidential/sensitive data should be a consideration before doing so.

  3. I think a custom header/user agent should be sent with the requests so that @lkarlslund could gather analytics regarding where the traffic is coming from. It wouldn't hurt, as the service is already free to use.

I really do think https://ntlm.pw/ and its developer are authentic, do good work, and this plugin shows positive growth for the repo, but I just advise caution.

@m4lwhere (Author) commented Jan 9, 2024

Thanks for the discussion here everyone, I'm excited to see that this has been well received so far! Based on the feedback from both @ILightThings and @lkarlslund, I'll be making the following changes to the module:

  1. Discard lookups for the blank NT hash
  2. Identify if any existing plaintext passwords match captured NT hashes in the database (see the sketch after this list)
    1. This will prevent lookups for hashes we already know
    2. Additionally, if we've previously hit the credit threshold, re-running the module would only send unknown hashes
  3. If total hashes are more than 100, break into chunks of 100
    1. This ensures we stay within @lkarlslund's service limit of 100 hashes per lookup
  4. Add error checking for 204 status code
    1. Notify the operator if we exceed the current number of lookups
  5. Update the user-agent to mark the lookups completed from NetExec
    1. @lkarlslund if a custom Header works better instead let me know
  6. Add an additional warning, waiting for user input, before sending hashes off
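For item 2, the NT hash of a plaintext password is just MD4 over its UTF-16LE encoding, so known credentials can be filtered out locally before anything leaves the box. A minimal sketch, with illustrative helper names rather than the module's actual code:

```python
import hashlib

def nt_hash(password: str) -> str:
    """NT hash = MD4 over the UTF-16LE encoding of the password."""
    # Note: MD4 may require the OpenSSL legacy provider on newer systems.
    return hashlib.new("md4", password.encode("utf-16-le")).hexdigest()

def filter_unknown(nt_hashes, known_plaintexts):
    """Drop any hash we can already resolve from plaintext creds in the local database."""
    known = {nt_hash(pw) for pw in known_plaintexts}
    return [h for h in nt_hashes if h.lower() not in known]
```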

@ILightThings You bring up some great points as well. I'm unaware of any other locally hosted hash lookup options, but would love to integrate one as well if you know of any. I'm certainly interested in the other maintainers' and the community's thoughts on integrating a 3rd party service, considering that this can provide a direct benefit to operators leveraging the module. In the event that the https://ntlm.pw website is removed, we can always remove the module.

I can additionally add another warning prompt beyond the OPSEC alert, each time this runs, to ensure the operator has a clear warning that the hashes will be sent to a 3rd party.

The goal of this module was to help identify and show risk faster in an environment, showing that weak passwords have already been found and are available without any cracking required. Integrating @lkarlslund's service was chosen because of its simplicity and availability for the community.

Again, thanks for the discussion! :)

@lkarlslund commented Jan 11, 2024

Sorry!

I said code 204 for the quota limit, this is WRONG. A 204 is returned on SINGLE LOOKUPS using GET where there is no known plaintext.

Code 429 for quota limit - see https://ntlm.pw/docs ...

Also I've upped the limit to 500 hashes in each POST and doubled the available points granted every 15 minutes to 10000 points.
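With those updated limits, the batching and retry logic could look roughly like this sketch (lookup_fn stands in for a bulk lookup helper that raises requests.HTTPError on non-2xx responses; the constants mirror the numbers above):

```python
import time
import requests

BULK_LIMIT = 500    # max hashes per POST
RETRY_DELAY = 60    # seconds to wait before retrying after hitting the quota

def lookup_in_batches(nt_hashes, lookup_fn):
    """Split hashes into POST-sized batches and retry a batch on HTTP 429 (quota exhausted)."""
    results = {}
    for i in range(0, len(nt_hashes), BULK_LIMIT):
        batch = nt_hashes[i:i + BULK_LIMIT]
        while True:
            try:
                results.update(lookup_fn(batch))
                break
            except requests.HTTPError as err:
                if err.response is not None and err.response.status_code == 429:
                    time.sleep(RETRY_DELAY)  # wait for more quota points, then retry the same batch
                else:
                    raise
    return results
```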

@m4lwhere (Author)

@lkarlslund major thanks for upping the limits on the service!

I've added all of these enhancements into the code now; the following items were done:

  1. Any blank NT hash is discarded
  2. There is an explicit user warning on top of the OPSEC warning. This defaults to no if the user hits enter (i.e. [y/N])
    1. This warning is prompted every time the module runs
  3. All plaintext creds have their NT hash calculated, then any matching NT hashes are removed from the lookup
    1. There's no reason to look up hashes we already know!
  4. If there are more than 500 hashes, they're broken up into groups of 500 or fewer
    1. Each group is queried for results
  5. Added a User-Agent string for NetExec (see the snippet after this list)
    1. This takes the version from importlib.metadata.version("netexec")
    2. The UA is formatted like NetExec/1.1.0 nt_lookup module
  6. Added various other error checking throughout module
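The User-Agent from item 5 can be built like this (a small sketch, assuming the package is installed under the distribution name netexec):

```python
from importlib.metadata import version

# e.g. "NetExec/1.1.0 nt_lookup module"
USER_AGENT = f"NetExec/{version('netexec')} nt_lookup module"
headers = {"User-Agent": USER_AGENT}
```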

Separately, I've noticed that this module will only run if a valid machine is contacted on the network. Is there a way to allow this to run without requiring a machine to be contacted? If this is too complicated to implement then it's not a huge deal :)

Thanks again everyone, I'm looking forward to more feedback 🚀

@NeffIsBack (Contributor)

Thank you all for the discussion.

I have taken some time to think about this, as it seems quite difficult, and here are my thoughts. At first I was split, because what @ILightThings said is absolutely true: communicating with and even sending sensitive information to a 3rd party service is always a risk, especially if you have no control over it. And you are right, if this is going to be merged in, we will put a big fat red warning to make sure the user is aware that this is an external service.
At the end of the day, the decision that has to be made is whether you want to communicate with a third party service and potentially leak information, and that decision has to be made by the pentester using the tool. For me this is a separate decision; this module would just automate it, but you could also just export the hashes and look them up manually.

Also, imo ntlm.pw is a really cool project! However, what is a deal breaker for me is that "If you submit a hash that is not in our database, we may try to crack it, and we may also add it to the database".
I completely understand that this is a great way to grow the project and make it more useful, but netexec is aimed at pentesting. I am not a lawyer, but I am pretty sure that it would be a violation of the GDPR for a pentester to upload customer data (or possibly the entire AD) to an external source. Yes, as a consultant you are responsible for your actions and should know exactly what your tools do, but imo this is too big a risk to take without being fully aware of the consequences.

What I could imagine is a custom header signalling that the submitted hashes should be thrown away right after the comparison with the database. This decision would be up to you, @lkarlslund, and I fully understand if that's not what you want to do with the project.
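As a purely hypothetical illustration, such an opt-out could be as simple as an agreed-upon request header; the header name below is made up and would only mean something if the service chose to honor it:

```python
# Hypothetical opt-out header; not part of the ntlm.pw API today.
headers = {
    "User-Agent": "NetExec nt_lookup module",
    "X-Discard-After-Lookup": "true",  # ask the service not to retain unmatched hashes
}
```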

@lkarlslund commented Jan 16, 2024

Imagine you're standing in front of your locked front door. You don't want anyone to break in, so for your door there exist 2^128 possible keys, and if you try a wrong key 3 times the door will not unlock even with the right key. Cool door!

You have lost your primary key, but you have 50 keys in your drawer - unfortunately they're all unmarked - but you remember that the one to your door said "Frontdoor123". So worrisome, what to do?

You can call the keymaster anonymously (1-800-KEYMSTR) and ask him whether he remembers ever seeing a given key and what was written on it! He will then tell you what was written on it IF it was readable (some keys are incomprehensible, he doesn't know about those).

The keymaster hasn't seen all keys that exist, and he might have seen some fake keys in another keymaster's workshop. But he has other things to do than sitting on the free phone service, so you can only ask him about 100 keys at a time, then you have to wait a little and call again. The next time someone asks him about a key, he might answer that he knows about it, because you asked him about it - but he's old, and can't remember if it was a key he made, a fake one or one someone else asked him about.

Does asking him about your 50 keys undermine the security of your front door, given that the keymaster doesn't remember who you are and he already knows about 8.7B real/fake keys?

Back in the real world, where ntlm.pw exists - there is no sign up, I don't know who uses it, and I don't want to either. You can hide behind Tor, a VPN service or a free airport Wifi. Unless you submit the hash for "This is a dump of BigCorp's AD", and I can crack it, gather other hashes from the same IP at the same time ... and then what? I think you're pretty safe even if I did have evil intentions.

Adding a real world password to the database? It's one among 8.7B, you still have to know a hash in order to be able to look it up, and you have no way of knowing whether it's one I made up with a pattern generator or if it came from something a user submitted. I on the other hand have no way of knowing if the hashes submitted are real/fake or crackable/uncrackable. Once in a while I empty the queue of uncracked hashes, transfer them to another system, and they're just hashes in a bucket - I don't know where they came from, when they arrived or anything else other than that they're not in the database. So far ~30% is crackable, and having looked through what I got so far, there's nothing that IMO would compromise anything at all by mixing them in with the rest of the database.

Yes, I know this is controversial, and I'm very open to having a conversation about it - but adding a statement of intent to not log/save/add keys will not make a difference for security.

If you submit a hash to the site, you should:

  1. have permission to do so
  2. feel comfortable that it will not compromise the security of the people you're trying to help

Hugs all around!

edit: I have updated the "about" page and also added another page about "should you use this", comments welcome

@NeffIsBack (Contributor)

Sorry for the late reply, I have been very busy the last few weeks.

I agree, targeting a single account is absolutely not feasible. The scenario I fear is that one day a pentester will test a medium/large company, say "amazon". They get domain admin and dump their ntds.dit, including a few tens of thousands of hashes. For their nice report they want to check if these are crackable or maybe already known somewhere. Ntlm.pw would be a good choice, so they upload all their data.

This scenario poses two major problems:

  1. My understanding is that "uploading" (and by that I mean sending the hashes to an online service where they might be stored) is a violation of the GDPR. Yes, you should always know what your tool does, but I'm afraid that users (as they tend to be) don't fully read the about page or understand what using the service means: not only comparing hashes against a database, but also giving them to a third party.
  2. I am not worried about a particular target being cracked. My worry is that if you have a whole AD with dozens of passwords from a company, you will always get weak passwords that contain the company name. Having passwords with something like "@mazon12345" should never be in AD in the first place and should immediately raise alerts and force a password change, but in my experience companies are slow and often have weak password policies.

Having a bunch of passwords with the company name in the database does not guarantee that you will find a match, but it does pose the threat that one day someone could guess a weak password that includes a company name. With a few correct guesses, someone could gather a list of usernames (OSINT, some other leak or email list, whatever) and try to match them against an online service like Outlook. You wouldn't have to bruteforce a user, you'd just need to find 2-3 pretty good looking passwords and a list of users and you could just check where they match.

I know it's like finding a needle in a haystack, but for my comfort there would potentially be too many needles, where finding just one would suffice (even ignoring the GDPR problem).

Maybe I am too anxious, @Marshall-Hallenbeck @mpgn @zblurx what are your thoughts?

PS: The "should you use this" page is really good!

@Marshall-Hallenbeck (Collaborator)

@m4lwhere I think we sort of settled that this shouldn't be a module due to how easy it is for people to accidentally send sensitive hashes to a third party; HOWEVER, I think this is a sweet tool for CTFs, etc., and would be okay with putting a link in our Wiki (with a disclaimer) to your project if you want to spin up a repository. Alternatively we could turn it into a standalone script under Pennyw0rth, but you may prefer to have the code under your own GitHub account.

@NeffIsBack (Contributor)

Adding to this, @Adamkadaban shared a standalone project based on this idea on the Discord: https://github.com/Adamkadaban/NTLMCrack

@m4lwhere (Author)

@Marshall-Hallenbeck agreed with the rest of the analysis here, it's certainly too much of a risk within an operational environment. I'd be more than happy to have this provided as a link within the wiki :)

Going forward, I don't mind if it's kept as a separate script under Pennyw0rth; I'm not much of a developer, so my GitHub isn't my primary resume. I'm curious if there are any other CTF-centric tools I might be able to build into this as well.

Let me know if you need anything else from me, and major thanks everyone!

@Marshall-Hallenbeck added the wontfix (This will not be worked on) label on May 19, 2024