
RFC: Implement Functionality in Metasploit #3

Open
sempervictus opened this issue Jan 6, 2018 · 7 comments

@sempervictus

So there's a basic HTTP proxy in the PR queue with all the trappings of Rex, and it would be cool to have the SSRF proxy pivoted into a client network natively via Rex sockets (and also as a type of HTTP tunneling session handler). I haven't read over the codebase yet, but figure you might be able to ballpark the effort of porting the SSRF parts over while leaving that proxy code to work as it does.


bcoles commented Jan 6, 2018

Hi @sempervictus

If I understand correctly, it wouldn't take much effort. First, some background, to ensure we're on the same page.

Design

SSRF Proxy is designed to be used as both a command line proxy tool and a Ruby library.

The command line tool is designed to allow existing tools such as metasploit, dirb, nikto, sqlmap, etc to be run on hosts via the proxy, effectively rendering the SSRF transparent to the tools (or at least as transparent as possible).

As a command line utility, SSRF Proxy can be leveraged from msfconsole by running set proxies http:127.0.0.1:8081.
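Concretely, that side of the workflow might look like the following. The SSRF Proxy listener flags shown in the comment are an assumption (check ssrf-proxy --help for the actual options); only the msfconsole line comes from this thread.

```shell
# Start SSRF Proxy listening locally (exact CLI flags vary by version):
#   $ ssrf-proxy --port 8081 ...

# Then, in msfconsole, route module HTTP traffic through the proxy:
msf > set proxies http:127.0.0.1:8081
```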

The library is designed to offer a programmable interface for SSRF vulnerabilities, laying the groundwork for further SSRF tools, such as an SSRF scanner for identification and capability enumeration, and an SSRF exploitation tool with pre-configured exploits for the low-hanging fruit typically exploited over SSRF. Both of these are in the works, but require a stable library to build from.

Implementation

Ideally, the library would be sufficiently robust such that Metasploit could simply require ssrf_proxy to leverage SSRF Proxy functionality.

As an aside, it's worth noting that the version 0.0.5.pre development branch is a large refactor and breaks backwards compatibility.

In its current state, the existing implementation has a few shortcomings.

Firstly, the command line tool does not support HTTPS (a workaround exists). A fix is in the works but not ready for release. The library does support HTTPS. Implementing HTTPS for the command line utility is currently the highest priority. This is unlikely to affect any integration with Metasploit, as the integration would make use of the library rather than the command line utility.

Secondly, SSRF Proxy does not support SSRF via CRLF injection / request smuggling. While this kind of SSRF is a large part of the SSRF paradigm, especially following Orange Tsai's excellent research on exploiting URL parsers, it was not a large part of the original design as it's rare to achieve bi-directional communications using this method. Usually it's not possible to view the response. Support for CRLF injection is a possibility in the future. It's something I'll look at after implementing HTTPS.

Thirdly, SSRF Proxy can target only HTTP(S) servers vulnerable to SSRF. Recent refactoring makes other target protocols possible (mongo, for example), but they would still take a lot of work to (re)implement.

Fourth, SSRF Proxy can only tunnel HTTP(S) traffic. However, as it's an HTTP(S) proxy, there's nothing stopping you from smuggling non-HTTP traffic inside HTTP traffic through SSRF Proxy, like you would in any other inter-protocol attack, so long as the SSRF server allows some control over the request body, such as POST data.

Integration

As for integration with Metasploit, this is something I've wondered about in the past. It would be cool to leverage SSRF for a command and control channel, and to target SSRF on an internal host over a session pivot.

Currently, SSRF Proxy issues its own HTTP requests (using Net::HTTP); however, foreseeing that consumers of the library may wish to submit requests using their own libraries, I've recently done some refactoring.

Recent modifications to SSRF Proxy (in the dev branch) mean it should be fairly simple to modify the SSRF Proxy library such that any Ruby program can give it an HTTP request and retrieve the appropriately formatted HTTP request, ready for submission to the SSRF server using whichever HTTP library the developer chooses. While this isn't implemented yet, it should be fairly easy to implement.
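The core of that flow can be sketched in a few lines of plain Ruby. Everything here is illustrative: prepare_request, the template URL, and the xxURLxx placeholder are assumptions about the eventual API, not the library's actual interface.

```ruby
# Hypothetical sketch: embed a client's destination URL into an SSRF
# request template, producing a request string ready for whatever HTTP
# library the caller prefers. Names and placeholder syntax are assumptions.
require 'uri'

SSRF_TEMPLATE = 'http://vulnerable.host/redirect?url=xxURLxx'

# Substitute the client's destination URL into the SSRF template,
# URL-encoding it so it survives as a query parameter value.
def prepare_request(client_url, template = SSRF_TEMPLATE)
  template.sub('xxURLxx', URI.encode_www_form_component(client_url))
end

puts prepare_request('http://10.0.0.5/admin')
# => http://vulnerable.host/redirect?url=http%3A%2F%2F10.0.0.5%2Fadmin
```

The caller would then issue the returned URL with Net::HTTP, Rex, or anything else, which is exactly the decoupling discussed above.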

As for porting the code to Metasploit: if the library isn't sufficiently easy for other Ruby programs to leverage, then the library is a failure. Also, this has been my pet project on and off for more than 6 years, and will continue to be so for the foreseeable future. As such, I would prefer that Metasploit consume SSRF Proxy functionality as a gem rather than re-implementing the code. That said, it's MIT licensed, so you can do whatever you want with the code so long as you're not in breach of the license.

@bcoles
Copy link
Owner

bcoles commented Jan 6, 2018

Neglected to mention: in addition to providing an HTTP request to SSRF Proxy as input and retrieving the appropriately formatted HTTP request as output, the same can be done with HTTP responses.

Changing the library would be fairly simple. The following code segments output the formatted request and response for debugging purposes. Instead, the formatted strings could be returned from format_request and format_response, exposed as public methods.

      logger.debug("Prepared request:\n" \
                   "#{ssrf_request.method} #{ssrf_request.url} HTTP/1.1\n" \
                   "#{ssrf_request.headers.map{|k, v| "#{k}: #{v}"}.join("\n")}\n" \
                   "#{ssrf_request.body}")
      logger.debug("Prepared response:\n" \
                   "#{result['status_line']}\n" \
                   "#{result['headers']}\n" \
                   "#{result['body']}")
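As a self-contained sketch of that change (the SsrfRequest struct is a stand-in for the library's internal object; the method names mirror the debug code above but aren't the library's public API yet):

```ruby
# Return the formatted request/response as strings from public methods,
# instead of only emitting them via logger.debug.
SsrfRequest = Struct.new(:url, :method, :headers, :body)

def format_request(req)
  "#{req.method} #{req.url} HTTP/1.1\n" \
  "#{req.headers.map { |k, v| "#{k}: #{v}" }.join("\n")}\n" \
  "#{req.body}"
end

def format_response(result)
  "#{result['status_line']}\n#{result['headers']}\n#{result['body']}"
end

req = SsrfRequest.new('http://internal/', 'GET', { 'Host' => 'internal' }, '')
puts format_request(req)
# GET http://internal/ HTTP/1.1
# Host: internal
```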


sempervictus commented Jan 6, 2018 via email


bcoles commented Jan 7, 2018

The HTTP objects in the above example aren't stdlib objects.

The request object is a Struct, Struct.new(:url, :method, :headers, :body):

'url'     # [String] client request destination URL
'method'  # [String] HTTP request method
'headers' # [Hash] HTTP request headers
'body'    # [String] HTTP request body
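That object, reconstructed as a runnable sketch (the field values are made up for illustration):

```ruby
# The request object as described: a plain Struct with four members.
Request = Struct.new(:url, :method, :headers, :body)

req = Request.new(
  'http://internal.host/index.php',                           # destination URL
  'POST',                                                     # HTTP method
  { 'Content-Type' => 'application/x-www-form-urlencoded' },  # headers Hash
  'q=test'                                                    # request body
)

puts req.method
# => POST
```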

The response object is a Hash:

'url'          # [String] client request destination URL (also available on the response object)
'http_version' # [String] HTTP response version
'code'         # [Integer] HTTP response status code
'message'      # [String] HTTP response status message
'status_line'  # [String] "HTTP/<version> <code> <message>"
'headers'      # [String] HTTP response headers ('\n' separated)
'body'         # [String] HTTP response body

After sending the request and receiving a reply (or no reply, in the event the SSRF server didn't respond), the response object is also populated with a duration key storing the request duration.
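Put together, the response Hash looks something like this (all values invented for illustration, including the placeholder timing around where the request would be issued):

```ruby
# The response Hash as described, including the 'duration' key that is
# added after the request completes.
start  = Process.clock_gettime(Process::CLOCK_MONOTONIC)
# ... the request to the SSRF server would be issued here ...
finish = Process.clock_gettime(Process::CLOCK_MONOTONIC)

response = {
  'url'          => 'http://internal.host/',
  'http_version' => '1.1',
  'code'         => 200,
  'message'      => 'OK',
  'status_line'  => 'HTTP/1.1 200 OK',
  'headers'      => "Server: Apache\nContent-Type: text/html",
  'body'         => '<html>...</html>',
  'duration'     => finish - start
}

puts response['status_line']
# => HTTP/1.1 200 OK
```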

These objects can be reworked internally where necessary. It's also likely there will be some minor refactoring here for consistency as part of the recent changes (duplicate request headers aren't handled properly, and the response headers should be a Hash rather than a newline-separated string).

It would be fairly trivial to add to_* methods (to_json, to_hash, to_nethttp_request_object, to_webrick, etc.) to act as a translation layer, returning the request/response in an acceptable format for consumption. I would prefer not to drag in Rex as a dependency unless necessary; however, if Rex offers some advantages (HTTP parsing?), I could be convinced to do so.
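One such translation method could be sketched like this, converting the request struct into a stdlib Net::HTTP request object (the struct and method names are illustrative, not the library's actual API):

```ruby
# Translation layer sketch: internal request struct -> Net::HTTPRequest.
require 'net/http'
require 'uri'

Request = Struct.new(:url, :method, :headers, :body) do
  def to_nethttp_request_object
    uri   = URI.parse(url)
    klass = Net::HTTP.const_get(method.capitalize) # e.g. 'GET' -> Net::HTTP::Get
    req   = klass.new(uri)
    headers.each { |k, v| req[k] = v }
    req.body = body if req.request_body_permitted?
    req
  end
end

req = Request.new('http://internal.host/admin', 'GET',
                  { 'User-Agent' => 'ssrf-proxy' }, nil).to_nethttp_request_object
puts req['User-Agent']
# => ssrf-proxy
```

A to_rex_request_object method could follow the same pattern for consumers already inside Metasploit, without making Rex a hard dependency of the gem.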

Changing the code to allow passing in raw sockets is a much bigger task. I might need to see an example of what you'd like to achieve. This does tie in with implementing an extendable request layer which is something I've been considering as part of allowing developers to handle issuing of requests themselves (and due to Net::HTTP being an unfriendly library), but hasn't been a high priority.

@sempervictus

We can chain the proxies as a hacky shortcut, forcing your SSRF proxy through msf SOCKS or that HTTP proxy and hitting targets over a pivot (I think).
I think that to fit the reqs of being a portable gem, a good CLI tool, and specifically portable to MSF (where we have basically rewritten a ton of stdlib for the specific needs of the community), we need to separate out the pieces that are Net::HTTP-specific and the SSRF-relevant code into modules within their own files, like we do in Rex/Msf, and then compose class objects for specific uses: the CLI relying on Net::HTTP, while Rex::Proto::Http::Proxy::SSRF would compose from Rex modules plus the SSRF code, which would evolve in the gem and be updated in the msf Gemfile as we go.
Hell, if my proxy code ain't landed soon I'll publish it as a plugin, because it populates the wmap targets by user/scanner activity which, umm, not only framework users might want. That approach might benefit this venture, as it would let us muck about with pre-release code without making Brent go bald writing catch-all exception handlers around changes coming from upstream.
I'm hoping to get a few hours tonight to either finish the dnsruby conversion for the Rex::Proto::DNS stuff, or finalize a working handler PoC for the DefconRussia folks (I have all the proto parsers and encoders done). Once I dig out of that tech debt a bit, I could try to PR what I mean about the code separation, if you think that approach is viable.
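To make the proposed separation concrete, here's a rough, entirely hypothetical sketch of SSRF logic as a mixin composed with a swappable transport (every module, class, and method name below is invented for illustration):

```ruby
# SSRF-specific logic lives in its own mixin, decoupled from transport.
module SSRF
  module Formatter
    # Embed the client's destination URL into the SSRF template.
    def wrap_url(template, client_url)
      template.sub('xxURLxx', client_url)
    end
  end

  # One transport flavour; a Rex-based class would swap this module out.
  module NetHttpTransport
    def send_request(url)
      "Net::HTTP would fetch #{url}" # placeholder for a real Net::HTTP call
    end
  end
end

# CLI flavour: SSRF logic + stdlib transport. A Rex::Proto::Http::Proxy::SSRF
# class would instead compose SSRF::Formatter with Rex socket transport.
class CliProxy
  include SSRF::Formatter
  include SSRF::NetHttpTransport
end

proxy = CliProxy.new
puts proxy.send_request(proxy.wrap_url('http://host/?url=xxURLxx', 'http://internal/'))
# => Net::HTTP would fetch http://host/?url=http://internal/
```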


bcoles commented Jan 7, 2018

I think it would be best to wait until I've finished refactoring. It seems the direction the project is heading is in line with what you need. This will take some time. A few days, time permitting, but more likely a couple weeks.

The implementation of the prepare_request and prepare_response methods would suit your needs, allowing developers to issue requests using a library of their choice, rather than relying on SSRF Proxy to issue the requests.

You want two things:

  • The methods used to prepare the request and response to be exposed as public methods. Recent refactoring is heading in this direction.
  • The request layer to be separated from the parsing layer into separate module(s). This was likely going to be implemented during the refactoring to support HTTPS.

It's worth noting that issuing requests is not specific to the CLI; it's intentionally coupled with the library. The primary reason for this is to determine request duration. Stripping the duration property from the response object would force developers to calculate the duration themselves. While unfortunate, it's an acceptable loss.

This leaves two issues to be resolved.

The first is whether to separate the SSRF functionality out into a separate gem, leaving the proxy functionality in the SSRF Proxy gem. This would decrease the number of dependencies for developers who only want to use the library, which is good. Perhaps this is what you meant by a separate module. I assumed you were referring to a separate module within the same namespace.

However, this would incur some development overhead, if for no other reason than I'll have to change namespaces. As I want to eventually support more than just HTTP(S), this would also require some refactoring.

The second is designing request and response objects for consumption outside of the library.

I noticed you've forked the project. It's important to work from the 0.0.5.pre branch which contains significant refactoring, and not the master branch.


sempervictus commented Jan 7, 2018 via email
