
Performance issues scanning large accounts #137

Open
rdegraaf opened this issue Oct 10, 2023 · 8 comments
Labels
bug Something isn't working

Comments

rdegraaf commented Oct 10, 2023

Describe the bug

PMapper takes excessive time to process data pulled from some accounts with many resources. This implies that it's using a very inefficient algorithm at some point.

For instance, when I scanned an account containing 369 CloudFormation Stacks in the same region, PMapper made 4 calls to cloudformation:DescribeStacks (because of pagination limits) ending at 10:27:24, then spent the next six minutes processing before emitting any results. From the debug log:

2023-10-10 10:27:54-0700 | INFO | principalmapper.graphing.cloudformation_edges | Generating Edges based on data from CloudFormation.
2023-10-10 10:33:48-0700 | INFO | principalmapper.graphing.cloudformation_edges | Found new edge: role/ can create a stack in CloudFormation to access role/

Next up was CodeBuild. That didn't take long because the account doesn't use CodeBuild. Then came Lambda. The account has 141 Functions with 129 of them in the same region. It finished pulling data and started processing at 10:40:19:

2023-10-10 10:40:19-0700 | DEBUG | principalmapper.graphing.lambda_edges | Identified 141 Lambda functions for processing

I gave up and killed the process 4 hours later; even if PMapper had eventually progressed to the next service, my AWS credentials would have expired by then anyway.

During this time, the Python process running PMapper was using approximately 100% of one CPU core. Its memory use was unremarkable, only 0.7% of available memory according to top. It was not waiting on the network and was not making repeated calls to AWS; I attached a proxy and confirmed that no further requests were made after 10:40:19. Running with --debug logging did not reveal anything about the problem. I tried running it under cProfile, which of course made everything orders of magnitude slower; it took 40 minutes to process AutoScaling data as opposed to the 21 seconds that it took without cProfile:

2023-10-10 11:14:13-0700 | DEBUG | principalmapper.graphing.autoscaling_edges | Looking at region us-west-2
2023-10-10 11:54:29-0700 | INFO | principalmapper.graphing.autoscaling_edges | Found new edge: role/ can use the EC2 Auto Scaling service role and create a launch configuration to access role/

I killed the profiled process after >3 hours while it was still working on CloudFormation; see pmapper-cprofile.txt for its output. Based on a quick scan of the results, it looks like _compose_pattern() in local_policy_simulation.py is a major bottleneck: the program spent about 170 minutes in there. Can you optimize that at all? Maybe find out if you're repeatedly compiling the same patterns and cache the results somehow?
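For anyone trying to reproduce the profiling, here is a minimal sketch of collecting and reading a cProfile dump like the one attached; the entry-point function is a placeholder, not PMapper's actual API:

import cProfile
import pstats

def build_graph():
    ...  # placeholder for the work being measured (e.g. PMapper's graph creation)

# Collect a profile and write it to disk.
cProfile.run('build_graph()', 'pmapper.prof')

# Print the 20 entries with the highest cumulative time; hot spots such as
# _compose_pattern() show up near the top of this listing.
pstats.Stats('pmapper.prof').sort_stats('cumulative').print_stats(20)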

It is possible that this issue is related to #115, though I'm not getting any permission errors.

To Reproduce

Sorry, I can't give precise details on the account because it isn't my account. By the time that someone investigates this issue, I most likely will no longer have access to the account. I don't know if the problem has to do with the number of resources or something about how they are configured.

Expected behavior

PMapper should run in a reasonable amount of time.

rdegraaf added the bug label on Oct 10, 2023
rdegraaf commented Oct 10, 2023

I looked a little further and realized that you are caching results in _compose_pattern() through the @functools.lru_cache decorator. However, the cache size of 2048 is far too small: in the checks for IAM alone, that function processed 7,100 unique inputs. I'm sure the caching helps a little, but it looks like many entries were being evicted from the cache before they were requested again.

So, I removed the size limit by changing this line (principalmapper/querying/local_policy_simulation.py:891):

@functools.lru_cache(maxsize=2048, typed=True)

to this:

@functools.cache

With that change, processing for CloudFormation dropped from 354 seconds to 41 seconds. Lambda went from >4 hours to 27 minutes.
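For context, here is a self-contained sketch of the cached pattern-compilation idea; the wildcard translation below is illustrative rather than a copy of PMapper's actual _compose_pattern(). Note that functools.cache requires Python 3.9+, and functools.lru_cache(maxsize=None) is equivalent on older versions:

import functools
import re

@functools.cache  # unbounded cache; same as functools.lru_cache(maxsize=None)
def _compose_pattern(value: str) -> re.Pattern:
    # Translate IAM-style wildcards: '*' matches any sequence, '?' a single character.
    regex = ''.join('.*' if ch == '*' else '.' if ch == '?' else re.escape(ch) for ch in value)
    return re.compile('^' + regex + '$', re.IGNORECASE)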

Those times still seem excessive to me. I suspect that there are other regex-related bottlenecks in PMapper. The call to re.match at local_policy_simulation.py:428 looks like another good candidate for optimization. Maybe if I find some time, I'll re-run my modified build with the profiler.

rdegraaf commented
I re-ran the modified version with cProfile; it turns out that local_policy_simulation.py:428 is a non-issue. However, _matches_after_expansion(), the function that calls _compose_pattern(), remains a bottleneck. In a 5225-second run (that I killed before completion), that function was called ~81 million times for a total of 1,769 seconds with 1,146 of those being in the function itself (most of the remainder was in pattern.match()). policy_has_matching_statement(), one of the main callers of _matches_after_expansion(), was also a big time-sink, consuming 660 seconds with ~2.4 million calls. There must be something calling those functions far too often.
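One hedged idea along those lines: lift the cache up a level and memoize the match result itself, so repeated (pattern, candidate) pairs skip both compilation and matching. The helper below is illustrative, not PMapper's actual code:

import functools
import re

@functools.lru_cache(maxsize=None)
def _wildcard_match(pattern: str, value: str) -> bool:
    # Translate IAM-style wildcards ('*' and '?') and compare case-insensitively.
    regex = ''.join('.*' if ch == '*' else '.' if ch == '?' else re.escape(ch) for ch in pattern)
    return re.match('^' + regex + '$', value, re.IGNORECASE) is not None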

There are also some basic data operations that may be called too often. 282 seconds were consumed by str.lower() (called 42 million times), which is more than 5% of the program's execution time. Adding entries to and removing entries from CaseInsensitiveDict consumed more than 10%. These are probably related, since CaseInsensitiveDict calls str.lower() on every operation. Maybe convert data to lower case on ingestion so that you don't need to worry about it later? Or maybe the number of CaseInsensitiveDict operations can be brought down from ~20 million to something more reasonable, at which point the time spent converting to lower case would no longer matter.
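A minimal sketch of the normalize-on-ingestion idea; the helper name and data are hypothetical, and PMapper's actual condition handling may differ:

def normalize_keys(raw: dict) -> dict:
    # Lower-case keys once, at ingestion time, so later lookups can use a plain
    # dict instead of a CaseInsensitiveDict that calls str.lower() on every access.
    return {key.lower(): value for key, value in raw.items()}

condition = normalize_keys({'aws:SourceArn': 'arn:aws:s3:::example-bucket'})
print(condition['aws:sourcearn'])  # lookup against a pre-lowered key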

Here's the profiler data

rdegraaf commented Oct 11, 2023

I scanned an even bigger account; PMapper took five hours to run (with the fix above but without cProfile). The graph stats are:

# of Nodes:              1487 (63 admins)
# of Edges:              14406
# of Groups:             19
# of (tracked) Policies: 2030

While that is a big account, it shouldn't take that long.

Fennerr commented Nov 28, 2023

This is some awesome commentary. I have also been struggling with pmapper's performance. Typically the CPU usage is high (~99%), while RAM usage stays below 1%.

I will try to look into the recommendations you made, and I also want to look into multi-threading the edge-identification code. I killed pmapper last night after it spent 6+ hours just trying to perform edge identification on Lambda.
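For illustration, a minimal sketch of fanning the per-function edge checks out with multiprocessing; the worker and data shapes are hypothetical stand-ins, not PMapper's actual edge code:

from multiprocessing import Pool

def check_one_function(args):
    func, nodes = args
    # ... evaluate which principals can reach or modify this Lambda function ...
    return []  # edges found for this function

def find_lambda_edges(functions, nodes, processes=4):
    # The per-function checks are independent, so they can run in worker processes.
    with Pool(processes=processes) as pool:
        per_function_edges = pool.map(check_one_function, [(f, nodes) for f in functions])
    return [edge for edges in per_function_edges for edge in edges]

One caveat: per-process caches (like the functools cache discussed above) are not shared between workers, and the node data gets pickled to each worker, so the gain depends on how heavy each individual check is.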

I am also looking at moving across to awspx. I was mainly using pmapper over awspx because it has more attack paths and a lower barrier to entry for contributions due to its more straightforward code base. But I am friends with the creator of awspx and would like to pick up improving it (the creator has moved on to a new job role, and maintaining awspx is no longer a priority for him).

skeggse commented Nov 29, 2023

Maybe related to #55. 16 days to complete is an interesting figure.

Fennerr commented Dec 7, 2023

@rdegraaf I made some slight optimizations to local_policy_simulation - there's probably still more that can be done, but it improved performance in a small account by ~60%. I have also implemented multiprocessing.

Can you give this branch of my fork a try? https://github.com/Fennerr/PMapper/tree/multiprocessing

@rdegraaf
Copy link
Author

rdegraaf commented Dec 9, 2023

@Fennerr Thank you for taking this on! Unfortunately, I no longer have access to the accounts described above or to anything similar; they are not my accounts and I only had temporary access to perform a security review. I'll see if I can find a co-worker who can help, but no promises.

Fennerr commented Dec 11, 2023

Thank you @rdegraaf - I have also committed these changes to my main branch.
