
Question: Lazy restore tends to restore all pages rather than only those actually touched? #2399

Open · LanYuqiao opened this issue Apr 27, 2024 · 9 comments

@LanYuqiao commented Apr 27, 2024

For example, an application works like this:

  1. Initialization phase: load libraries into memory and do heavy initialization work.
  2. Accepting requests: run an RPC server that sits in a loop waiting for incoming requests. In this phase, only a small set of memory pages is touched.
  3. Handling requests: a request arrives and is handled. In this phase, more pages are touched than in phase 2.

When the application reaches phase 2, checkpoint it and then lazy-restore it. It seems that all pages are restored, rather than only the pages touched in phase 2.

Is it possible to restore only the pages touched in phase 2, and lazily restore the pages touched in phase 3? I think this would be true lazy restore.

@rst0git (Member) commented Apr 28, 2024

Is it possible to restore only the pages touched in phase 2, and lazily restore the pages touched in phase 3?

Adrian has a good blog post on how this could be achieved:
https://lisas.de/~adrian/posts/2016-Oct-14-combining-pre-copy-and-post-copy-migration.html
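
For reference, the command sequence from that post looks roughly like this (the directories /tmp/cp/1 and /tmp/cp/2 and port 27 follow the post; treat it as a sketch of the documented lazy-migration flow rather than exact copy-paste):

```sh
# Source: pre-dump copies memory while the task keeps running; the final
# dump then tracks only the pages dirtied since the pre-dump and leaves
# them to be served lazily via a page server.
criu pre-dump -t $PID -D /tmp/cp/1
criu dump -t $PID -D /tmp/cp/2 --prev-images-dir ../1 --lazy-pages --port 27

# Destination (with the image directories copied over): the lazy-pages
# daemon pulls faulted pages from the source, and restore registers the
# task's memory regions with userfaultfd.
criu lazy-pages --page-server --address $SRC_IP --port 27 -D /tmp/cp/2 &
criu restore -D /tmp/cp/2 --lazy-pages
```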

@LanYuqiao (Author)

Thanks for your reply. The blog is quite good, but my description was ambiguous: by 'touch' I mean 'access', not 'modify'. What I am asking is whether it is possible to restore only the pages accessed in phase 2; those pages are not necessarily dirty. In the blog you mentioned, Adrian pre-dumps all pages (in my case, all pages allocated in phases 1 and 2) to /tmp/cp/1, then dumps the pages dirtied in phase 2 to /tmp/cp/2, then restores everything pre-dumped to /tmp/cp/1 and lazily restores only the pages dumped to /tmp/cp/2. That does not look like lazy restore to me, since it eagerly restores everything written by the pre-dump.

@avagin (Member) commented Apr 29, 2024

That does not look like lazy restore to me, since it eagerly restores everything written by the pre-dump.

That is because it was designed for lazy live migration. The behavior that you expect can be easily implemented. How are you going to use it? What benefits do you see in this use case?

@LanYuqiao (Author)

The behavior that you expect can be easily implemented.

Could you give me some ideas? I have a naive one:
CRIU uses echo 4 > /proc/pid/clear_refs to track dirty pages. If I want to track accessed pages instead, I can use echo 1 > /proc/pid/clear_refs and then dump only the accessed pages. But how do I restore only the accessed pages rather than all of them?
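
To make that concrete, here is a rough sketch of the tracking side (not CRIU code; PID and VADDR are placeholders). Writing 4 to clear_refs resets the kernel's soft-dirty bits, which can be read back as bit 55 of each /proc/pid/pagemap entry; writing 1 resets the referenced/accessed bits instead, but pagemap has no readout for those, so you would have to go through the idle-page-tracking interface:

```sh
PID=1234                      # placeholder target process
VADDR=0x7f0000000000          # placeholder virtual address to probe

# Reset soft-dirty bits; pages written from now on become soft-dirty again.
echo 4 > /proc/$PID/clear_refs

# ... let the workload run ...

# pagemap holds one 8-byte entry per virtual page:
# bit 55 = soft-dirty, bit 63 = page present.
dd if=/proc/$PID/pagemap bs=8 skip=$((VADDR / 4096)) count=1 2>/dev/null | od -An -tx8

# Resetting the *accessed* bits instead has no pagemap readout; you would
# need /sys/kernel/mm/page_idle/bitmap (indexed by PFN) to read them back.
echo 1 > /proc/$PID/clear_refs
```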

What benefits do you see in this use case?

For many applications (serverless applications, for example), the memory working set goes through an inflate-then-deflate cycle. That is, during the initialization phase (loading lots of libraries) they access many more pages than during the serving phase (just an RPC server sitting there waiting for requests). A large fraction of the pages accessed in the init phase (I call them cold pages) are rarely accessed in the serving phase, so I want to keep those cold pages on disk and restore only the hot pages (those accessed in the serving phase). Cold pages can then be brought into memory on demand via page faults.

I think this can save memory and accelerate restoration.

@liruivah commented May 3, 2024

Hi, I used to work on using CRIU to implement the "lazy restore" you describe for serverless functions. I also noticed that the current lazy migration in CRIU is really lazy live migration, not lazy restore.

From my understanding, your idea is to track hot pages and prefetch them, and then use userfaultfd to load the cold pages on demand. I would recommend looking at a paper (https://dl.acm.org/doi/pdf/10.1145/3445814.3446714) where the authors implemented a similar idea on vHive. I hope their design can help you develop your own.

@LanYuqiao (Author)

In fact, I have already read the vHive paper, and this idea is inspired directly by it. vHive works on Firecracker VMs, whose checkpoint is dumped from the anonymous memory of the VM monitor process; it therefore contains the full contents of the VM's memory, which makes a record-and-replay implementation easier. CRIU, on the other hand, works on processes, whose checkpoints contain only the process's private data (anonymous and dirty file-backed pages). That is why I need to track and dump not only private data but all accessed pages.

But thanks for your kindness. :->

@LanYuqiao (Author)

And I still wonder whether the CRIU developers are interested in supporting true "lazy restore".

The behavior that you expect can be easily implemented.

Although I'm interested and I think it would be useful, it's a bit difficult for me to implement.

@rst0git (Member) commented May 4, 2024

@LanYuqiao, thank you for clarifying your use case. As Andrei mentioned, the original implementation was designed for live migration. In this scenario, there are residual dependencies between the source and destination machines. In particular, we need to make sure that all pages are eventually restored, because if the source machine becomes unavailable (e.g., due to a system failure), the restored application would fail.

Is it possible to restore only the pages touched in phase 2, and lazily restore the pages touched in phase 3?

It should be fairly easy to modify CRIU to restore memory pages only when a page fault occurs. However, this would result in poor performance for the restored application: loading a memory page from disk (or over the network) is significantly slower than accessing it directly in memory. This can be observed in the following demo: https://asciinema.org/a/4QgtYPW9XtTngTyCX5Jsibqth (the application is significantly slower for a few seconds after restore).
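
In case you want to reproduce this locally, plain (non-migration) lazy restore looks roughly like the following sketch; /tmp/cp is a placeholder images directory, and the daemon serving each first page access from the image files via userfaultfd is exactly the post-restore slowdown visible in the recording:

```sh
criu dump -t $PID -D /tmp/cp            # ordinary dump; images contain all pages
criu lazy-pages -D /tmp/cp &            # uffd daemon that serves pages from the images
criu restore -D /tmp/cp --lazy-pages    # each first access faults through the daemon
```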

What is the main problem you are trying to solve? Why do you want pages to be restored only when accessed in phase 3?

@LanYuqiao (Author)

What is the main problem you are trying to solve? Why do you want pages to be restored only when accessed in phase 3?

In my case, the number of pages accessed in phases 2 and 3 is much smaller than in phase 1. A large fraction of the pages accessed in phase 1 will never be accessed again, which means we do not need to restore all pages to run the app; we only need to restore the pages that are actually accessed. Restoring all pages wastes both memory and time.
