
Factor out memory allocation #151

Open: wants to merge 10 commits into base: master

Conversation

srebhan (Author) commented Aug 14, 2023

This PR factors out the memory allocation (and disposal) of buffers into a MemAllocator interface that abstracts memory-allocation strategies. The existing page-allocation strategy is then implemented as an instance of MemAllocator, including a unit test.
Finally, a SLAB allocation strategy is implemented to make better use of locked memory pages on systems with a very limited number of available locked pages, or for software that handles many secrets through memguard, such as Telegraf.

Provides a basis to implement #124.
Fixes #148.
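To illustrate the abstraction described above, here is a minimal sketch of what such an interface could look like. This is an assumption for illustration only: the method names (`Alloc`, `Free`) and the `pageAllocator` stand-in are hypothetical and may differ from the actual code in the PR.

```go
package main

import "fmt"

// MemAllocator abstracts a memory-allocation strategy, so that a
// page-based and a slab-based allocator can be swapped freely.
// (Sketch only; the real interface in the PR may differ.)
type MemAllocator interface {
	Alloc(size int) ([]byte, error)
	Free(buf []byte) error
}

// pageAllocator is a trivial stand-in for a page-based strategy.
type pageAllocator struct{}

func (pageAllocator) Alloc(size int) ([]byte, error) { return make([]byte, size), nil }
func (pageAllocator) Free(buf []byte) error          { return nil }

func main() {
	var a MemAllocator = pageAllocator{}
	buf, err := a.Alloc(32)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(buf)) // prints 32
	_ = a.Free(buf)
}
```

A slab allocator would satisfy the same interface, which is what lets callers choose a strategy without touching buffer-handling code.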

srebhan (Author) commented Sep 8, 2023

@awnumar any comments?

awnumar (Owner) commented Sep 8, 2023

Hi, thanks for the PR. I'm sorry I haven't had time to review it yet

awnumar (Owner) left a review comment


Thanks very much for your contribution. Please add your name to the AUTHORS file, if you wish.

Review comments on: core/exit.go, core/memallocator.go (outdated), core/memallocator_slab.go
srebhan (Author) commented Oct 16, 2023

@awnumar any comment?

srebhan (Author) commented Jan 3, 2024

@awnumar ping...

srebhan (Author) commented Feb 21, 2024

@awnumar any news on this PR? Anything I can do to get it merged?

awnumar (Owner) commented Mar 7, 2024

@srebhan sorry again for the delay. I sadly don't have a lot of free time lately.

The PR looks good to me. I have been thinking about how this could actually be integrated into the API. I think it would require an overhaul of how the library is used. In particular, users should be able to initialise an allocator object that they use to create buffers and enclaves, something like:

```go
type Allocator struct {
	ma      MemAllocator
	key     Coffer
	buffers []Buffer
}
```

This would also allow us to remove all of the global state we currently store within the library. It would also be a good opportunity, since we're introducing a breaking change anyway, to perform some other clean-up actions:

  • Remove the finaliser and remove (or change) the Panic and Exit wrappers. These were optimistic features that, in retrospect, are not suited to the problem they were intended to solve. Memguard sits at the wrong level for that, and the problem itself is more complex than these features make it seem. Callers should be responsible for their own cleanup, and an Allocator instance would make that easier for them to handle.
  • Possibly merge the core and root libraries. The layering is confusing and doesn't add anything.

In terms of your changes in this PR, it feels strange to have the new allocator merged into master when it would remain unused for the time being. How would you feel about either:

  • merging the changes into a branch, where development can continue
  • extracting out some of the relevant changes such as the MemAllocator interface and the PageAllocator, and having a separate PR for the Slab allocator (which can be merged into a branch for the time being)

srebhan (Author) commented Mar 14, 2024

@awnumar no worries!

Regarding your suggestion, I'm all for a v2 with the breaking changes you propose. Being able to create multiple, isolated memguard instances is a good thing IMO. A general interface to memguard could look like this:

```go
package memguard

import "runtime"

type Option func(*Guard)

func WithAllocator(allocator MemAllocator) Option {
	return func(g *Guard) { g.allocator = allocator }
}

type Guard struct {
	allocator MemAllocator
	buffers   bufferList
	key       *Coffer
}

func NewGuard(options ...Option) *Guard {
	g := &Guard{}

	// Apply the options
	for _, opt := range options {
		opt(g)
	}

	// Fill in defaults for mandatory options and initialize fields
	if g.allocator == nil {
		g.allocator = NewPageAllocator()
	}
	g.key = NewCoffer()

	// Make sure we don't leave unencrypted data behind
	runtime.SetFinalizer(g, func(dg *Guard) { dg.Destroy() })

	return g
}

func (g *Guard) Destroy() {
	...
}

func (g *Guard) NewBuffer(size int) (*Buffer, error) {
	if size < 1 {
		return nil, ErrNullBuffer
	}

	b := new(Buffer)

	// Allocate the total needed memory
	var err error
	b.data, err = g.allocator.Alloc(size)
	if err != nil {
		return nil, err
	}

	// Set remaining properties
	b.alive = true
	b.mutable = true

	// Append the container to the list of active buffers.
	g.buffers.add(b)

	// Return the created Buffer to the caller.
	return b, nil
}

...
```
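For illustration, a runnable miniature of that construction pattern. All names here are stand-ins invented for this sketch (a trivial `heapAllocator` replaces the real page allocator, and the `Guard` is stripped down to the allocator field); the real memguard types are more involved.

```go
package main

import "fmt"

// Minimal stand-ins so the sketch compiles on its own.
type MemAllocator interface{ Alloc(size int) ([]byte, error) }

type heapAllocator struct{}

func (heapAllocator) Alloc(size int) ([]byte, error) { return make([]byte, size), nil }

type Guard struct{ allocator MemAllocator }

type Option func(*Guard)

func WithAllocator(a MemAllocator) Option { return func(g *Guard) { g.allocator = a } }

func NewGuard(opts ...Option) *Guard {
	g := &Guard{}
	for _, opt := range opts {
		opt(g)
	}
	// Fall back to a default allocator when none was supplied.
	if g.allocator == nil {
		g.allocator = heapAllocator{}
	}
	return g
}

func main() {
	// Each Guard is an isolated instance with its own allocator.
	g := NewGuard(WithAllocator(heapAllocator{}))
	buf, _ := g.allocator.Alloc(16)
	fmt.Println(len(buf)) // prints 16
}
```

The point of the pattern is that two `NewGuard` calls yield fully independent instances, with no shared global state.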

I'm also willing to help out here if you can spend some time reviewing my changes. If so, I propose the following steps:

  1. Close this PR for now, as it needs to be rebased against the v2 version.
  2. Create a v1 branch with the current code in preparation for v2.
  3. Modify master's go.mod to github.com/awnumar/memguard/v2, potentially also bumping the minimum Go version if you like.
  4. Put up PRs to merge core into the main package.
  5. Put up PRs to factor out the global variables into a struct as shown above.
  6. Put up PRs to get rid of init() functions as far as possible.
  7. Rebase this PR against v2.
  8. Release a version of v2.

What do you think?

JustinTimperio commented

Hi all, I'd love to see this project move along and stay maintained. I don't have a ton of time, but I'd be happy to test things for you, @srebhan or @awnumar, if that's helpful. Happy to provide a code review as well.

awnumar (Owner) commented Mar 28, 2024

@srebhan

It's strange to go to /V2 when it's currently not even v1

In terms of the API, how about passing a config struct instead of passing functions to configure it? The approach sounds good overall.

srebhan (Author) commented Apr 30, 2024

@awnumar sorry for the late reply!

It's strange to go to /V2 when it's currently not even v1

I'm also fine with calling it v1. ;-)

In terms of the API, how about passing a config struct instead of passing functions to configure it?

The functional-options pattern has some strong benefits, but the biggest one IMO is that you can set specific options without touching the defaults of the other config options. This is especially relevant when a new field is added, as every user providing an explicit config struct will now silently override the new field with nil...
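The default-preservation argument above can be shown with a minimal, self-contained sketch (all names hypothetical, invented for this example): a new option is added after the fact, and existing call sites keep compiling while untouched fields keep their defaults.

```go
package main

import "fmt"

type Guard struct {
	pageSize int
	slabs    int // imagine this field was added in a later release
}

type Option func(*Guard)

func WithPageSize(n int) Option { return func(g *Guard) { g.pageSize = n } }

// WithSlabs was added after WithPageSize; old call sites need no changes.
func WithSlabs(n int) Option { return func(g *Guard) { g.slabs = n } }

func NewGuard(opts ...Option) *Guard {
	g := &Guard{pageSize: 4096, slabs: 8} // defaults are set first
	for _, opt := range opts {
		opt(g) // each option overrides exactly one field
	}
	return g
}

func main() {
	g := NewGuard(WithPageSize(8192)) // slabs keeps its default of 8
	fmt.Println(g.pageSize, g.slabs)  // prints 8192 8
}
```

A caller written before `WithSlabs` existed never mentions that field, so it can never accidentally clobber it, which is the contrast being drawn with a plain config struct.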

awnumar (Owner) commented May 6, 2024

@srebhan

The functional option pattern has some strong benefits, but the biggest one IMO is that you can modify specific options only without touching the defaults of other config options.

Yeah, it does have some benefits, though it's also more verbose, leading to long lines or arbitrary newlines in the code. Config structs are used throughout the standard library, so they're familiar and easy to reason about.

Also, if a user does not provide a value in a struct (therefore it gets assigned the zero value), this should be interpreted as "leave that option as the default value". This is similar to what the standard library does, e.g. here.

This is especially relevant if you add a new field as every user providing an explicit config will now silently override the new field with nil...

I believe this problem is shared with the "passing funcs as configs" pattern, as any code written before a new config option is added will automatically be assigned the default value.
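The zero-value-as-default convention being argued for here can be sketched as follows (names hypothetical; the standard library applies the same idea, e.g. an unset timeout on http.Server means "no limit"):

```go
package main

import "fmt"

// Config is a plain config struct; a zero PageSize means "use the default",
// so callers who don't set it are unaffected when defaults change.
type Config struct {
	PageSize int
}

const defaultPageSize = 4096

func effectivePageSize(c Config) int {
	if c.PageSize == 0 {
		return defaultPageSize
	}
	return c.PageSize
}

func main() {
	fmt.Println(effectivePageSize(Config{}))               // prints 4096
	fmt.Println(effectivePageSize(Config{PageSize: 8192})) // prints 8192
}
```

Under this convention, adding a new field later is also backwards-compatible: old code leaves it at its zero value, which the library interprets as the default.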


Successfully merging this pull request may close this issue: Minimize amount of locked memory pages for scaling
3 participants