
Allocator traits and std::heap #32838

Open
6 of 12 tasks
nikomatsakis opened this issue Apr 8, 2016 · 470 comments
Labels
  • A-allocators: Area: Custom and system allocators
  • B-RFC-approved: Feature: Approved by a merged RFC but not yet implemented.
  • B-unstable: Feature: Implemented in the nightly compiler and unstable.
  • C-tracking-issue: Category: A tracking issue for an RFC or an unstable feature.
  • Libs-Tracked: Libs issues that are tracked on the team's project board.
  • S-tracking-needs-summary: Status: It's hard to tell what's been done and what hasn't! Someone should do some investigation.
  • T-lang: Relevant to the language team, which will review and decide on the PR/issue.
  • T-libs-api: Relevant to the library API team, which will review and decide on the PR/issue.

Comments

@nikomatsakis
Contributor

nikomatsakis commented Apr 8, 2016

📢 This feature has a dedicated working group, please direct comments and concerns to the working group's repo.

The remainder of this post is no longer an accurate summary of the current state; see that dedicated working group instead.

Old content

Original Post:


FCP proposal: #32838 (comment)
FCP checkboxes: #32838 (comment)


Tracking issue for rust-lang/rfcs#1398 and the std::heap module.

State of std::heap after #42313:

pub struct Layout { /* ... */ }

impl Layout {
    pub fn new<T>() -> Self;
    pub fn for_value<T: ?Sized>(t: &T) -> Self;
    pub fn array<T>(n: usize) -> Option<Self>;
    pub fn from_size_align(size: usize, align: usize) -> Option<Layout>;
    pub unsafe fn from_size_align_unchecked(size: usize, align: usize) -> Layout;

    pub fn size(&self) -> usize;
    pub fn align(&self) -> usize;
    pub fn align_to(&self, align: usize) -> Self;
    pub fn padding_needed_for(&self, align: usize) -> usize;
    pub fn repeat(&self, n: usize) -> Option<(Self, usize)>;
    pub fn extend(&self, next: Self) -> Option<(Self, usize)>;
    pub fn repeat_packed(&self, n: usize) -> Option<Self>;
    pub fn extend_packed(&self, next: Self) -> Option<(Self, usize)>;
}
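The Layout half of this snapshot did eventually stabilize in roughly this shape (note that the stable methods return Result<_, LayoutError> where the snapshot shows Option). A quick sketch of the size/align arithmetic using today's stable std::alloc::Layout:

```rust
use std::alloc::Layout;

fn main() {
    // Size and alignment of a single value.
    let l = Layout::new::<u32>();
    assert_eq!((l.size(), l.align()), (4, 4));

    // Layout of [u32; 3].
    let arr = Layout::array::<u32>(3).unwrap();
    assert_eq!(arr.size(), 12);

    // Appending a u32 field after a u16 header inserts 2 bytes of padding,
    // so the field lands at offset 4 and the combined layout is 8 bytes.
    let (combined, offset) = Layout::new::<u16>().extend(Layout::new::<u32>()).unwrap();
    assert_eq!(offset, 4);
    assert_eq!((combined.size(), combined.align()), (8, 4));
}
```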

pub enum AllocErr {
    Exhausted { request: Layout },
    Unsupported { details: &'static str },
}

impl AllocErr {
    pub fn invalid_input(details: &'static str) -> Self;
    pub fn is_memory_exhausted(&self) -> bool;
    pub fn is_request_unsupported(&self) -> bool;
    pub fn description(&self) -> &str;
}

pub struct CannotReallocInPlace;

pub struct Excess(pub *mut u8, pub usize);

pub unsafe trait Alloc {
    // required
    unsafe fn alloc(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn dealloc(&mut self, ptr: *mut u8, layout: Layout);

    // provided
    fn oom(&mut self, _: AllocErr) -> !;
    fn usable_size(&self, layout: &Layout) -> (usize, usize);
    unsafe fn realloc(&mut self,
                      ptr: *mut u8,
                      layout: Layout,
                      new_layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_zeroed(&mut self, layout: Layout) -> Result<*mut u8, AllocErr>;
    unsafe fn alloc_excess(&mut self, layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn realloc_excess(&mut self,
                             ptr: *mut u8,
                             layout: Layout,
                             new_layout: Layout) -> Result<Excess, AllocErr>;
    unsafe fn grow_in_place(&mut self,
                            ptr: *mut u8,
                            layout: Layout,
                            new_layout: Layout) -> Result<(), CannotReallocInPlace>;
    unsafe fn shrink_in_place(&mut self,
                              ptr: *mut u8,
                              layout: Layout,
                              new_layout: Layout) -> Result<(), CannotReallocInPlace>;

    // convenience
    fn alloc_one<T>(&mut self) -> Result<Unique<T>, AllocErr>
        where Self: Sized;
    unsafe fn dealloc_one<T>(&mut self, ptr: Unique<T>)
        where Self: Sized;
    fn alloc_array<T>(&mut self, n: usize) -> Result<Unique<T>, AllocErr>
        where Self: Sized;
    unsafe fn realloc_array<T>(&mut self,
                               ptr: Unique<T>,
                               n_old: usize,
                               n_new: usize) -> Result<Unique<T>, AllocErr>
        where Self: Sized;
    unsafe fn dealloc_array<T>(&mut self, ptr: Unique<T>, n: usize) -> Result<(), AllocErr>
        where Self: Sized;
}

/// The global default allocator
pub struct Heap;

impl Alloc for Heap {
    // ...
}

impl<'a> Alloc for &'a Heap {
    // ...
}

/// The "system" allocator
pub struct System;

impl Alloc for System {
    // ...
}

impl<'a> Alloc for &'a System {
    // ...
}
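For context, the Alloc trait shown above never stabilized in this form; what stable Rust ended up with is GlobalAlloc plus #[global_allocator]. A minimal sketch of the same "pluggable allocator" idea using that stable API, wrapping System with a live-byte counter (the Counting/LIVE_BYTES names are illustrative):

```rust
use std::alloc::{GlobalAlloc, Layout, System};
use std::sync::atomic::{AtomicUsize, Ordering};

// A System-delegating allocator that tracks live heap bytes.
struct Counting;

static LIVE_BYTES: AtomicUsize = AtomicUsize::new(0);

unsafe impl GlobalAlloc for Counting {
    unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
        let p = System.alloc(layout);
        if !p.is_null() {
            LIVE_BYTES.fetch_add(layout.size(), Ordering::Relaxed);
        }
        p
    }
    unsafe fn dealloc(&self, ptr: *mut u8, layout: Layout) {
        LIVE_BYTES.fetch_sub(layout.size(), Ordering::Relaxed);
        System.dealloc(ptr, layout);
    }
}

// Install it for the whole program.
#[global_allocator]
static A: Counting = Counting;

fn main() {
    let before = LIVE_BYTES.load(Ordering::Relaxed);
    let v = vec![0u8; 1024];
    assert!(LIVE_BYTES.load(Ordering::Relaxed) >= before + 1024);
    drop(v);
}
```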
@nikomatsakis nikomatsakis added B-RFC-approved Feature: Approved by a merged RFC but not yet implemented. T-lang Relevant to the language team, which will review and decide on the PR/issue. T-libs-api Relevant to the library API team, which will review and decide on the PR/issue. B-unstable Feature: Implemented in the nightly compiler and unstable. labels Apr 8, 2016
@gereeter
Contributor

I unfortunately wasn't paying close enough attention to mention this in the RFC discussion, but I think that realloc_in_place should be replaced by two functions, grow_in_place and shrink_in_place, for two reasons:

  • I can't think of a single use case (short of implementing realloc or realloc_in_place) where it is unknown whether the size of the allocation is increasing or decreasing. Using more specialized methods makes it slightly more clear what is going on.
  • The code paths for growing and shrinking allocations tend to be radically different - growing involves testing whether adjacent blocks of memory are free and claiming them, while shrinking involves carving off properly sized subblocks and freeing them. While the cost of a branch inside realloc_in_place is quite small, using grow and shrink better captures the distinct tasks that an allocator needs to perform.

Note that these can be added backwards-compatibly next to realloc_in_place, but this would constrain which functions would be by default implemented in terms of which others.

For consistency, realloc would probably also want to be split into grow and shrink, but the only advantage to having an overloadable realloc function that I know of is to be able to use mmap's remap option, which does not have such a distinction.

@gereeter
Contributor

Additionally, I think that the default implementations of realloc and realloc_in_place should be slightly adjusted - instead of checking against the usable_size, realloc should just first try to realloc_in_place. In turn, realloc_in_place should by default check against the usable size and return success in the case of a small change instead of universally returning failure.

This makes it easier to produce a high-performance implementation of realloc: all that is required is improving realloc_in_place. However, the default performance of realloc does not suffer, as the check against the usable_size is still performed.
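A toy model of that layering (the trait and names here are illustrative stand-ins, not the real Alloc trait): realloc_in_place's default succeeds exactly when the new size stays within the usable range reported by usable_size, so an allocator improves realloc by only overriding realloc_in_place.

```rust
// Illustrative only: ToyAlloc stands in for the proposed Alloc trait.
struct CannotReallocInPlace;

trait ToyAlloc {
    /// (min, max) request sizes that a block allocated for `size` satisfies.
    fn usable_size(&self, size: usize) -> (usize, usize) {
        (size, size)
    }

    /// Default: report success only for changes within the usable range.
    fn realloc_in_place(&mut self, old: usize, new: usize)
        -> Result<(), CannotReallocInPlace>
    {
        let (min, max) = self.usable_size(old);
        if min <= new && new <= max { Ok(()) } else { Err(CannotReallocInPlace) }
    }
}

// An allocator that rounds every request up to a power of two can grow
// in place for free within the rounded-up block.
struct Pow2;
impl ToyAlloc for Pow2 {
    fn usable_size(&self, size: usize) -> (usize, usize) {
        (size, size.next_power_of_two())
    }
}

fn main() {
    let mut a = Pow2;
    assert!(a.realloc_in_place(5, 8).is_ok());  // still inside the 8-byte block
    assert!(a.realloc_in_place(5, 9).is_err()); // needs a real reallocation
}
```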

eddyb added a commit to eddyb/rust that referenced this issue Oct 19, 2016
…sakis

`#[may_dangle]` attribute

Second step of rust-lang#34761. Last big hurdle before we can work in earnest towards Allocator integration (rust-lang#32838)

Note: I am not clear if this is *also* a syntax-breaking change that needs to be part of a breaking-batch.
@pnkfelix
Member

pnkfelix commented Oct 26, 2016

Another issue: The doc for fn realloc_in_place says that if it returns Ok, then one is assured that ptr now "fits" new_layout.

To me this implies that it must check that the alignment of the given address matches any constraint implied by new_layout.

However, I don't think the spec for the underlying fn reallocate_inplace function implies that it will perform any such check.

  • Furthermore, it seems reasonable that any client diving into using fn realloc_in_place will themselves be ensuring that the alignments work (in practice I suspect it means that the same alignment is required everywhere for the given use case...)

So, should the implementation of fn realloc_in_place really be burdened with checking that the alignment of the given ptr is compatible with that of new_layout? It is probably better in this case (of this one method) to push that requirement back to the caller...

@pnkfelix
Member

@gereeter you make good points; I will add them to the check list I am accumulating in the issue description.

@pnkfelix
Member

(at this point I am waiting for #[may_dangle] support to ride the train into the beta channel so that I will then be able to use it for std collections as part of allocator integration)

@joshlf
Contributor

joshlf commented Jan 4, 2017

I'm new to Rust, so forgive me if this has been discussed elsewhere.

Is there any thought on how to support object-specific allocators? Some allocators such as slab allocators and magazine allocators are bound to a particular type, and do the work of constructing new objects, caching constructed objects which have been "freed" (rather than actually dropping them), returning already-constructed cached objects, and dropping objects before freeing the underlying memory to an underlying allocator when required.

Currently, this proposal doesn't include anything along the lines of ObjectAllocator<T>, but it would be very helpful. In particular, I'm working on an implementation of a magazine allocator object-caching layer (link above), and while I can have this only wrap an Allocator and do the work of constructing and dropping objects in the caching layer itself, it'd be great if I could also have this wrap other object allocators (like a slab allocator) and truly be a generic caching layer.

Where would an object allocator type or trait fit into this proposal? Would it be left for a future RFC? Something else?

@Ericson2314
Contributor

I don't think this has been discussed yet.

You could write your own ObjectAllocator<T>, and then do impl<T: Allocator, U> ObjectAllocator<U> for T { .. }, so that every regular allocator can serve as an object-specific allocator for all objects.

Future work would be modifying collections to use your trait for their nodes, instead of plain ole' (generic) allocators directly.
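A sketch of what that blanket impl could look like. Both traits here are hypothetical: RawAlloc stands in for the proposed Allocator trait, and ObjectAllocator is the trait under discussion; the global-allocator-backed impl exists only to make the sketch self-contained.

```rust
use std::alloc::{alloc, dealloc, Layout};

// Stand-in for the proposed raw Allocator trait.
trait RawAlloc {
    unsafe fn alloc_bytes(&mut self, layout: Layout) -> *mut u8;
    unsafe fn dealloc_bytes(&mut self, ptr: *mut u8, layout: Layout);
}

// Hypothetical object-allocator trait: constructs and drops typed objects.
trait ObjectAllocator<T> {
    fn alloc_obj(&mut self, value: T) -> *mut T;
    unsafe fn free_obj(&mut self, ptr: *mut T);
}

// Every raw allocator can serve as an object allocator for any T.
impl<A: RawAlloc, T> ObjectAllocator<T> for A {
    fn alloc_obj(&mut self, value: T) -> *mut T {
        assert!(std::mem::size_of::<T>() != 0, "toy impl: no ZST support");
        unsafe {
            let p = self.alloc_bytes(Layout::new::<T>()) as *mut T;
            p.write(value); // construct in place
            p
        }
    }
    unsafe fn free_obj(&mut self, ptr: *mut T) {
        std::ptr::drop_in_place(ptr); // the object allocator owns dropping
        self.dealloc_bytes(ptr as *mut u8, Layout::new::<T>());
    }
}

// Backing the stand-in with the global allocator, for demonstration.
struct GlobalBytes;
impl RawAlloc for GlobalBytes {
    unsafe fn alloc_bytes(&mut self, layout: Layout) -> *mut u8 { alloc(layout) }
    unsafe fn dealloc_bytes(&mut self, ptr: *mut u8, layout: Layout) { dealloc(ptr, layout) }
}

fn main() {
    let mut a = GlobalBytes;
    let p: *mut String = a.alloc_obj(String::from("hi"));
    unsafe {
        assert_eq!(&*p, "hi");
        a.free_obj(p);
    }
}
```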

@nikomatsakis
Contributor Author

@pnkfelix

(at this point I am waiting for #[may_dangle] support to ride the train into the beta channel so that I will then be able to use it for std collections as part of allocator integration)

I guess this has happened?

@joshlf
Contributor

joshlf commented Jan 4, 2017

@Ericson2314 Yeah, writing my own is definitely an option for experimental purposes, but I think there'd be much more benefit to it being standardized in terms of interoperability (for example, I plan on also implementing a slab allocator, but it would be nice if a third-party user of my code could use somebody else's slab allocator with my magazine caching layer). My question is simply whether an ObjectAllocator<T> trait or something like it is worth discussing. Although it seems that it might be best for a different RFC? I'm not terribly familiar with the guidelines for how much belongs in a single RFC and when things belong in separate RFCs...

@steveklabnik
Member

@joshlf

Where would an object allocator type or trait fit into this proposal? Would it be left for a future RFC? Something else?

Yes, it would be another RFC.

I'm not terribly familiar with the guidelines for how much belongs in a single RFC and when things belong in separate RFCs...

that depends on the scope of the RFC itself, which is decided by the person who writes it, and then feedback is given by everyone.

But really, as this is a tracking issue for this already-accepted RFC, thinking about extensions and design changes isn't really for this thread; you should open a new one over on the RFCs repo.

@Ericson2314
Contributor

Ericson2314 commented Jan 4, 2017

@joshlf Ah, I thought ObjectAllocator<T> was supposed to be a trait. I meant prototype the trait not a specific allocator. Yes that trait would merit its own RFC as @steveklabnik says.


@steveklabnik yeah now discussion would be better elsewhere. But @joshlf was also raising the issue lest it expose a hitherto unforeseen flaw in the accepted but unimplemented API design. In that sense it matches the earlier posts in this thread.

@joshlf
Contributor

joshlf commented Jan 4, 2017

@Ericson2314 Yeah, I thought that was what you meant. I think we're on the same page :)

@steveklabnik Sounds good; I'll poke around with my own implementation and submit an RFC if it ends up seeming like a good idea.

@alexreg
Contributor

alexreg commented Jan 4, 2017

@joshlf I don't see any reason why custom allocators would go into the compiler or standard library. Once this RFC lands, you could easily publish your own crate that does an arbitrary sort of allocation (even a fully-fledged allocator like jemalloc could be custom-implemented!).

@joshlf
Contributor

joshlf commented Jan 4, 2017

@alexreg This isn't about a particular custom allocator, but rather a trait that specifies the type of all allocators which are parametric on a particular type. So just like RFC 1398 defines a trait (Allocator) that is the type of any low-level allocator, I'm asking about a trait (ObjectAllocator<T>) that is the type of any allocator which can allocate/deallocate and construct/drop objects of type T.

@Ericson2314
Contributor

@alexreg See my early point about using standard library collections with custom object-specific allocators.

@alexreg
Contributor

alexreg commented Jan 4, 2017 via email

@alexreg
Contributor

alexreg commented Jan 4, 2017 via email

@joshlf
Contributor

joshlf commented Jan 4, 2017

Sure, but I’m not sure that would belong in the standard library. Could easily go into another crate, with no loss of functionality or usability.

Yes but you probably want some standard library functionality to rely on it (such as what @Ericson2314 suggested).

I think you’d want to use standard-library collections (any heap-allocated value) with an arbitrary custom allocator; i.e. not limited to object-specific ones.

Ideally you'd want both - to accept either type of allocator. There are very significant benefits to using object-specific caching; for example, both slab allocation and magazine caching give very significant performance benefits - take a look at the papers I linked to above if you're curious.

@alexreg
Contributor

alexreg commented Jan 4, 2017 via email

@joshlf
Contributor

joshlf commented Jan 4, 2017

But the object allocator trait could simply be a subtrait of the general allocator trait. It’s as simple as that, as far as I’m concerned. Sure, certain types of allocators can be more efficient than general-purpose allocators, but neither the compiler nor the standard really need to (or indeed should) know about this.

Ah, so the problem is that the semantics are different. Allocator allocates and frees raw byte blobs. ObjectAllocator<T>, on the other hand, would allocate already-constructed objects and would also be responsible for dropping these objects (including being able to cache constructed objects which could be handed out later in lieu of constructing a newly-allocated object, which is expensive). The trait would look something like this:

trait ObjectAllocator<T> {
    fn alloc(&mut self) -> T;
    fn free(&mut self, t: T);
}

This is not compatible with Allocator, whose methods deal with raw pointers and have no notion of type. Additionally, with Allocators, it is the caller's responsibility to drop the object being freed first. This is really important - knowing about the type T allows ObjectAllocator<T> to do things like call T's drop method, and since free(t) moves t into free, the caller cannot drop t first - it is instead the ObjectAllocator<T>'s responsibility. Fundamentally, these two traits are incompatible with one another.

@RalfJung
Member

RalfJung commented Jun 25, 2023 via email

@thomcc
Member

thomcc commented Jun 26, 2023

Yes, but that's highly non-obvious1, and will be a subtle footgun that's hard to get right -- specifically, this is hard to get right for both the user of the Allocator API, and implementer of the Allocator API. In concrete terms, you will often end up with checks on both side of the API boundary, which is definitely a sign that something is wrong.

I didn't say that the API cannot be made to work in the way it currently is, just that it's a footgun on both ends.

Footnotes

  1. Your earlier comment that it's unsurprising is counter to my experience that almost everybody using this API has gotten it wrong, including the stdlib.

@RalfJung
Member

IMO the source of the problem here is Vec insisting on not interacting with the allocator in Vec::new. This means it must always know whether it has something to give back to the allocator or not. That's an important piece of state to track. If Vec used a capacity of Option<usize> where None indicates "not allocated" this would be trivial, but Vec chose to represent None as 0 and that's where things start to become subtle. The shrinking logic seems to currently get that wrong, but I wouldn't blame the allocator API for that -- it's Vec that chose to be "clever" in its state representation here.

In most cases it is probably perfectly fine to have a data structure invariant saying that the data pointer is always something that was returned by alloc, and hence can be freed by dealloc. Then grow/shrink can be implemented in the obvious way without any checks on the data structure side. That would be a Vec that just always calls alloc in new and dealloc in drop. However, the real Vec chooses to not follow that path and instead has an invariant saying "if the capacity is non-zero, then there is some memory to give back to the allocator, else there is not". Vec then has to live with the consequences of that choice, in particular it uses capacity 0 as a sentinel itself and hence has to always special-case this situation everywhere. Why would we punish every user of Allocator (having them special case capacity 0) for this somewhat special choice Vec made?

Or is this common enough that we want to tailor the entire allocator trait towards data structures that use capacity 0 as a sentinel themselves to avoid calling alloc in new?
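The two state representations being contrasted, as minimal illustrative structs: with an explicit Option the "never allocated" state is unambiguous, while with Vec's encoding capacity 0 doubles as the sentinel and every path that talks to the allocator must special-case it.

```rust
// Illustrative sketch only; field names are hypothetical.
struct ExplicitBuf {
    cap: Option<usize>, // None: nothing to give back to the allocator
}

struct SentinelBuf {
    cap: usize, // 0: nothing to give back (Vec's choice)
}

impl ExplicitBuf {
    fn owns_allocation(&self) -> bool {
        self.cap.is_some()
    }
}

impl SentinelBuf {
    fn owns_allocation(&self) -> bool {
        self.cap != 0 // an owned allocation of capacity 0 is unrepresentable
    }
}

fn main() {
    assert!(!ExplicitBuf { cap: None }.owns_allocation());
    assert!(ExplicitBuf { cap: Some(0) }.owns_allocation()); // zero-sized but owned
    assert!(!SentinelBuf { cap: 0 }.owns_allocation()); // can't express "owned, size 0"
}
```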

@the8472
Member

the8472 commented Jun 26, 2023

Or is this common enough that we want to tailor the entire allocator trait towards data structures that use capacity 0 as a sentinel themselves to avoid calling alloc in new?

Most collections do this, yeah: to make an empty collection about as cheap as an Option::None, to make new() a const fn, and to make construction panic-free.

Though as @thomcc already mentioned Box is inconsistent in when it calls the allocator or uses a dangling pointer.
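Those properties can be seen with today's Vec, whose new is const, allocation-free, and panic-free:

```rust
// Empty collections are const-constructible and never touch the allocator.
const EMPTY: Vec<u32> = Vec::new();

fn main() {
    let mut v = EMPTY;
    assert_eq!(v.capacity(), 0); // no allocator interaction yet
    v.push(1);                   // the first push performs the real allocation
    assert!(v.capacity() >= 1);
}
```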

@Amanieu
Member

Amanieu commented Jun 28, 2023

I've addressed the issues with ZST handling in Box and Vec in #113113.

As a side note, all the logic for handling zero-sized allocations could also be implemented as a type that wraps an Allocator and causes zero-sized allocations to become no-ops. We don't have to provide this in the standard library, it can just be a crate.

@lilith

lilith commented Jun 28, 2023 via email

@Amanieu
Member

Amanieu commented Jun 28, 2023

The wrapper would return NonNull::dangling for all zero-sized allocations and ignore all zero-sized deallocations. Non-zero allocations are passed on to the underlying allocator.
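A sketch of such a wrapper, written against a hypothetical RawAllocator stand-in since the real Allocator trait is unstable; the global-allocator-backed impl exists only to make it self-contained.

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr::NonNull;

// Stand-in for the unstable Allocator trait.
trait RawAllocator {
    fn allocate(&self, layout: Layout) -> Option<NonNull<u8>>;
    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout);
}

// Makes zero-sized allocations no-ops regardless of the inner allocator.
struct ZeroSizeOk<A>(A);

impl<A: RawAllocator> RawAllocator for ZeroSizeOk<A> {
    fn allocate(&self, layout: Layout) -> Option<NonNull<u8>> {
        if layout.size() == 0 {
            // No real allocation: a well-aligned dangling pointer suffices.
            NonNull::new(layout.align() as *mut u8)
        } else {
            self.0.allocate(layout)
        }
    }
    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        if layout.size() != 0 {
            self.0.deallocate(ptr, layout); // zero-sized frees are ignored
        }
    }
}

// Demonstration backend; assumes it is never asked for size 0 directly.
struct GlobalBytes;
impl RawAllocator for GlobalBytes {
    fn allocate(&self, layout: Layout) -> Option<NonNull<u8>> {
        NonNull::new(unsafe { alloc(layout) })
    }
    unsafe fn deallocate(&self, ptr: NonNull<u8>, layout: Layout) {
        dealloc(ptr.as_ptr(), layout);
    }
}

fn main() {
    let a = ZeroSizeOk(GlobalBytes);
    // Zero-sized: a no-op that never touches the inner allocator.
    let zl = Layout::from_size_align(0, 8).unwrap();
    let z = a.allocate(zl).unwrap();
    unsafe { a.deallocate(z, zl) };
    // Non-zero: passed through.
    let layout = Layout::from_size_align(64, 8).unwrap();
    let p = a.allocate(layout).unwrap();
    unsafe { a.deallocate(p, layout) };
}
```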

@lilith

lilith commented Jun 28, 2023 via email

@thomcc
Member

thomcc commented Jun 28, 2023

As a side note, all the logic for handling zero-sized allocations could also be implemented as a type that wraps an Allocator and causes zero-sized allocations to become no-ops.

This is true, but you end up with code performing the same checks many times. For example, both sides of the interface end up checking here if your underlying allocator doesn't handle zero-sized allocations well, and most (or at least very many) designs for allocators do not, in my experience.

Given that forbidding it is consistent with the earlier design (GlobalAlloc), I don't really see what we get by allowing it at this point when it clearly causes trouble both for users and implementers of the API.

@RalfJung
Member

I don't really see what we get by allowing it at this point when it clearly causes trouble both for users and implementers of the API.

It can't possibly cause trouble for users -- every user that is correct wrt the more restricted API (that is UB on zero-sized allocs) is also correct wrt the current API.

@Jules-Bertholet
Contributor

Jules-Bertholet commented Jun 28, 2023

Another reason to have the collection check for zero size: for e.g. Box<T>, the collection knows statically whether T is a ZST, so the checks can be optimized out.

It can't possible cause trouble for users -- every user that is correct wrt the more restricted API (that is UB on zero-sized allocs) is also correct wrt the current API.

Correctness trouble is the worst kind of trouble, but not the only kind. Users also want performance, which the current API pessimizes.
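The static check in question, sketched with a hypothetical helper: for any concrete T, size_of::<T>() == 0 is a compile-time constant, so the branch folds away and the ZST path never calls the allocator.

```rust
use std::alloc::{alloc, dealloc, Layout};
use std::ptr::NonNull;

// Hypothetical helper illustrating the statically-known ZST branch.
fn alloc_or_dangling<T>() -> NonNull<T> {
    if std::mem::size_of::<T>() == 0 {
        NonNull::dangling() // ZST: no allocator call at all
    } else {
        NonNull::new(unsafe { alloc(Layout::new::<T>()) as *mut T }).unwrap()
    }
}

fn main() {
    let z: NonNull<()> = alloc_or_dangling::<()>(); // no allocation
    assert_eq!(z.as_ptr() as usize, 1); // dangling for an align-1 ZST
    let p = alloc_or_dangling::<u64>();
    assert!(p.as_ptr() as usize != 0);
    unsafe { dealloc(p.as_ptr() as *mut u8, Layout::new::<u64>()) };
}
```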

@thomcc
Member

thomcc commented Jun 28, 2023

I also don't necessarily agree that the current API avoids correctness issues. We clearly had one in the stdlib code, which would have almost certainly been avoided if we made it clear that zero-sized allocations were forbidden (especially if we forbid it by making things take a NonZeroLayout or such).

My experience is that the sort of optimizations that the stdlib does here are quite common, and not a special case. This is likely because of how long we've documented that zero-sized allocations are free and infallible (something which definitely won't be universally true if we delegate this to the underlying allocator).

@RalfJung
Member

FWIW, if I read 56cbf2f correctly, then RawVec actually did this correctly when the Allocator (AllocRef back then) trait was changed to allow zero-sized allocations. The bug was introduced in 2526acc, a later commit in the same PR (#70362).

I also don't necessarily agree that the current API avoids correctness issues.

I agree that for data structures that have a special zero-size state themselves where they don't own any memory that must be given back to the allocator, the current API does not help at all.

The question is, is that case common enough to justify hurting the alternative case where data structures would be completely fine with always owning some allocated memory, even when the size is 0 -- basically leaving it to allocators to make size 0 efficient, instead of having to implement that optimization for each data structure? Naively it seems much better to do this in the allocator once rather than in every single data structure, but clearly Vec disagrees since it doesn't trust the allocator to do this right. Interesting.

Here is a possible alternative to just making Allocator require non-ZST: we could say that passing the result of NonNull::dangling() to deallocate/grow/shrink with a size-zero layout must always be allowed and not attempt to actually deallocate that pointer. Then Vec::new could still use NonNull::dangling, but all the rest of the Vec code could freely treat the backing buffer as if it was created by the allocator, and drop could unconditionally call deallocate.

That would make the buggy shrink actually legal. It would avoid having to re-implement the "size 0 optimization" in each and every data structure, instead having it implemented once in the allocator. So just from a code structure perspective, it seems to me like that is the API that types like Vec actually want: a const-compatible, zero-cost fn new without having to worry about size 0 anywhere. allocate would be safe without the need for a NonZeroLayout.

What I don't know is whether this API is something allocators can reasonably implement. @thomcc what do you think?

@nbdd0121
Contributor

Here is a possible alternative to just making Allocator require non-ZST: we could say that passing the result of NonNull::dangling() to deallocate/grow/shrink with a size-zero layout must always be allowed and not attempt to actually deallocate that pointer.

How would you tell apart dangling() from a valid pointer?

@RalfJung
Member

Vec does it, so clearly it is possible. It would be up to the allocator to ensure that for size 0, it can never confuse dangling with a valid pointer.

@Jules-Bertholet
Contributor

Vec checks for capacity == 0 to determine whether its pointer is dangling.

@thomcc
Member

thomcc commented Jun 28, 2023

a const-compatible, zero-cost fn new without having to worry about size 0 anywhere

I'm not sure how it's const-compatible unless the allocator is a const trait/impl/something (or at least allocation is const), which doesn't seem likely to be the case most of the time (since usually allocation requires a number of operations which are problematic for const). I have to think about the rest of your comment though.

@RalfJung
Member

NonNull::dangling is a const fn, which is all that is needed for Vec::new. (The function would stay unchanged compared to what the stdlib does right now.)

To be clear, I have no idea if this proposal makes sense. The part where the allocator, in dealloc, has to handle dangling and therefore has to ensure to never actually put a real allocation (that must be freed) of size 0 at that address is certainly somewhat suspicious. I arrived at this purely from the perspective of "what would it take for Vec to not need to special-case capacity 0".

@thomcc
Member

thomcc commented Jun 28, 2023

Oh, I see. I misinterpreted your proposal then.

I think the problem here is that now dealloc can't solely use allocation size to tell the difference, since it needs to know whether the allocation came from a call to alloc with a zero-sized layout, or whether it came from NonNull::dangling(), which can return a valid pointer (and in practice often will on 32-bit targets if the alignment is sufficiently high).

@RalfJung
Member

RalfJung commented Jun 29, 2023 via email

@thomcc
Member

thomcc commented Jun 29, 2023

Yeah. We'd have to document that a zero-sized allocation needs to be equivalent to dangling (or at least some kind of no-op), which seems a bit odd to me, but it would work. Basically, instead of telling users of Allocator that they can't give allocators a zero-sized layout and should allocate a zero-sized layout in a certain way, we're saying allocators must behave in a specific manner when given zero-sized layouts.

The downside here is that the checks couldn't always be removed in cases where Allocator is a trait object. It also plausibly adds a branch into the allocation path that could otherwise be avoided. It also is a bit error-prone if not documented properly, as I go over in #32838 (comment) (although I've softened on this proposal since the issues I hit could possibly be addressed with documentation).

This would make the "handle zero-sized layout by rounding up" approach suggested elsewhere in this thread invalid though (but I don't see a way to keep it without many other downsides).

@thomcc
Member

thomcc commented Aug 7, 2023

I wrote a blog post announcing that I intend to work on the Allocator design, and I wrote down (roughly) the set of things I'm thinking about: https://shift.click/blog/allocator-trait-talk.

Largely speaking, my feeling that Allocator needs a more comprehensive rework/consideration is why I haven't filed a PR for the extra parameter for grow, for any changes around zero-sized allocations1, etc.

Anyway, all of this is tricky because Allocator is trying to serve so many roles, so it's hard to find a design that doesn't end up making a trade-off that negatively impacts something or other, and it takes a lot of experimentation to play around with different implementations of the trait and code using it.

Footnotes

  1. I've started to have second thoughts on zero-sized allocations, which is one of the things I'm hoping to work through. I think that perhaps Allocators which use resources for zero-sized allocations are a little analogous to Iterators which return None in the middle -- IOW, could an approach more similar to FusedIterator/Fuse work? I'm not sure, maybe.

@wallgeek

I'm sorry, but don't both new_in and with_capacity_in have very minor mistakes in the example sections of their source documentation? Or am I missing something?
https://doc.rust-lang.org/std/collections/struct.VecDeque.html

@udoprog
Contributor

udoprog commented Jan 23, 2024

@wallgeek No, you're right. They don't exemplify the APIs at all. It should be fixed!

@pravic
Contributor

pravic commented Jan 24, 2024

I've checked all new_in methods in https://doc.rust-lang.org/std/index.html?search=new_in&filter-crate=std:

  • VecDeque: a blunt copy from new
  • BTreeMap: "Makes a new empty BTreeMap with a reasonable choice for B." (what is B exactly?)
  • BTreeSet: ditto with BTreeMap
  • LinkedList: "Constructs an empty LinkedList<T, A>." (the description can be improved)

The rest is okay.

@SimonSapin
Contributor

  • what is B exactly?

Currently in std:

const B: usize = 6;

The struct-level docs explain this and compare a B-tree with a binary tree that would have individual allocations for each item:

https://doc.rust-lang.org/std/collections/struct.BTreeMap.html

A B-Tree instead makes each node contain B-1 to 2B-1 elements in a contiguous array. By doing this, we reduce the number of allocations by a factor of B, and improve cache efficiency in searches.

It's probably not relevant for the docs of a constructor to talk about the "choice" of B, since that choice is a compile-time constant in the current implementation. (As opposed to something users could influence, like Vec::with_capacity.)
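Spelling out the arithmetic from the quoted struct-level docs with std's current constant:

```rust
// With B = 6, each non-root node holds between B-1 and 2B-1 elements.
const B: usize = 6;

fn main() {
    assert_eq!(B - 1, 5);       // minimum elements per non-root node
    assert_eq!(2 * B - 1, 11);  // maximum elements per node
}
```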
