
Support cfg(target_arch = "wasm32", target_feature = "atomic") #8

Open
yvt opened this issue Mar 18, 2024 · 0 comments
Labels
enhancement New feature or request

Comments

yvt (Owner) commented Mar 18, 2024

rlsf::GlobalTlsf (a std::alloc::GlobalAlloc implementation) requires target-specific code to synchronize concurrent function calls. The code for cfg(target_arch = "wasm32", target_feature = "atomic") was left out because its support in Rust (rust-lang/rust#77839) was still in its infancy.

#[cfg(not(target_feature = "atomics"))]
impl Mutex {
    // Single-threaded WebAssembly environment
    #[inline]
    pub fn lock(&self) {}
    #[inline]
    pub fn unlock(&self) {}
}

Now that WASM atomic intrinsics have been introduced to core (although still unstable), it is possible to implement this, but we must decide which mutex implementation to use:

  1. std::sync::Mutex

    • Pro: Already stable
    • Con: Runtime and code-size overhead due to poisoning mechanism
    • Con: The current implementation does not provide fairness guarantees, meaning acquiring a lock might not complete in a finite time
      • ...which might not be that bad, considering that the rest of the application is probably going to use it anyway.
  2. Flag + memory_atomic_{wait32,notify} (see the first sketch after this list)

    • Pro: Minimum code-size overhead
    • Pro: Minimum runtime overhead in uncontended cases
    • Con: Does not provide fairness guarantees, meaning acquiring a lock might not complete in a finite time
  3. Ticket lock + memory_atomic_{wait32,notify} (see the second sketch after this list)

    • Pro: Provides fairness guarantees
    • Con: Thundering herd problem, which increases runtime overhead in contended cases
  4. Queue-based lock (e.g., MCS, CLH) + memory_atomic_{wait32,notify}

    • Pro: Provides fairness guarantees
    • Pro: Lowest worst-case runtime overhead
    • Con: High best-case runtime overhead
    • Con: More complicated to implement than other options
    • Scott, Michael L., and William N. Scherer. "Scalable queue-based spin locks with timeout." ACM SIGPLAN Notices 36.7 (2001): 44-52.
  5. Flag + busy loop (spinlock)

    • Pro: Minimum code-size overhead
    • Pro: Minimum runtime overhead in uncontended cases
    • Con: Does not provide fairness guarantees, meaning acquiring a lock might not complete in a finite time
    • Con: Catastrophic runtime overhead in contended cases
  6. Ticket lock + busy loop

    • Pro: Provides fairness guarantees (assuming the scheduler is fair)
    • Con: Catastrophic runtime overhead in contended cases
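
For reference, here is a minimal sketch of option 2 (flag + memory_atomic_{wait32,notify}), assuming a nightly compiler and the unstable wait/notify intrinsics in core::arch::wasm32 (the feature gate name is an assumption). This is illustrative, not rlsf's actual implementation; the three-state flag (0 = unlocked, 1 = locked, 2 = locked with waiters) keeps the uncontended path down to a single compare-and-swap.

// Sketch of option 2: a single AtomicI32 flag plus memory_atomic_{wait32,notify}.
// Illustrative only; the type names and the feature gate are assumptions.
#![feature(stdarch_wasm_atomic_wait)] // unstable gate for the wait/notify intrinsics

use core::arch::wasm32::{memory_atomic_notify, memory_atomic_wait32};
use core::sync::atomic::{AtomicI32, Ordering};

pub struct Mutex {
    // 0 = unlocked, 1 = locked, 2 = locked with waiters
    state: AtomicI32,
}

impl Mutex {
    pub const fn new() -> Self {
        Self { state: AtomicI32::new(0) }
    }

    #[inline]
    pub fn lock(&self) {
        // Fast path: uncontended acquisition is a single CAS.
        if self.state.compare_exchange(0, 1, Ordering::Acquire, Ordering::Relaxed).is_ok() {
            return;
        }
        // Slow path: mark the lock as contended and sleep until notified.
        while self.state.swap(2, Ordering::Acquire) != 0 {
            unsafe {
                // Block while the state is still 2; -1 means no timeout.
                memory_atomic_wait32(self.state.as_ptr(), 2, -1);
            }
        }
    }

    #[inline]
    pub fn unlock(&self) {
        // Only wake a waiter if the contended state was observed.
        if self.state.swap(0, Ordering::Release) == 2 {
            unsafe {
                memory_atomic_notify(self.state.as_ptr(), 1);
            }
        }
    }
}

Note that unlock only reaches the notify intrinsic when the contended state was observed, which is where the "minimum runtime overhead in uncontended cases" comes from.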
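
And a sketch of option 3 (ticket lock + memory_atomic_{wait32,notify}) under the same assumptions; waking every waiter on unlock is exactly the thundering-herd cost listed above.

// Sketch of option 3: a ticket lock on top of the same wait/notify intrinsics.
// Illustrative only; not rlsf's actual implementation.
use core::arch::wasm32::{memory_atomic_notify, memory_atomic_wait32};
use core::sync::atomic::{AtomicI32, Ordering};

pub struct TicketLock {
    next_ticket: AtomicI32,
    now_serving: AtomicI32,
}

impl TicketLock {
    pub const fn new() -> Self {
        Self { next_ticket: AtomicI32::new(0), now_serving: AtomicI32::new(0) }
    }

    pub fn lock(&self) {
        // Tickets are handed out in FIFO order, which is what gives fairness.
        let ticket = self.next_ticket.fetch_add(1, Ordering::Relaxed);
        loop {
            let serving = self.now_serving.load(Ordering::Acquire);
            if serving == ticket {
                return;
            }
            unsafe {
                // Sleep until `now_serving` moves past the value we observed.
                memory_atomic_wait32(self.now_serving.as_ptr(), serving, -1);
            }
        }
    }

    pub fn unlock(&self) {
        self.now_serving.fetch_add(1, Ordering::Release);
        unsafe {
            // All waiters must be woken because we cannot tell which one
            // holds the next ticket (the thundering herd).
            memory_atomic_notify(self.now_serving.as_ptr(), u32::MAX);
        }
    }
}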
yvt added the enhancement (New feature or request) label on Mar 18, 2024