Questions/research into using ZSSP in a project #35

Open · 617a7aa opened this issue Jan 28, 2024 · 3 comments

617a7aa commented Jan 28, 2024

Hi there! Thanks for releasing this incredible work into open source.

I'm looking at building a protocol that uses ZSSP's stack for connection security. The implementation operates across OSI layers 7 down to 2, reading and writing packets at the datalink layer and bypassing kernel networking entirely. According to the white paper, ZFP adds an AES-encrypted header to each packet. I'm curious how I could also use this to implement reliable transport, since our underlying transport protocol is UDP.

The header contains a fragment number, a fragment count, and a packet counter, which is all a basic acknowledgment protocol would need to build on. However, because the header is encrypted, these fields are not available until the packet has been decrypted, and as far as I can see the application cannot access them directly.

How would you recommend implementing packet reliability for ZSSP? Also, would ZFP even be needed for an application whose abstraction is a byte stream? When the application knows the active MTU, the byte stream can be split into appropriately sized packets before it ever reaches ZSSP, and reliability could then be handled with a simple packet counter instead of ZFP. I imagine this would bring its own security considerations, however.

I'd like to discuss some of these ideas with the people who designed this protocol in the first place before implementing them myself :)

mamoniot (Contributor) commented Jan 29, 2024

The packet counter was made unavailable to the application layer for two reasons. First, ZSSP was designed explicitly for parallelized encryption. This makes the value of packet counters inherently non-deterministic unless the application explicitly serializes all sends for a session. Second, the security properties of the packet counter are meaningfully different and worse than those of a proper ZSSP payload. There are ways for an application to insecurely misuse the packet counter, and it is surprisingly difficult to prove that any specific use is secure. These issues in combination made me feel it would be better for an application to embed its own counter in the payload of a packet rather than rely on the ZSSP packet counter.
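For illustration, a minimal sketch of what embedding an application-level counter in the payload might look like; the framing and function names below are assumptions made for the example, not part of the ZSSP API:

```rust
// Hypothetical sketch: prefix each application payload with its own 64-bit
// sequence number before handing it to ZSSP, instead of relying on the
// (inaccessible) ZSSP packet counter.
use std::sync::atomic::{AtomicU64, Ordering};

static NEXT_SEQ: AtomicU64 = AtomicU64::new(0);

/// Frame a payload as [8-byte big-endian sequence number | data].
fn frame_payload(data: &[u8]) -> Vec<u8> {
    let seq = NEXT_SEQ.fetch_add(1, Ordering::Relaxed);
    let mut out = Vec::with_capacity(8 + data.len());
    out.extend_from_slice(&seq.to_be_bytes());
    out.extend_from_slice(data);
    out
}

/// Split a received (already decrypted) payload back into (sequence, data).
fn parse_payload(payload: &[u8]) -> Option<(u64, &[u8])> {
    if payload.len() < 8 {
        return None;
    }
    let mut seq = [0u8; 8];
    seq.copy_from_slice(&payload[..8]);
    Some((u64::from_be_bytes(seq), &payload[8..]))
}
```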

You are right, ZFP is not really needed for applications that need a reliable, in-order byte stream. There may be performance benefits to using ZFP in such an environment, since encryption can be done in bulk for large packets, but this has not been properly tested. ZFP is not built in a way that it can be made to provide reliable transport: it lacks a "send-window" that stores sent packets so they can be resent if they are not received. If a send-window were added, header authentication would have to be heavily modified to protect it from DDoS attacks. For that reason, I would recommend implementing reliable, in-order transport by running some TCP-like protocol on top of ZSSP in the protocol stack.
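As a rough illustration of what such a layer would have to keep track of, here is a minimal send-window sketch for a TCP-like reliability protocol running on top of ZSSP; the types and names are illustrative assumptions, and the actual encrypted send/receive would go through the application's ZSSP session:

```rust
// Minimal sketch of a send-window for a TCP-like reliability layer running
// on top of ZSSP (not inside it). All types and names are illustrative.
use std::collections::BTreeMap;
use std::time::{Duration, Instant};

struct SendWindow {
    next_seq: u64,
    // Unacknowledged payloads, keyed by sequence number, with last-send time.
    in_flight: BTreeMap<u64, (Vec<u8>, Instant)>,
    rto: Duration, // retransmission timeout
}

impl SendWindow {
    fn new(rto: Duration) -> Self {
        Self { next_seq: 0, in_flight: BTreeMap::new(), rto }
    }

    /// Assign a sequence number and remember the payload until it is acked.
    /// The caller frames [seq | payload] and sends it through ZSSP.
    fn send(&mut self, payload: Vec<u8>) -> u64 {
        let seq = self.next_seq;
        self.next_seq += 1;
        self.in_flight.insert(seq, (payload, Instant::now()));
        seq
    }

    /// Drop a payload once the peer acknowledges it.
    fn ack(&mut self, seq: u64) {
        self.in_flight.remove(&seq);
    }

    /// Collect sequence numbers whose payloads are due for retransmission,
    /// refreshing their send timestamps.
    fn due_for_retransmit(&mut self, now: Instant) -> Vec<u64> {
        let rto = self.rto;
        let mut due = Vec::new();
        for (seq, (_payload, sent)) in self.in_flight.iter_mut() {
            if now.duration_since(*sent) >= rto {
                *sent = now;
                due.push(*seq);
            }
        }
        due
    }
}
```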

At ZeroTier we've also run into cases where we need reliable, in-order transport. To solve this I created another protocol that runs on top of ZSSP, Sequential-Exchange. Its abstraction is still packets rather than a byte stream, but applications can simply interpret the payload of a packet as the next bytes of a byte stream. SEQEX was designed assuming an application will want to operate both a data-plane and a control-plane over the same encrypted tunnel, so it provides several means of creating and separating different streams of data with different transport guarantees. If you truly only need a single byte stream, though, a custom protocol would be able to have lower overhead.
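To make the "payload as the next bytes of a byte stream" idea concrete, here is a sketch of a trivial reassembler that concatenates payloads which a reliability layer has already delivered in order; this is an illustration only, not part of the SEQEX API:

```rust
// Illustrative only: turn in-order packet payloads into a contiguous stream.
struct StreamReassembler {
    buffer: Vec<u8>,
}

impl StreamReassembler {
    fn new() -> Self {
        Self { buffer: Vec::new() }
    }

    /// Append a payload that the reliability layer has already delivered
    /// in order.
    fn push_payload(&mut self, payload: &[u8]) {
        self.buffer.extend_from_slice(payload);
    }

    /// Drain up to `n` bytes of the reassembled byte stream.
    fn read(&mut self, n: usize) -> Vec<u8> {
        let n = n.min(self.buffer.len());
        self.buffer.drain(..n).collect()
    }
}
```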

617a7aa (Author) commented Feb 6, 2024

Thank you for the quick response. Sorry for taking so long to reply - I've been researching/learning and expanding on this since.

As far as I'm aware, a TCP-like protocol could be implemented either on top of ZSSP or beneath it, in the form of actual TCP. The disadvantage of TCP underneath, however, is that all data sent across the connection has to be ordered, which would make parallel encryption/transmission across a single connection very slow because of the synchronization required.

In our application:

  • All data needs to be delivered
  • Some of this data (control messages/metadata) needs to be ordered and is sent from a single thread
  • The vast majority of messages (our data) don't need to be ordered and are sent and received from many threads

For simplicity, we only want to maintain one connection to each peer (I think ZSSP enforces this anyway?). However, we want to scale this one connection across multiple threads. We're receiving raw packets using netmap, and these packets are fanned out across multiple threads. We can optionally enforce, by hashing the (proto, src, src port, dest, dest port) tuple, that all packets for a single connection are delivered to a single CPU core, though this limits the performance of a single ZSSP connection to one core. Ideally, we want to avoid that and scale ingress (and egress) packet processing across multiple threads, even if they're on the same core. This would encapsulate ZSSP, SEQEX and datalink processing. All data is then offloaded to separate cores for {de,}compression and processing, depending on the type of data.
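For reference, the flow-pinning variant would look roughly like the following; the key fields and hasher choice here are assumptions for the example, not anything prescribed by netmap or ZSSP:

```rust
// Illustrative only (not part of ZSSP, SEQEX, or netmap): pin every packet of
// one flow to the same ingress worker by hashing its 5-tuple.
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};
use std::net::IpAddr;

#[derive(Hash)]
struct FlowKey {
    proto: u8,
    src: IpAddr,
    src_port: u16,
    dst: IpAddr,
    dst_port: u16,
}

/// Map a flow deterministically onto one of `workers` ingress threads.
fn worker_for(flow: &FlowKey, workers: usize) -> usize {
    let mut h = DefaultHasher::new();
    flow.hash(&mut h);
    (h.finish() % workers as u64) as usize
}
```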

There's another conversation here to determine whether decompression should be done on the core that received the packet, or whether the payload should be heap-allocated and moved to another thread for decompression. Netmap (or rather the library we're actually using, libpnet) allows us to receive a packet by getting a &[u8] reference directly to the packet on the network card. This obviously can't be mutated, so a copy will need to be made to decrypt it. Even after decompression, ownership of this data is moved across several threads several times. Deserialization is zero-copy, so the data will remain as-is in memory and be passed between the threads that process it. Heap-allocation is probably better here, but as I said... another conversation.
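The copy-then-decrypt step would look something like this; `decrypt_in_place` is a hypothetical stand-in for the session's receive path, not an actual ZSSP function:

```rust
// Sketch of the copy-then-decrypt step: the &[u8] handed out by the NIC ring
// cannot be mutated, so the ciphertext is copied into an owned buffer and then
// decrypted in place.
fn receive(raw_frame: &[u8]) -> Vec<u8> {
    // Heap-allocate an owned copy; the ring slot can be recycled immediately.
    let mut owned = raw_frame.to_vec();
    decrypt_in_place(&mut owned);
    owned // ownership can now move freely between processing threads
}

fn decrypt_in_place(_buf: &mut [u8]) {
    // Placeholder so the sketch is self-contained.
}
```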

Is SEQEX capable (as a protocol) of handling this, given that the actual use (multiple streams, all delivered, some ordered, some not) is very similar to ZT? All of our messages/payloads will be under MAX_FRAGMENTS * 1500 bytes (72 kB), so the higher-level protocol won't need to do any fragmentation itself. At first glance, something along the lines of SEQEX's sync/tokio module will need to be written by us to work with our more custom setup. Our datalink processing is synchronous, but the "connection" abstraction that tokio/std provides doesn't exist, and we'll probably make this an async interface for ease of use and performance.

617a7aa changed the title from "Using the ZFP header for packet reliability" to "Questions/research into using ZSSP in a project" on Feb 6, 2024
mamoniot (Contributor) commented

From what I can tell, yes, SEQEX can handle the design requirements you have described. Packets which pass through SEQEX are guaranteed to be delivered, and only packets which are marked with the "SeqCst" flag are guaranteed to be delivered in order. SEQEX ingress and egress are designed to scale across multiple threads to the maximum extent possible while providing these guarantees. A mutex is required to update the SEQEX send and receive windows safely, and some kind of condition-variable primitive will be required if you want to guarantee that all sends will succeed. Beyond this, all threads using SEQEX can operate in parallel. "Incoming" and "outgoing" packets within our implementation of SEQEX are also abstracted behind generics, so it is relatively easy to compare the performance of heap-allocation vs zero-copy vs other forms of memory management. SEQEX at its heart is a low-level, data-model-agnostic protocol, so it will generally require a custom interface to use properly.
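As a rough sketch of the mutex-plus-condition-variable pattern described above (the window type here is a stand-in for illustration, not the real SEQEX state):

```rust
// Illustrative sketch: a shared send-window behind a Mutex, plus a Condvar so
// sending threads can block until window space frees up. Not the SEQEX API.
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};

struct Window {
    in_flight: VecDeque<Vec<u8>>,
    capacity: usize,
}

#[derive(Clone)]
struct SharedWindow(Arc<(Mutex<Window>, Condvar)>);

impl SharedWindow {
    fn new(capacity: usize) -> Self {
        SharedWindow(Arc::new((
            Mutex::new(Window { in_flight: VecDeque::new(), capacity }),
            Condvar::new(),
        )))
    }

    /// Block until there is room in the window, then enqueue the payload.
    fn send_blocking(&self, payload: Vec<u8>) {
        let (lock, cvar) = &*self.0;
        let mut w = lock.lock().unwrap();
        while w.in_flight.len() >= w.capacity {
            w = cvar.wait(w).unwrap();
        }
        w.in_flight.push_back(payload);
    }

    /// Called from the receive path when an acknowledgement arrives.
    fn on_ack(&self) {
        let (lock, cvar) = &*self.0;
        let mut w = lock.lock().unwrap();
        w.in_flight.pop_front();
        cvar.notify_one();
    }
}
```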

Yes, ZSSP strictly enforces one connection to each peer. There is a means of efficiently defining many "peers" on a single machine, but it sounds like that would be unnecessary here.
