
Change Gson to faster serialization library #3624

Open
3 of 4 tasks
andreasdc opened this issue Feb 27, 2024 · 68 comments
Comments

@andreasdc

Feature description

I was looking at the profiler and saw that serializing and deserializing strings such as scoreboard components takes some processing time; maybe it would be a good idea to change to a faster library? The servers run a scoreboard plugin, and I think a tablist plugin like https://github.com/CodeCrafter47/BungeeTabListPlus would see better performance too.
[profiler screenshots]

Goal of the feature

Looking at the results here, Gson, which is currently used in BungeeCord, got 8,677 ns, while for example fury-fastest got 442 and Jackson 1,994. That is an almost 20x improvement with Fury, or 4x with Jackson.
https://github.com/eishay/jvm-serializers/wiki

Unfitting alternatives

Checking

  • This is not a question or plugin creation help request.
  • This is a feature or improvement request.
  • I have not read these checkboxes and therefore I just ticked them all.
  • I did not use this form to report a bug.
@Outfluencer
Contributor

Outfluencer commented Feb 27, 2024

I personally added a component serialization and deserialization cache to my Bungee to speed up exactly this.
I think not using Gson is a bad idea, as Gson is the most widely used lib.

@andreasdc
Author

> I personally added a component serialization and deserialization cache to my Bungee to speed up exactly this. I think not using Gson is a bad idea, as Gson is the most widely used lib.

Can I see how you did that? Did you see any improvement? If the behaviour is exactly the same and only the performance is better, I think it's a good idea to implement.

@Outfluencer
Contributor

Behaviour is the same, but I have no benchmarks.

@Outfluencer
Contributor

I added a thread-local limited cache that only caches the most-used components.

@andreasdc
Author

> I added a thread-local limited cache that only caches the most-used components.

With many components that don't repeat, it will probably be slower than the normal path.

@Outfluencer
Contributor

Outfluencer commented Feb 27, 2024

In most cases one component will be sent to multiple players and get deserialized multiple times, so in most cases it's much faster.

@Outfluencer
Contributor

The only case that has no performance boost (I would not even call it slower) is when you send a single component to only one player.

@andreasdc
Author

andreasdc commented Feb 27, 2024

Oh yes, with some players I see that it will improve things a bit. Even proxy pinging will be improved, along with chat, bossbar and scoreboard, and every tab component like Item and Header/Footer uses Gson too (BungeeTabListPlus would be improved already, because it uses ComponentSerializer.deserialize()). If it won't break anything and would help up to 20x, then I think it's a good idea.
[profiler screenshots]
In some projects I saw that the ProxyPing packet is cached; I think that would be a nice idea too.

@Janmm14
Contributor

Janmm14 commented Feb 27, 2024

Just because a library is widely used doesn't mean that Bungee should stick with it. Bungee would only have to carry the library around as dead weight for a couple of years in case plugins use it (file size is not a big problem), but it could switch to a faster serialization library at any time, especially when a new library appears to be an order of magnitude faster in benchmarks.

Edit: However, it seems that Fury, for example, is not yet in a usable state. It does not support JSON (de-)serialization.

@andreasdc
Author

> Just because a library is widely used doesn't mean that Bungee should stick with it. Bungee would only have to carry the library around as dead weight for a couple of years in case plugins use it (file size is not a big problem), but it could switch to a faster serialization library at any time, especially when a new library appears to be an order of magnitude faster in benchmarks.
>
> Edit: However, it seems that Fury, for example, is not yet in a usable state. It does not support JSON (de-)serialization.

That's not it?
[screenshot]

@Janmm14
Contributor

Janmm14 commented Feb 27, 2024

> Just because a library is widely used doesn't mean that Bungee should stick with it. Bungee would only have to carry the library around as dead weight for a couple of years in case plugins use it (file size is not a big problem), but it could switch to a faster serialization library at any time, especially when a new library appears to be an order of magnitude faster in benchmarks.
>
> Edit: However, it seems that Fury, for example, is not yet in a usable state. It does not support JSON (de-)serialization.

> That's not it? [screenshot]

No, that does not produce JSON; it serializes to bytes with a custom format, or maybe it uses the JVM's serialization format.
I am not sure whether Fury is actually intended to serialize to JSON at all, given there is no explicit mention of JSON in the documentation or code.
The existing documentation is also lacking a lot.

@andreasdc
Author

Hmm, the fastest one that has JSON in its name is https://github.com/ngs-doo/dsl-json, which is almost 8x faster than Gson.

@Outfluencer
Contributor

The problem with the cache is that a cached component is mutable. In most cases that's not a problem, but if you use components in unusual ways it can be.

@Outfluencer
Contributor

That's how I thought we could do it:

[code screenshot]

@Outfluencer
Contributor

In this case the IdentityLinkedHashMap is limited to 128 entries, and the oldest value in the cache (the one used least recently) is removed any time a new one is added.
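A minimal sketch of the cache described above, assuming a `LinkedHashMap` in access order standing in for the `IdentityLinkedHashMap` (all names here are illustrative; the actual implementation only appears in the thread as a screenshot):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of a thread-local, size-limited, identity-keyed serialization cache.
// Names are illustrative; this is not BungeeCord or Outfluencer's actual code.
public final class SerializationCache {
    private static final int MAX_ENTRIES = 128;

    // Wrap keys so lookups use reference identity, as discussed above.
    private static final class IdentityKey {
        final Object ref;
        IdentityKey(Object ref) { this.ref = ref; }
        @Override public int hashCode() { return System.identityHashCode(ref); }
        @Override public boolean equals(Object o) {
            return o instanceof IdentityKey && ((IdentityKey) o).ref == ref;
        }
    }

    // One LRU map per thread, so no synchronization is needed.
    private static final ThreadLocal<Map<IdentityKey, String>> CACHE =
            ThreadLocal.withInitial(() -> new LinkedHashMap<IdentityKey, String>(MAX_ENTRIES, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<IdentityKey, String> eldest) {
                    return size() > MAX_ENTRIES; // evict the least recently used entry
                }
            });

    public static String serialize(Object component, Function<Object, String> serializer) {
        return CACHE.get().computeIfAbsent(new IdentityKey(component), k -> serializer.apply(k.ref));
    }
}
```

Being thread-local, each worker thread gets its own 128-entry map, which avoids synchronization at the cost of some duplication across threads.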

@andreasdc
Author

I don't think using a cache, when you don't know what will go into the JSON, is a good idea. Many JSON strings will be unique, containing different things like stats, names, ping, etc. You could technically check whether a string is unique, but that would still use extra resources. IMO using a better library will increase JSON performance by up to 8x and should bring great results. 🚀 💘

@Outfluencer
Contributor

> I don't think using a cache, when you don't know what will go into the JSON, is a good idea. Many JSON strings will be unique, containing different things like stats, names, ping, etc. You could technically check whether a string is unique, but that would still use extra resources. IMO using a better library will increase JSON performance by up to 8x and should bring great results. 🚀 💘

That's why I am checking for identity instead, so we don't lose any performance.

@Janmm14
Contributor

Janmm14 commented Feb 27, 2024

> I don't think using a cache, when you don't know what will go into the JSON, is a good idea. Many JSON strings will be unique, containing different things like stats, names, ping, etc. You could technically check whether a string is unique, but that would still use extra resources. IMO using a better library will increase JSON performance by up to 8x and should bring great results. 🚀 💘
>
> That's why I am checking for identity instead, so we don't lose any performance.

This is not a suitable general solution, as BaseComponents are not final, and a component or its extras could change at any point in time without its identity changing.

With an identity-based hash map, I do not see any actual cache hits in vanilla BungeeCord operation.

@Outfluencer
Contributor

Outfluencer commented Feb 28, 2024

> I don't think using a cache, when you don't know what will go into the JSON, is a good idea. Many JSON strings will be unique, containing different things like stats, names, ping, etc. You could technically check whether a string is unique, but that would still use extra resources. IMO using a better library will increase JSON performance by up to 8x and should bring great results. 🚀 💘
>
> That's why I am checking for identity instead, so we don't lose any performance.
>
> This is not a suitable general solution, as BaseComponents are not final, and a component or its extras could change at any point in time without its identity changing.
>
> With an identity-based hash map, I do not see any actual cache hits in vanilla BungeeCord operation.

You're 100% right, it's not a solution I would recommend or make a PR for, but maybe in this case it could help him.

In my opinion we should not change the lib.

@andreasdc
Author

> I don't think using a cache, when you don't know what will go into the JSON, is a good idea. Many JSON strings will be unique, containing different things like stats, names, ping, etc. You could technically check whether a string is unique, but that would still use extra resources. IMO using a better library will increase JSON performance by up to 8x and should bring great results. 🚀 💘
>
> That's why I am checking for identity instead, so we don't lose any performance.
>
> This is not a suitable general solution, as BaseComponents are not final, and a component or its extras could change at any point in time without its identity changing.
> With an identity-based hash map, I do not see any actual cache hits in vanilla BungeeCord operation.
>
> You're 100% right, it's not a solution I would recommend or make a PR for, but maybe in this case it could help him.
>
> In my opinion we should not change the lib.

What's the reason you prefer Gson?

@Outfluencer
Contributor

Most used JSON lib, I guess; also, I've personally used it in all of my projects. I don't think we should recode all JSON-related code just to get 0.01% better CPU times. Also, components are not sent by the client, so it's not exploitable.

@Janmm14
Contributor

Janmm14 commented Feb 29, 2024

I took a look at some of the Java JSON libraries ranking high here: https://github.com/fabienrenaud/java-json-benchmark
I looked at fastjson, dsl-json and avaje-jsonb.

Fastjson is missing documentation in English. The other two are also missing some documentation for the more customizable parts.

I didn't look further into fastjson, but dsl-json and avaje-jsonb lack the flexibility of redirecting the target deserialization object like we do in the ComponentSerializer class. These fast libraries use annotations to generate code at compile time, where a JSON parser parses the string directly into the target object. Customized serialization is possible, but to get back the flexibility we'd basically have to write our own parser to a JSON object and then code to interpret it into our Component classes. Gson provides this flexibility already.

I did not look at more libraries; the performance improvement is no longer high enough that I'd consider it worth the effort.
I also think that, with Bungee's heavy use of custom serializers, we are likely near the top of Gson's performance, as there is less reflection involved.
(I did not look at the Gson usage in the benchmark, but if it doesn't use registered serializers and relies on reflection instead, there's a chance we are even faster than the benchmark measured.)

@andreasdc
Author

I think it would help massively. I summed up all Gson usage and it was 4.12%; it would help with scoreboard, bossbar, chat, server pinging, BungeeTabListPlus and more. That was measured during a big attack, which had no impact on Gson. Without the attack, Gson operations take about 8% of CPU time.

@Outfluencer
Contributor

And what do you expect from the change? 1% less CPU time?

@andreasdc
Author

> And what do you expect from the change? 1% less CPU time?

Well, initially I saw an 8x improvement, but I don't know which lib it is possible to use.

@bob7l
Contributor

bob7l commented Mar 7, 2024

Bungee's serialization is complicated and deep. Changing the JSON library probably wouldn't net a significant performance improvement. Deeper profiling should be done to see exactly what's taking the most time. If I get time, I'll drop some timings from YourKit here.

@andreasdc
Author

I did my timings using Spark, and I described the usage above.

@bob7l
Contributor

bob7l commented Mar 8, 2024

> I did my timings using Spark, and I described the usage above.

Yeah, but profiling with something like YourKit will show very detailed timings.

Anyway, here are some profiled results. This is from 130,584 total milliseconds. I just looked into deserialization, since it's likely the bigger resource hog.

Part1: https://i.gyazo.com/9d56a65540433521f7e34b7b33943137.png
Part2: https://i.gyazo.com/e0fa9bf1dc0141feb981bca81a1e7e2c.png

Flamegraph: https://i.gyazo.com/d5c470b668985eccb58b325dc67e8ce7.png

This is just on a test server with a bunch of fake (1.8.9) players and a single real 1.20.4 player, so the data isn't great. It would be interesting to see it with live 1.20.4+ players.

A large amount of the timings is just Bungee's own deserialization, which is to be expected; it's pretty intensive. A good amount also goes to Gson's internal use of LinkedTreeMap. I imagine most if not all of these libraries use a linked hash map or a similar implementation with string keys.

Here's one part of the flamegraph I found interesting: https://i.gyazo.com/3e392e51e317ce3886c55f5dfda8ac0e.png

The toUpperCase call is using a pretty large amount of time: https://github.com/SpigotMC/BungeeCord/blob/1b88a8471077929bcfbd3a5bd6c7cfdf93df92de/chat/src/main/java/net/md_5/bungee/api/ChatColor.java#L267C49-L267C60

Which would likely be a pretty simple optimization.

A second minor optimization in that same function would be to flip ( string.startsWith( "#" ) && string.length() == 7 ) so that it checks the length first.
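For illustration, the flipped check could look like this (`isHexColor` is a hypothetical helper; BungeeCord's ChatColor.of does the check inline). length() is a constant-time field read, so it rejects most non-hex inputs before startsWith compares any characters:

```java
public final class HexCheck {
    // Flipped order: the O(1) length check runs before startsWith
    // does any character comparison, as suggested above.
    static boolean isHexColor(String string) {
        return string.length() == 7 && string.startsWith("#");
    }
}
```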

@md-5
Member

md-5 commented Mar 8, 2024

> Which would likely be a pretty simple optimization.

What's the optimisation?

@bob7l
Contributor

bob7l commented Mar 8, 2024

> Which would likely be a pretty simple optimization.
>
> What's the optimisation?

I meant that the toUpperCase could be easily optimized, not the whole thing. The whole thing is already well written.

For the uppercase, you could add the different uppercase/lowercase variants to the map rather than changing the string's case. It would be a couple of thousand entries in total, though.

It's only 8.33% of the ChatColor.of function, though, so it's mostly a micro-optimization...
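A sketch of that idea with illustrative names (BungeeCord's real map is ChatColor.BY_NAME and maps to ChatColor instances, not strings): registering case variants up front lets the lookup skip the per-call toUpperCase. Only the all-upper and all-lower variants are added here; covering every mixed-case permutation is what would produce the couple of thousand entries mentioned above.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: pre-register case variants so lookups avoid toUpperCase().
// Names and the String value type are simplifications for illustration.
public final class ColorLookup {
    private static final Map<String, String> BY_NAME = new HashMap<>();

    static void register(String canonicalName) {
        // Only the two common variants are added here; registering every
        // case permutation would give the "couple thousand entries" above.
        BY_NAME.put(canonicalName.toUpperCase(), canonicalName);
        BY_NAME.put(canonicalName.toLowerCase(), canonicalName);
    }

    static String of(String name) {
        return BY_NAME.get(name); // no per-lookup toUpperCase() call
    }

    static {
        register("DARK_RED");
        register("GOLD");
    }
}
```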

@Outfluencer
Contributor

Lazy loading would be implementable for all packets that directly try to parse the payload as a BaseComponent.

@Janmm14
Contributor

Janmm14 commented Mar 8, 2024

It's a little contradictory that our BY_NAME map uses upper case while we serialize to lower case.
We should examine whether other chat libraries use uppercase or lowercase after serialization, and then switch either our serialization or the BY_NAME map. My guess is that it might be faster to call lowercase on something that is already lowercase. In theory, we could also attempt length and charAt checks with a final equals check against single values instead of a map. We do not expect any more named colors, and a new boolean style option is also not expected.
(I checked the maps ChatColor uses, and there are no hash collisions, by the way.)

What I noticed more in the benchmark is the huge JsonElement.has usage inside ComponentStyleSerializer, which actually takes most of the deserialization time.
This can be optimized a little by using get and a null check to optimize "hits".
For all chat JSONs the total number of modifiers should not be that high, so I think iterating over the entrySet of the JSON object with a switch inside should be faster, even though Gson uses a LinkedTreeMap.
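The entrySet-plus-switch pattern described above, sketched against a plain Map to stay self-contained (Gson's JsonObject exposes an analogous entrySet(); the field names and return type here are simplified assumptions, not BungeeCord's actual serializer):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the entrySet+switch deserialization pattern described above.
// A plain Map stands in for Gson's JsonObject to keep the example self-contained.
public final class StyleParser {
    static Map<String, Boolean> parseStyle(Map<String, Object> json) {
        Map<String, Boolean> style = new LinkedHashMap<>();
        // One pass over the keys actually present, instead of a
        // has()/get() probe pair for every known modifier key.
        for (Map.Entry<String, Object> entry : json.entrySet()) {
            switch (entry.getKey()) {
                case "bold":
                case "italic":
                case "strikethrough":
                    style.put(entry.getKey(), (Boolean) entry.getValue());
                    break;
                default:
                    // unknown keys (e.g. "text", "extra") are handled elsewhere
                    break;
            }
        }
        return style;
    }
}
```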

I have pushed these little optimizations from the second paragraph here: https://github.com/Janmm14/BungeeCord/tree/chat-optimization
@bob7l can you maybe compare the performance?

Lazily deserializing chat might be a valuable optimization as well.

@andreasdc
Author

I don't see JsonElement.has in my profilers.
[profiler screenshot]

@Janmm14
Contributor

Janmm14 commented Mar 8, 2024

> I don't see JsonElement.has in my profilers.

You are looking at the serialization part (objects -> JSON objects -> JSON string), while bob7l and I found things in the deserialization part (JSON string -> JSON objects -> objects).

@bob7l
Contributor

bob7l commented Mar 8, 2024

> I don't see JsonElement.has in my profilers.

Serialization/deserialization differences aside, you're using Spark's sampler. While it's good for getting a ballpark idea of what's taking up time, it's not very accurate and usually won't show quick functions like map.get and so on.

Whereas I'm using instrumentation/tracing, which offers a more precise measurement and is able to pick up finer details. It's also a lot more useful in cases like this, with massive trees from having to iterate over many child components.

You should be able to grab a free trial of YourKit. Incredible profiler

@andreasdc
Author

ComponentSerializer.toString uses gson.toJson; there's nothing about JsonElement.has there.

@Janmm14
Contributor

Janmm14 commented Mar 8, 2024

> ComponentSerializer.toString uses gson.toJson; there's nothing about JsonElement.has there.

Yes. That is the "serializing" part: Java objects to JSON string.

Gson is also involved in the "deserializing" part: JSON string to Java objects.
This is done with the ComponentSerializer.deserialize method (previously .parse).

Different method, still used a lot in Bungee.

If we move to handling "chat" data from packets only on demand, that could save us both steps in a lot of cases, leaving only Bungee plugins that send chat affected on the Bungee side (as the chat module is also used in Spigot, serialization speed remains somewhat of a concern there).

@Janmm14
Contributor

Janmm14 commented Mar 9, 2024

In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).

Edit 2:
It's untested, by the way. Maybe someone has a better way of implementing it?
It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

@andreasdc
Author

andreasdc commented Mar 10, 2024

> In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).
>
> Edit 2: It's untested, by the way. Maybe someone has a better way of implementing it? It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

I think one of the patches, dd338c5 or Janmm14@5b12adb, broke the bold option in the tablist's entries and header/footer.

@andreasdc
Author

And it's the same for strikethrough too.

@Janmm14
Contributor

Janmm14 commented Mar 10, 2024

Weird, I will test later, investigate, and find the probably dumb mistake I made.

@andreasdc
Author

andreasdc commented Mar 11, 2024

Weirder still: I'm testing with master and I still have this issue. I forced this to true and I don't see anything in the tablist as bold; should it be like that? I didn't know where to look.

I did it here https://github.com/Janmm14/BungeeCord/blob/5b12adb97f0e9db28c123a4d3ef88fd0909ed730/chat/src/main/java/net/md_5/bungee/chat/ComponentStyleSerializer.java#L45
and here
https://github.com/Janmm14/BungeeCord/blob/5b12adb97f0e9db28c123a4d3ef88fd0909ed730/chat/src/main/java/net/md_5/bungee/chat/ComponentStyleSerializer.java#L81
and now it's bold.

@andreasdc
Author

Yeah, that's what I thought happened; it's an issue inside BungeeCord. #3631

@bob7l
Contributor

bob7l commented Mar 13, 2024

> In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).
>
> Edit 2: It's untested, by the way. Maybe someone has a better way of implementing it? It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

Probably the most basic way of implementing it without breaking anything. Hopefully it has a chance of getting merged. De/serialization is only going to get heavier and heavier in the future.

@andreasdc
Author

> In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).
> Edit 2: It's untested, by the way. Maybe someone has a better way of implementing it? It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

> Probably the most basic way of implementing it without breaking anything. Hopefully it has a chance of getting merged. De/serialization is only going to get heavier and heavier in the future.

How does it work, by the way? I don't understand it too well.

@bob7l
Contributor

bob7l commented Mar 13, 2024

> In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).
> Edit 2: It's untested, by the way. Maybe someone has a better way of implementing it? It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

> Probably the most basic way of implementing it without breaking anything. Hopefully it has a chance of getting merged. De/serialization is only going to get heavier and heavier in the future.

> How does it work, by the way? I don't understand it too well.

He's not deserializing the string payload unless it's requested. So when it comes time to write the packet, it just writes the string and doesn't need to serialize the text components back into a string, assuming it was never deserialized.

@Janmm14
Contributor

Janmm14 commented Mar 13, 2024

> In the branch chat-optimization-v2 I have created an implementation of lazy deserialization in packets, with methods that hopefully retain full bytecode compatibility (Edit: for the individual packets, not for DefinedPacket methods).
> Edit 2: It's untested, by the way. Maybe someone has a better way of implementing it? It should be possible to replace the long generics of Deserializable with static types for shorter code, but I'm unsure whether that's desired over flexibility towards similar approaches for other complex deserialization.

> Probably the most basic way of implementing it without breaking anything. Hopefully it has a chance of getting merged. De/serialization is only going to get heavier and heavier in the future.

> How does it work, by the way? I don't understand it too well.

> He's not deserializing the string payload unless it's requested. So when it comes time to write the packet, it just writes the string and doesn't need to serialize the text components back into a string, assuming it was never deserialized.

Once deserialized, the code will ignore the previously serialized string, as the deserialized representation could have been changed or updated, because it was requested.

I think I am using the Deserializable thing just with the same generic parameters. Should I, before creating the pull request, switch it to be a ChatComponentHolder or so?
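The lazy scheme discussed here could be sketched roughly like this (class and method names are hypothetical; the actual implementation lives in Janmm14's chat-optimization-v2 branch):

```java
import java.util.function.Function;

// Sketch of lazy packet-payload deserialization as described above.
// Names are illustrative; this is not the chat-optimization-v2 code.
public final class Lazy<S, T> {
    private final S serialized;
    private final Function<S, T> deserializer;
    private T value;          // only populated on first get()
    private boolean touched;  // has anyone asked for the object form?

    public Lazy(S serialized, Function<S, T> deserializer) {
        this.serialized = serialized;
        this.deserializer = deserializer;
    }

    public T get() {
        if (!touched) {
            value = deserializer.apply(serialized);
            touched = true;
        }
        return value;
    }

    // When writing the packet back out: if nobody deserialized the payload,
    // forward the original string untouched and skip re-serialization.
    // Once get() has been called, re-serialize, since the object may have changed.
    public S write(Function<T, S> serializer) {
        return touched ? serializer.apply(value) : serialized;
    }
}
```

This matches bob7l's description above: an untouched payload passes straight through, and both the deserialize and re-serialize steps are skipped.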

@bob7l
Contributor

bob7l commented Mar 14, 2024

> I think I am using the Deserializable thing just with the same generic parameters. Should I, before creating the pull request, switch it to be a ChatComponentHolder or so?

I'd just open the PR and see what md_5 says. I think Deserializable is perfect.

@md-5
Member

md-5 commented Mar 14, 2024

I'm not sure I understand the question

@Janmm14
Contributor

Janmm14 commented Mar 14, 2024

> I'm not sure I understand the question

Whether in chat-optimization-v2 it should stay as Deserializable<String, BaseComponent>, or whether I should remove the generics and call it something like ChatComponentHolder.

@md-5
Member

md-5 commented Mar 15, 2024

You could have a fixed subclass of the generic one? I don't think it really matters

@Janmm14
Contributor

Janmm14 commented Mar 15, 2024

> You could have a fixed subclass of the generic one? I don't think it really matters

That doesn't work that easily; a fixed sub-interface would need different implementations. So I kept it as it was.
PR here: #3634

@NEZNAMY

NEZNAMY commented Mar 15, 2024

While on this topic, I found something interesting I wanted to ask about, if anyone knows.
BungeeCord deserializes scoreboard packets coming from the backend in order to kick all players when a plugin accidentally double-registers a team. However, this deserialized packet is not forwarded into the channel pipeline like other packets (such as PlayerListItem); instead, the raw ByteBuf is. Why is that? Why does BungeeCord not forward the decoded scoreboard packet but the raw ByteBuf instead? And why does it properly forward other packets?

@Janmm14
Contributor

Janmm14 commented Mar 15, 2024

@NEZNAMY please link or mention the involved places in the code. I do not think this is happening.

The deserialization of scoreboard-related packets is not just to kick players; the client would disconnect itself otherwise.
It is mainly to remove scoreboard state from the previous server correctly on server switches in older versions.

@NEZNAMY

NEZNAMY commented Mar 15, 2024

That's the issue: I didn't find the part of the code that is "wrong". While I disagree with what you are saying, that's not the point here. The point is that BungeeCord deserializes these packets, but when injecting a custom duplex handler with a plugin and reading the "write" direction, only the raw buffer is forwarded, so if I want to listen to those packets, I need to deserialize them once again, which overcomplicates things and hurts performance as well.

@andreasdc
Author

> That's the issue: I didn't find the part of the code that is "wrong". While I disagree with what you are saying, that's not the point here. The point is that BungeeCord deserializes these packets, but when injecting a custom duplex handler with a plugin and reading the "write" direction, only the raw buffer is forwarded, so if I want to listen to those packets, I need to deserialize them once again, which overcomplicates things and hurts performance as well.

Is this what you mean? #3503

@NEZNAMY

NEZNAMY commented Mar 15, 2024

That's one of the reasons why BungeeCord deserializes these packets, yes. Since the job is already done, it would be nice to see the deserialized packet forwarded to the pipeline instead of the raw buffer. Or is it done to save resources while encoding?

@Janmm14
Contributor

Janmm14 commented Mar 15, 2024

> That's the issue: I didn't find the part of the code that is "wrong". While I disagree with what you are saying, that's not the point here. The point is that BungeeCord deserializes these packets, but when injecting a custom duplex handler with a plugin and reading the "write" direction, only the raw buffer is forwarded, so if I want to listen to those packets, I need to deserialize them once again, which overcomplicates things and hurts performance as well.

Actually, this is intended behaviour and in fact an optimization inside ChannelWrapper's write method. It applies to all registered packets whose handlers in the classes extending PacketHandler (like DownstreamBridge) do not throw CancelSendSignal.INSTANCE.

Either way, your code is not using the API, and some further BungeeCord optimizations that are lying dormant in PRs here (but implemented, for example, in IvanCord) could break your system of bypassing all existing BungeeCord tooling for packet reading and hacking your way into the Netty pipeline.
Move the code to your Spigot servers and edit the packets right where they are created! Keep the Bungee clean, stable and running fast.

@NEZNAMY

NEZNAMY commented Mar 15, 2024

> That's the issue: I didn't find the part of the code that is "wrong". While I disagree with what you are saying, that's not the point here. The point is that BungeeCord deserializes these packets, but when injecting a custom duplex handler with a plugin and reading the "write" direction, only the raw buffer is forwarded, so if I want to listen to those packets, I need to deserialize them once again, which overcomplicates things and hurts performance as well.

> Actually, this is intended behaviour and in fact an optimization inside ChannelWrapper's write method. It applies to all registered packets whose handlers in the classes extending PacketHandler (like DownstreamBridge) do not throw CancelSendSignal.INSTANCE.

> Either way, your code is not using the API, and some further BungeeCord optimizations that are lying dormant in PRs here (but implemented, for example, in IvanCord) could break your system of bypassing all existing BungeeCord tooling for packet reading and hacking your way into the Netty pipeline. Move the code to your Spigot servers and edit the packets right where they are created! Keep the Bungee clean, stable and running fast.

Thanks for the info; it looks like it is intended.
Moving the code to the backend would make things drastically more complicated than they are with a proxy installation, and since I'm doing public development, tons of things are out of my control, making it even harder and forcing these kinds of tricks to at least partially gain control over important things. Public development sucks. I should leave.
