This is a niche setup; however, if your network is 100% dynamically configured, it is a handy feature to have available.
To support this functionality, a new PlayerChooseInitialServerEvent was added so that the initial server a player connects to can be changed as desired.
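As a rough sketch of how a plugin might hook into this (the listener class and the "hub" server name are illustrative, not part of the commit):

```java
import com.velocitypowered.api.event.Subscribe;
import com.velocitypowered.api.event.player.PlayerChooseInitialServerEvent;
import com.velocitypowered.api.proxy.ProxyServer;
import com.velocitypowered.api.proxy.server.RegisteredServer;

import java.util.Optional;

public final class InitialServerListener {

  private final ProxyServer proxy;

  public InitialServerListener(ProxyServer proxy) {
    this.proxy = proxy;
  }

  @Subscribe
  public void onChooseInitialServer(PlayerChooseInitialServerEvent event) {
    // Route the player to a dynamically registered "hub" server if one exists;
    // otherwise the event's default initial server is left untouched.
    Optional<RegisteredServer> hub = proxy.getServer("hub");
    hub.ifPresent(event::setInitialServer);
  }
}
```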
This commit absolutely does not change our support policy: this remains a completely unsupported setup. In any event, an existing forwarding check in Velocity covers this case quite well.
I am making this change to make the login process less "chatty" for
higher-latency links and 1.13+ servers.
The compression itself is zero-copy in most cases. However, the overhead of copying a direct buffer into a heap buffer (and back) is still present in the remaining cases. If possible, use Linux for the best performance.
All AES implementations in use are 'copy-safe': the source and destination arrays may be the same. Let's save ourselves a copy and reap the performance wins!
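A minimal sketch of in-place encryption with the JDK's Cipher API (the key, IV, and payload here are placeholders, not what Velocity actually uses):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;

public final class InPlaceAesSketch {

  public static void main(String[] args) throws Exception {
    // Placeholder key/IV; in practice these come from the login handshake.
    byte[] key = new byte[16];
    Cipher cipher = Cipher.getInstance("AES/CFB8/NoPadding");
    cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key, "AES"), new IvParameterSpec(key));

    byte[] data = "example packet bytes".getBytes(StandardCharsets.UTF_8);
    // "Copy-safe": the same array serves as both source and destination,
    // so the plaintext is overwritten with ciphertext without an extra buffer.
    cipher.update(data, 0, data.length, data, 0);
  }
}
```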
Because the Java Deflater/Inflater API supports ByteBuffers as of Java 11, we can provide performance equivalent to the Velocity 1.0.x native compression on servers running Java 11+ on platforms other than Linux and macOS (such as Windows).
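A minimal sketch of the Java 11+ ByteBuffer path (buffer sizes and the sample payload are arbitrary; this is not the Velocity compressor itself):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public final class ByteBufferCompressionSketch {

  public static void main(String[] args) throws Exception {
    byte[] payload = "example packet payload, example packet payload".getBytes(StandardCharsets.UTF_8);

    // Deflate straight from one direct buffer into another (Java 11+),
    // avoiding the heap-array round trip the older byte[] API required.
    ByteBuffer source = ByteBuffer.allocateDirect(payload.length);
    source.put(payload);
    source.flip();
    ByteBuffer compressed = ByteBuffer.allocateDirect(payload.length + 64);

    Deflater deflater = new Deflater();
    deflater.setInput(source);
    deflater.finish();
    deflater.deflate(compressed);
    deflater.end();
    compressed.flip();

    // Inflate back into a direct buffer the same way.
    ByteBuffer restored = ByteBuffer.allocateDirect(payload.length);
    Inflater inflater = new Inflater();
    inflater.setInput(compressed);
    inflater.inflate(restored);
    inflater.end();
  }
}
```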
If the remote server performs flush consolidation, Velocity will be able to frame all of its packets at once instead of having to constantly decode packets. This should provide a modest performance boost for such servers whilst not impacting unoptimized servers.
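As a rough illustration of why this helps (this is not Velocity's actual frame decoder; the class name and the three-byte length limit are assumptions based on varint length-prefixed framing):

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ByteToMessageDecoder;
import java.util.List;

// When the upstream server consolidates flushes, a single decode() call sees
// several complete frames in the cumulation buffer and can slice them all out
// in one pass instead of being invoked once per packet.
public final class VarintFrameDecoderSketch extends ByteToMessageDecoder {

  @Override
  protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
    while (in.isReadable()) {
      in.markReaderIndex();
      int length = readVarint(in);
      if (length == -1 || in.readableBytes() < length) {
        // Not enough bytes for a complete frame yet; wait for more data.
        in.resetReaderIndex();
        return;
      }
      out.add(in.readRetainedSlice(length));
    }
  }

  // Reads a varint-encoded frame length, returning -1 if the varint itself
  // is not yet complete. Frame lengths are limited to three varint bytes.
  private static int readVarint(ByteBuf buf) {
    int result = 0;
    for (int shift = 0; shift < 21; shift += 7) {
      if (!buf.isReadable()) {
        return -1;
      }
      byte b = buf.readByte();
      result |= (b & 0x7F) << shift;
      if ((b & 0x80) == 0) {
        return result;
      }
    }
    throw new IllegalArgumentException("VarInt frame length is too long");
  }
}
```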
zlib-ng boasts higher throughput than regular zlib by combining patches from Cloudflare, upstream zlib, and ARM with a more modern codebase.
Profiling consistently shows that compression is the largest CPU expense
by far, so even a minor speed-up here is significant.