It was using a redirect, but apparently a small portion of systems and
networks have problems with that redirect.
Just use the direct link and skip the hassle.
This change by Spigot means that many interactions with chunks,
e.g. getting a list of TEs, will cause the chunk to be marked to not
unload, blocking its unload. This is especially true for
servers using Timings (which needs to access the TE list of chunks), or
any plugin which needs to access entity/TE lists periodically.
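For reference, a minimal sketch of the kind of periodic access described above, using Bukkit's Chunk#getTileEntities(); the class and method names here are illustrative only:

    import org.bukkit.Chunk;
    import org.bukkit.World;
    import org.bukkit.block.BlockState;

    public class TileEntityScan {
        // Iterating every loaded chunk's TE list like this is the sort of
        // access that, under the Spigot change, would block those chunks
        // from unloading.
        public static int countTileEntities(World world) {
            int count = 0;
            for (Chunk chunk : world.getLoadedChunks()) {
                count += chunk.getTileEntities().length;
            }
            return count;
        }
    }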
At the time this was re-added, there was concern around how the JIT
would handle the system property that enabled it.
This shouldn't be a problem, and as such we no longer need to block
access to it.
The Vanilla Method Profiler will not provide much to most users; however,
there is no harm in providing it as an option. For most users, the
recommended and supported method for diagnosing performance issues with
Paper will continue to be Timings.
In some environments, the channel limit set by Spigot can cause issues,
e.g. on servers which allow and support the usage of mod packs.
Provide an optional flag to disable this check, at your own risk.
* Make the legacy ping handler more reliable
The Minecraft server often fails to respond to old ("legacy") pings
from old Minecraft versions using the protocol used before the switch
to Netty in Minecraft 1.7.
Due to packet fragmentation[1], we might not have all needed bytes
available when the LegacyPingHandler is called. In this case, it will
run into an error, remove the handler and continue using the modern
protocol.
This is unlikely to happen for the first two revisions of the legacy
ping protocol (used in Minecraft 1.5.x and older) since the request
consists of only one or two bytes, but happens frequently for the
last/third revision introduced in Minecraft 1.6.
It has much larger, variable packet sizes due to the inclusion of
the virtual host (the hostname/port used to connect to the server).
The solution[2] is simple: If we find more than two matching bytes,
we buffer the remaining bytes until we have enough to fully read and
respond to the request.
[1]: https://netty.io/wiki/user-guide-for-4.x.html#wiki-h3-11
[2]: https://netty.io/wiki/user-guide-for-4.x.html#wiki-h4-13
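A minimal sketch of the buffering approach described above, written as a plain Netty inbound handler; this is illustrative only and not Paper's actual LegacyPingHandler, and the two helper methods are hypothetical:

    import io.netty.buffer.ByteBuf;
    import io.netty.buffer.Unpooled;
    import io.netty.channel.ChannelHandlerContext;
    import io.netty.channel.ChannelInboundHandlerAdapter;

    public class BufferedLegacyPingHandler extends ChannelInboundHandlerAdapter {
        private ByteBuf buf; // accumulates partial 1.6 ping bytes across reads

        @Override
        public void channelRead(ChannelHandlerContext ctx, Object msg) {
            ByteBuf in = (ByteBuf) msg;
            try {
                if (buf == null) {
                    buf = Unpooled.buffer();
                }
                buf.writeBytes(in);
            } finally {
                in.release();
            }

            // Hypothetical helper: true once the full 0xFE 0x01 0xFA request
            // (including the variable-length virtual host payload) has arrived.
            if (isCompleteLegacyPing(buf)) {
                respondAndClose(ctx, buf);
                buf.release();
                buf = null;
            }
            // otherwise keep the bytes buffered and wait for the next read
        }

        private boolean isCompleteLegacyPing(ByteBuf data) { return false; } // omitted
        private void respondAndClose(ChannelHandlerContext ctx, ByteBuf data) {} // omitted
    }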
* Add legacy ping support to PaperServerListPingEvent
Add a new method to StatusClient to check if the client is a legacy
client that does not support all of the features provided in the
event.
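A hedged usage sketch of that check from a plugin listener; the getClient() accessor and the isLegacy() method name are assumed here:

    import com.destroystokyo.paper.event.server.PaperServerListPingEvent;
    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;

    public class LegacyPingListener implements Listener {
        @EventHandler
        public void onPing(PaperServerListPingEvent event) {
            // Assumed API: skip features a pre-1.7 client cannot display.
            if (event.getClient().isLegacy()) {
                return;
            }
            // ... customize player sample, icon, etc. for modern clients
        }
    }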
Some plugins (e.g. older builds of EssentialsX) assume that
ServerListPingEvent.getSampleText() returns a mutable copy of
the player sample list. This was the case until it was refactored
on top of the new API introduced in #980.
As a result, they may run into an error when trying to modify the
returned list directly. Modify getSampleText() to return a mutable
copy (a plain ArrayList) to mimic the old behavior.
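A minimal sketch of the compatibility behavior described above; the backing field name is illustrative only:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.List;

    class SampleTextCompat {
        private final List<String> playerSample = Collections.emptyList(); // placeholder backing list

        // Return a plain, mutable ArrayList copy so plugins that modify the
        // result directly (e.g. older EssentialsX builds) keep working.
        List<String> getSampleText() {
            return new ArrayList<>(playerSample);
        }
    }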
Don't want to risk mutating players' properties in the server list (unlikely, but let's be proper),
and Skull also has a setter API, so that should be used too.
* Drop original implementation for old player sample API
* Add extended PaperServerListPingEvent
Add a new event that extends the original ServerListPingEvent
and allows full control of the response sent to the client.
* Implement deprecated player sample API
I mistakenly thought .complete() also checked for textures, which was not the case,
so the logic was not working as desired.
Also, some undesired logic paths led to the textures of the logging-in player being dropped, forcing
us to always load the textures again immediately on login, leading to rate limits.
Everything's now good.
The .complete() API now defaults to also completing textures, but you may
pass false to it to skip loading textures.
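A usage sketch of that behavior, assuming Paper's PlayerProfile API (Bukkit#createProfile and complete(boolean)); the profile name is just an example:

    import com.destroystokyo.paper.profile.PlayerProfile;
    import org.bukkit.Bukkit;

    public class ProfileCompleteExample {
        public static void run() {
            PlayerProfile profile = Bukkit.createProfile("Notch");

            // Default: completes the name/UUID and also loads textures.
            profile.complete();

            // Pass false to skip the (rate-limited) texture lookup.
            profile.complete(false);
        }
    }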
Gets the unique ID of the player currently known as the specified player name
In Offline Mode, will return an Offline UUID
This is a more performant way to obtain a UUID for a name than loading an OfflinePlayer
This ensures we look up the name for ID-only profiles.
If the profile is in the UserCache, we can get those details quickly
This should avoid some unnecessary round trips.
Additionally, handle profiles for offline mode to use offline UUIDs.
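A hedged usage sketch, assuming the lookup described above is exposed as Bukkit#getPlayerUniqueId(String):

    import java.util.UUID;
    import org.bukkit.Bukkit;

    public class NameToUuidExample {
        public static UUID lookup(String playerName) {
            // Resolves from the user cache when possible (or an offline UUID
            // in offline mode) without constructing a full OfflinePlayer.
            return Bukkit.getPlayerUniqueId(playerName);
        }
    }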
Bukkit restricts command execution from signs to test if the sender
has permission to run the specified command. This breaks vanilla
maps that use signs to intentionally run with elevated permissions.
Bukkit provides an unrestricted advancements setting, so this setting
complements that one and allows for unrestricted signs.
We still filter sign update packets to strip out commands at the edit phase;
however, there is no sanity in ever expecting creative mode to be unable
to create signs with any command.
Creative servers should absolutely never enable this.
Non-creative servers, enable at your own risk!
The Craft Scheduler still uses the primary thread for task scheduling.
This results in the main thread still having to do work as part of the
dispatching of async tasks.
If plugins make use of lots of async tasks, such as particle emitters
that want to keep the logic off the main thread, the main thread still
receives quite a bit of load from processing all of these queued tasks.
Additionally, resizing and managing the pending entries for all of
these asynchronous tasks takes up time on the main thread too.
This commit replaces the implementation of the scheduler when working
with asynchronous tasks by forwarding calls to the new scheduler.
The Async Scheduler uses a single thread executor for "management" tasks.
The Management Thread is responsible for all adding and dispatching of
scheduled tasks.
The mainThreadHeartbeat will send a heartbeat task to the management thread
with the currentTick value, so that it can find which tasks to execute.
Scheduling an async task also dispatches a management task, ensuring
that any queue resizing operation occurs off of the main thread.
The async queue uses a completely separate PriorityQueue, ensuring that resize
operations are decoupled from the sync task queue.
Additionally, an optimization was made: if a plugin schedules
a single, non-repeating, no-delay task, we immediately dispatch it
to the executor pool instead of scheduling it. This avoids an unnecessary
round trip through the queue and reduces the growth of the
queue if a plugin schedules lots of asynchronous tasks.
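From a plugin's perspective the scheduler API is unchanged; a one-shot, zero-delay async task like the sketch below is the case that the optimization above sends straight to the executor pool:

    import org.bukkit.Bukkit;
    import org.bukkit.plugin.Plugin;

    public class AsyncTaskExample {
        public static void emitParticlesOffMain(Plugin plugin) {
            // Single, non-repeating, no-delay async task: dispatched directly
            // to the executor pool rather than queued.
            Bukkit.getScheduler().runTaskAsynchronously(plugin, () -> {
                // heavy work kept off the main thread
            });
        }
    }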
This seems completely pointless, as packet dispatch uses .writeAndFlush.
Things seem to work fine without implicit flushing, but in case issues arise,
provide a system property to re-enable it using improved logic that does the
flushing on the Netty event loop, so it won't do the flush on the main thread.
Re-enable flushing by passing -Dpaper.implicit-flush=true
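A minimal sketch of how such a JVM property toggle is typically read; the actual wiring in Paper may differ:

    public class ImplicitFlushToggle {
        // True only when the server is started with -Dpaper.implicit-flush=true
        static final boolean IMPLICIT_FLUSH = Boolean.getBoolean("paper.implicit-flush");
    }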