This should now complete legacy serialization to avoid ever
changing the output content.
This removes the concept of "Default Color" from the method as
that entire concept was flawed and broke the intent of chat components.
Going to actually PR this patch to Spigot soon.
This now puts us back at a point where any data saved before Spigot
broke things will still save back exactly the same way as before,
but new component -> legacy conversion is now fixed to not insert undesirable
default colors (such as black) into the legacy string, and instead uses
the proper reset code.
This means you can now safely get the text from a book and
put it in chat or an entity display name without worrying about black
color codes or other undesired color codes leaking into the new
context where that color doesn't make sense.
There are several cases where the plugin classloader may be dumped to the logs;
however, this provides no indication of the owner of the classloader, making
these messages effectively useless. This patch rectifies that.
This brings chat component serialization to 100% accuracy, so
that for any text input in the legacy format, converting to components and
then back to legacy will produce identical results.
If the user explicitly sets a color as a prefix to a string, it is retained,
even if that color matches the default.
This also improves handling of the empty string wrappers Bukkit creates.
A unit test has been added to verify this behavior.
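As a rough illustration of the property that test verifies (not the actual Paper test), a round-trip check could look like the sketch below. The toComponents/toLegacy functions are hypothetical stand-ins for the CraftBukkit conversion helpers this patch touches; their real names and signatures vary by version.

    import java.util.function.Function;

    final class RoundTripCheck {
        /** Asserts the lossless round-trip property the new unit test verifies.
         *  toComponents/toLegacy are hypothetical stand-ins; plug in the real
         *  CraftChatMessage-style converters for an actual test. */
        static <C> void assertLossless(String legacy,
                                       Function<String, C> toComponents,
                                       Function<C, String> toLegacy) {
            String roundTripped = toLegacy.apply(toComponents.apply(legacy));
            if (!legacy.equals(roundTripped)) {
                throw new AssertionError("expected <" + legacy + "> but was <" + roundTripped + ">");
            }
        }
    }

    // Example of the case that used to break: a leading black code (\u00A70) must
    // survive conversion to components and back, instead of being stripped as a
    // "default" color.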
This patch fixes the serialization of display names, item lores and
other things which use strings with color codes. The old implementation
deleted the color codes at the beginning of the resulting string if they
matched the default color passed to the conversion function. This
resulted in items having a black display name losing the black color
code in the beginning of the text when the item was serialized (e.g.
saving an ItemStack in a Yaml config).
Spigot has now made the issue worse and expanded the scope to more places.
1) Improve the frustum check to look more at near chunks and frontal chunks only, instead of 1 large single lookup.
2) Delay adding 33 tickets based on view distance and lower their task priority. This will roll out the spiral more slowly.
3) Chunks behind the player have additional delay on loading, favoring chunks in front of the player.
This has the benefit that, when traveling fast, some of the chunks will be cancelled / not loaded.
This should reduce pressure on chunk loading, as well as reduce loading/unloading unnecessary chunks while moving.
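A rough sketch of that kind of weighting (the names and exact math here are illustrative assumptions, not the Paper implementation): chunks closer to the player and in front of the look direction get a better priority, chunks behind get an extra delay.

    // Illustrative only: weight chunk load priority by distance and by whether the
    // chunk sits in front of or behind the player's facing direction.
    final class ChunkLoadPriority {
        static int priority(int chunkX, int chunkZ, double playerX, double playerZ,
                            double lookX, double lookZ) {
            double dx = (chunkX << 4) + 8 - playerX;
            double dz = (chunkZ << 4) + 8 - playerZ;
            double distSq = dx * dx + dz * dz;
            // dot product with the look vector: negative means the chunk is behind the player
            double dot = dx * lookX + dz * lookZ;
            int base = (int) Math.sqrt(distSq) >> 4; // roughly distance in chunks
            return dot < 0 ? base + 2 : base;        // behind-the-player chunks wait longer
        }
    }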
When a chunk is loaded from disk that has already been generated,
the server has to promote the chunk through the system to reach
its current desired status level.
This results in every single status transition going from the main thread
to the world gen threads, only to discover it has no work it actually
needs to do... and then it returns back to main.
This back and forth costs a lot of time and can really delay chunk loads
when the server is under high TPS, due to there being a lot of time
between chunk loads, and it also hogs the chunk threads, keeping them
from doing actual generation and light work.
Additionally, the whole task system uses a lot of CPU on the server threads anyways.
So by optimizing status transitions for statuses that are already complete,
we can run them to the desired level while on the main thread (where it has
to happen anyways) instead of ever jumping to a world gen thread.
This reduces chunk loading down to the following
scenario / path:
1) MAIN: Chunk Requested, Load Request sent to ChunkTaskManager / IO Queue
2) IO: Once position in queue comes, submit read IO data and schedule to chunk task thread
3) CHUNK: Once IO is loaded and position in queue comes, deserialize the chunk data, process conversions, submit to main queue
4) MAIN: next Chunk Task process (Mid Tick or End Of Tick), load chunk data into world (POI, main thread tasks)
5) MAIN: process status transitions all the way to LIGHT, light schedules Threaded task
6) SERVER: Light tasks register light enablement for chunk and any lighting needing to be done
7) MAIN: Task returns to main, finish processing to FULL/TICKING status
Previously this would have hopped to SERVER around 12+ extra times.
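A minimal sketch of the idea (the types below are simplified stand-ins, not the real ChunkStatus or scheduler classes): when the chunk already holds the target status from disk, perform the "transition" inline on the main thread instead of bouncing to a world gen worker.

    // Simplified stand-ins for illustration; the real chunk system is far more involved.
    enum Status { EMPTY, STRUCTURE, NOISE, SURFACE, FEATURES, LIGHT, FULL }

    interface LoadedChunk {
        Status savedStatus();
        void markStatus(Status status);
        void generateUpTo(Status target);
    }

    final class StatusUpgrader {
        /** Runs on the main thread. Only schedules to the world gen pool when real work exists. */
        static void upgrade(LoadedChunk chunk, Status target,
                            java.util.concurrent.Executor worldGenPool,
                            Runnable mainThreadCompletion) {
            if (chunk.savedStatus().ordinal() >= target.ordinal()) {
                // Already generated to (at least) the target status on disk:
                // mark the levels complete right here instead of hopping threads.
                chunk.markStatus(target);
                mainThreadCompletion.run();
            } else {
                worldGenPool.execute(() -> chunk.generateUpTo(target)); // real generation work
            }
        }
    }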
Mojang has flaws in their logic about chunks being concurrently
written to, so we constantly see crashes around multiple threads writing.
Additionally, Java has optimized synchronization so well that it is
in many cases faster than trying to manage read-write locks for low
contention situations.
And this is an extremely low contention situation.
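For low-contention access like this, a plain synchronized block tends to be both simpler and faster than a ReadWriteLock. A minimal JDK-only comparison of the two shapes (generic example, not the actual chunk map code):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    final class LowContentionMap<K, V> {
        private final Map<K, V> map = new HashMap<>();
        private final ReadWriteLock lock = new ReentrantReadWriteLock();

        // Simple and fast under low contention: an uncontended monitor is very cheap.
        synchronized V getSynchronized(K key) {
            return map.get(key);
        }
        synchronized void putSynchronized(K key, V value) {
            map.put(key, value);
        }

        // The read-write lock alternative carries bookkeeping overhead that rarely
        // pays off unless reads are long-running and heavily contended.
        V getWithReadLock(K key) {
            lock.readLock().lock();
            try { return map.get(key); } finally { lock.readLock().unlock(); }
        }
    }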
Fixes #3293, Fixes #2493
I'm hoping the other fix in 324 for the level map getting corrupted
fixes the real issue and this isn't needed anymore, but I suspect it is.
Will wait until more study can be done though.
Fixes #3469
We must check the level tracker as ticket levels add "virtual"
tickets to neighbors.
Also added neighbor tracking during generation to be extra safe.
Fixes #3465, Fixes #3451, Fixes #3459
Mojang implemented a cache like the one chunks have, but this cache
is accessed by multiple threads and is not thread safe at all.
So just remove it.
Fixes #3466
Also missed a pooled nibble release, so slid that in there too.
Upstream has released updates that appear to apply and compile correctly.
This update has not been tested by PaperMC and as with ANY update, please do your own testing
Bukkit Changes:
d6db2839 #497: Add Material#getCraftingRemainingItem() to get item left behind after crafting
1c2eaf96 SPIGOT-5748: Add instant effect potion break to the Effect enum
CraftBukkit Changes:
aae46f82 #665: Add test for Material#getCraftingRemainingItem()
cfeef75c SPIGOT-5749: ItemMeta serializing to YAML not saving black colour code
eb1b19d9 SPIGOT-5748: Add instant effect potion break to the Effect enum
This change reimplements the entire BehaviorFindPosition method to
get rid of all of the streams, and implement the logic in a more sane way.
We keep vanilla behavior 100% the same with this change, just written more
optimally, as we can abort iterating POIs as soon as we find a match...
One slight change is that Minecraft adds a random delay before a POI is
attempted again. I've increased the amount of that delay based on the distance
to said POI, so farther POIs will not be attempted as often.
Additionally, we spiral out, so we favor local POIs before we ever favor farther POIs.
We also try to pathfind to 1 POI at a time instead of collecting multiple POIs and tossing them
all to the pathfinder, so that once we get a match we can return before even looking at other
POIs.
This benefits us in that, ideally, a villager will consistently find the near POIs and
not even try to pathfind to the farther ones. Trying to pathfind to distant POIs is
what causes significant lag.
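A sketch of that iteration strategy with simplified stand-in types (the real POI records and pathfinder are Minecraft internals, and the predicates below are assumptions for illustration): spiral outward from the origin and return the first acceptable POI, rather than collecting every candidate first.

    import java.util.Optional;
    import java.util.function.Predicate;

    record BlockPos(int x, int y, int z) {}

    final class SpiralPoiSearch {
        /** Returns the first matching POI, scanning near positions before far ones. */
        static Optional<BlockPos> findNearest(BlockPos origin, int radius,
                                              Predicate<BlockPos> isWantedPoi,
                                              Predicate<BlockPos> canPathTo) {
            for (int r = 0; r <= radius; r++) {                  // expand ring by ring (spiral)
                for (int dx = -r; dx <= r; dx++) {
                    for (int dz = -r; dz <= r; dz++) {
                        if (Math.max(Math.abs(dx), Math.abs(dz)) != r) continue; // ring edge only
                        BlockPos pos = new BlockPos(origin.x() + dx, origin.y(), origin.z() + dz);
                        // Pathfind one candidate at a time; abort the whole search on first success.
                        if (isWantedPoi.test(pos) && canPathTo.test(pos)) {
                            return Optional.of(pos);
                        }
                    }
                }
            }
            return Optional.empty();
        }
    }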
Other improvements here are to stop spamming the POI manager with empty nullables.
Vanilla used them to represent whether it needed to load POI data off disk or not.
Well, we load POI data async on chunk load, so we already have it, and we surely do not ever
want to load POI data sync for unloaded chunks either!
So this massively reduces object count in the POI hashmaps, resulting in fewer hash collisions,
and also less memory use.
Additionally, unemployed villagers were using significant time due to a major inefficiency in
the code rebuilding data that is static on every single invocation, for every POI type...
So we cache that and only rebuild it if professions change, which should be never unless
a plugin manipulates and adds custom professions, which is handled by rebuilding.
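The caching could look roughly like the sketch below (names and structure are illustrative assumptions, not the actual Paper cache): compute the per-profession lookup once, and invalidate it only when the set of registered professions actually changes.

    import java.util.Map;
    import java.util.Set;
    import java.util.function.Function;

    final class ProfessionPoiCache<Profession, PoiType> {
        private volatile Set<Profession> cachedKeys = Set.of();
        private volatile Map<Profession, Set<PoiType>> cache = Map.of();

        Map<Profession, Set<PoiType>> get(Set<Profession> registered,
                Function<Set<Profession>, Map<Profession, Set<PoiType>>> rebuild) {
            // Rebuild only if a plugin added/removed professions since the last build.
            if (!registered.equals(cachedKeys)) {
                cache = rebuild.apply(registered);
                cachedKeys = Set.copyOf(registered);
            }
            return cache;
        }
    }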
Some plugins are doing really, really bad things to worlds, breaking the
ability to send sounds to some users.
So we create another reference to the player chunk map that plugins won't be breaking, and
print a stack trace at world creation if we ever get an unexpected world state, to identify
who is doing it!
If we encounter this illegal state, we fall back to the old method of sending sounds, so
sending sounds will still work, just less efficiently.
Spigot made structure start not load chunks, but forgot to null check
the result...
This likely never blew up before due to the chunk leak issue, but now
that leaky chunks are cleaned up, it was identified.
While the last attempt was mostly there, it still had some slight risk of unloading
before the chunk was fully finished being used.
So just going to bump the delay to 3 minutes to be safe. Better than
forever at least.
Was really hoping we could unload them as soon as they were done, to avoid
any memory prematurely promoting to the old generation, but guess we can't.
A chunk was loaded but not yet finished being used, and was unloaded too early.
This caused it to be reloaded again or caused crashes.
Now we also check, when a chunk pops out of the unload queue, that it
doesn't now have a ticket either.
Due to some complexity in Mojang's complicated chain of juggling
whether or not a chunk should be unloaded when the last ticket is
removed, many chunks are remaining around in the cache.
These chunks are never being targeted for unload because they are
vastly out of view distance range and have no reason to be looked at.
This is a huge issue for performance because we have to iterate these
chunks EVERY TICK... This is what's been leading to high SELF time in
Ticking Chunks timings/profiler results.
We will now detect these chunks in that iteration, and automatically
add them to the unload queue when a chunk is found without any tickets.
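In rough sketch form (the types below are simplified stand-ins, not the actual chunk map classes), the tick-time sweep looks like this:

    import java.util.Queue;

    interface ChunkHolderLike {
        boolean hasTickets();
        boolean isOutsideViewDistance();
    }

    final class StaleChunkSweeper {
        /** Called from the existing per-tick chunk iteration; the check itself is nearly free. */
        static void maybeQueueUnload(ChunkHolderLike holder, Queue<ChunkHolderLike> unloadQueue) {
            // Chunks with no tickets and far outside view distance will never be
            // looked at again by the normal unload path, so push them ourselves.
            if (!holder.hasTickets() && holder.isOutsideViewDistance()) {
                unloadQueue.add(holder);
            }
        }
    }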
Spigot inserted their Slack Activity Accountant in the wrong location,
resulting in a chunk being removed from the unload queue and inserted into
the unload map, but never calling the function to finish the removal...
This caused the chunk to become stuck in the unload map if ever hit, because
the unload map was meant to be a TEMPORARY location while it was saving.
Fix this by aborting iteration AFTER the current chunk has finished processing.
Also, improve how aggressive we are at unloading chunks, now targeting 10% per tick instead.
These saves are asynchronous so there should be less of a hit here.
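A minimal sketch of the corrected loop shape (illustrative, not the actual ChunkMap code): decide the per-tick budget up front, and only check whether to stop after the current chunk has fully completed its removal.

    import java.util.ArrayDeque;
    import java.util.function.Consumer;

    final class UnloadDrain {
        /** Drains roughly 10% of the pending unloads each tick, never stopping mid-chunk. */
        static <C> void drain(ArrayDeque<C> unloadQueue, Consumer<C> finishUnload) {
            int budget = Math.max(1, unloadQueue.size() / 10); // ~10% per tick
            while (budget-- > 0 && !unloadQueue.isEmpty()) {
                C chunk = unloadQueue.poll();
                finishUnload.accept(chunk); // always runs to completion before re-checking the budget
            }
        }
    }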
This is for 2 reasons:
1) Ensuring our log4j is mostly loaded at OUR version.
I've seen stack traces with line numbers that do not match our version. This means that some
plugin has shaded in log4j and their loaded version is mixing with ours...
So by at least trying to load a bunch of log4j classes before we load plugins, we can be
more sure mixed versions are not loading.
2) If the jar file is replaced while the server is running, class not found errors galore.
This will preload a bunch of classes commonly seen to error during shutdown due to this.
The goal here is to help let the server shut down as gracefully as possible. Some plugins will
still blow up here if they access a class that hadn't been loaded yet, but the goal is to at least
stop the shutdown process freezing as it does with JLine and log4j errors requiring an external kill.
Ideally you should not replace jars while the server is running, but it is something that happens in
development for testing.
Updated the test server to do a copy though, to avoid this happening in Paper development.
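The preloading itself is conceptually just eager Class.forName calls made early on; a trimmed-down sketch (the class names in the usage comment are examples only, not the exact list Paper preloads):

    final class ClassPreloader {
        /** Eagerly load classes so a swapped jar can't cause NoClassDefFoundError later. */
        static void preload(ClassLoader loader, String... classNames) {
            for (String name : classNames) {
                try {
                    Class.forName(name, true, loader);
                } catch (ClassNotFoundException | LinkageError e) {
                    // Best effort only; log and keep going.
                    System.err.println("Failed to preload " + name + ": " + e);
                }
            }
        }
    }

    // Example usage during early startup (illustrative class names):
    // ClassPreloader.preload(Thread.currentThread().getContextClassLoader(),
    //         "org.apache.logging.log4j.LogManager",
    //         "org.apache.logging.log4j.core.appender.AsyncAppender");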
Accidentally used the snapshot, as well as needed to exclude the
existing old 25 build of libnetty so that we load the newer build of
epoll instead.
Thanks to 56738 for the catch.
2 people had issues where some plugin is doing some really insane NMS hackery
that created invalid worlds, which caused some errors...
Really don't understand what in the world they did, but putting in a dumb guard that
shouldn't even be necessary, to just not send the sound effect rather than erroring.
While this method has async in its name, it's not actually meant
to be called asynchronously... It just means it will load the chunk
asynchronously without blocking main.
So fix this so that if a plugin calls it async, the request is forced back to the main thread.
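Conceptually this is the standard Bukkit pattern of bouncing work back onto the main thread. A hedged sketch using the public Paper/Bukkit API (the actual patch operates on internal chunk provider methods, and the wrapper class here is an assumption for illustration):

    import java.util.concurrent.CompletableFuture;
    import org.bukkit.Bukkit;
    import org.bukkit.Chunk;
    import org.bukkit.World;
    import org.bukkit.plugin.Plugin;

    final class ChunkLoadGuard {
        /** If called off-main, reschedule the request onto the main thread first. */
        static CompletableFuture<Chunk> getChunkAtAsyncSafely(Plugin plugin, World world, int x, int z) {
            if (!Bukkit.isPrimaryThread()) {
                CompletableFuture<Chunk> future = new CompletableFuture<>();
                Bukkit.getScheduler().runTask(plugin, () ->
                        world.getChunkAtAsync(x, z).whenComplete((chunk, err) -> {
                            if (err != null) future.completeExceptionally(err);
                            else future.complete(chunk);
                        }));
                return future;
            }
            return world.getChunkAtAsync(x, z);
        }
    }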
Minecraft's Netty version was severely out of date. There have been
numerous security fixes, bug fixes, and likely performance fixes
since the version Minecraft uses (4.1.25).
This fixes some known issues with "Closed Channel" spam.
Fixes #3388
Instead of using the entire world or player list, use the distance
maps to only iterate players who are even seeing the chunk the packet
is originating from.
This will drastically cut down on packet sending cost for worlds with
lots of players in them.
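Conceptually the change is: instead of looping every player in the world, loop only the already-maintained set of players that can see the source chunk. A simplified stand-in sketch (the real distance map is a specialized structure, not a plain Map):

    import java.util.List;
    import java.util.Map;
    import java.util.function.BiConsumer;

    final class NearbyBroadcast {
        /** playersSeeingChunk is maintained elsewhere by the distance-map bookkeeping. */
        static <Player, Packet> void broadcast(Map<Long, List<Player>> playersSeeingChunk,
                long chunkKey, Packet packet, BiConsumer<Player, Packet> send) {
            List<Player> players = playersSeeingChunk.get(chunkKey);
            if (players == null) return;        // nobody can see this chunk, nothing to send
            for (Player player : players) {     // far smaller than the whole world's player list
                send.accept(player, packet);
            }
        }
    }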
Closes #3437
Any full status chunk that was requested for any status less than full
would hold onto its entire NBT tree and every variable in that function.
This was due to use of a lambda that persists on the Chunk object
until that chunk reaches FULL status.
With the introduction of no tick, we greatly increased the number of non
full chunks, so this was really starting to hurt.
We further improve it by making a copy of the NBT tag with only the memory
it needs, so that we don't have to hold a reference to the entire compound.
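The underlying Java behavior is simply that a lambda captures (and keeps alive) every local it references. A small JDK-only illustration of the leak shape and the copy-what-you-need fix (generic example, not the actual chunk code):

    import java.util.function.Supplier;

    final class LambdaCaptureExample {
        // Leaky shape: the returned lambda pins the entire (potentially huge) byte[]
        // until the lambda itself becomes unreachable.
        static Supplier<Integer> leaky(byte[] wholeChunkNbt) {
            return () -> (int) wholeChunkNbt[0];
        }

        // Fixed shape: copy out only the small piece that is actually needed,
        // so the big buffer can be collected immediately.
        static Supplier<Integer> tight(byte[] wholeChunkNbt) {
            final int firstByte = wholeChunkNbt[0];
            return () -> firstByte;
        }
    }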
This should help greatly (as long as this change works...) in
understanding an exception when it no longer gets truncated with
"... and 14 more" at a vital point of the stack trace.
Fixed issues where urgent and prioritized chunks didn't actually
always get their priority boosted correctly....
Properly deprioritize non-ticking chunks.
Limit recursion on watchdog prints to stop flooding as much.
Remove neighbor priorities from watchdog output to reduce the amount of information printed.
Reduce synchronization duration so that the watchdog won't block main should main actually wake up.
Probably fixed a deadlock risk in watchdog printing also that was leading to crashes.
Fixed chunk holder enqueues not being processed correctly.
Added async catchers in some locations that should not be run async.
Fixed an upstream bug where VITAL callbacks that must run on main could actually
sometimes run on the server thread pool, causing a lot of these nasty bugs we've seen lately!
This build will provide massive improvements to stability, as well as even faster
sync chunk load/gens now that priority is correctly set.
Fixes #3435
The nibble pooling for NBT Tags was 'semi' leaked from loaded chunks,
as we store the NBT Tag of Tile Entities in a Chunk, but don't process
them and remove them until the chunk reaches Entity Ticking status...
This caused some phantom references to persist, causing high memory use
for these chunks.
So I just got rid of pooling from NBT deserialization and we'll have to
take the hit on memory allocations there, because there are too many cascading concerns
with anyone using NBT Tag Byte Arrays.
Fixes #3431
I believe this brings us back to stable. A lot was learned about the
complexity of juggling priorities.
We were essentially promoting more chunks to urgent than really
needed to be urgent.
So this commit adds a lot more logic to juggle neighbor priorities
and demote their priority once they meet the requirements needed of
them.
This greatly improves the performance of "urgent" chunks.
Fixes #3410, Fixes #3426, Fixes #3425, Fixes #3416
CB only protected against > 64, but there's no reason an entity should ever
be more than 2x its width or 1x its height, as the BB is supposed to represent
the entity size.
The BB is divided by 2 to calculate position.
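Something like the following sanity clamp (illustrative, simplified AABB math; not the actual NMS bounding box class):

    final class BoundingBoxSanity {
        /** Clamp a bounding box so it never exceeds 2x the entity's width or 1x its height. */
        static double[] clamp(double[] box /* minX, minY, minZ, maxX, maxY, maxZ */,
                              double entityWidth, double entityHeight) {
            double maxHoriz = entityWidth * 2.0;
            double maxVert = entityHeight;
            box[3] = Math.min(box[3], box[0] + maxHoriz);
            box[4] = Math.min(box[4], box[1] + maxVert);
            box[5] = Math.min(box[5], box[2] + maxHoriz);
            return box;
        }
    }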
Blow up if a plugin tries to mutate visibleChunks directly and prevent them
from doing so.
Also provide a safe get call if any plugins directly call get on it so
that it uses the special logic to check pending.
Also restores ABI for the visibleChunks field back to what it was too.
Additionally, remove the stack trace from Timings Stack Corruption for any
error thrown on Minecraft Timings, and tell them to get the error ABOVE this
instead, so people stop giving us useless error reports.
Also fixes a memory leak when the source map downsized but the dest map didn't,
which resulted in lingering references to old chunk holders.
Fixes #3414
A synchronized ArrayDeque still ends up being way faster.
Kinda shocked how much that strategy was using; it wasn't really
that complicated... but oh well, this is even simpler, and we're not
seeing blocked threads show up at all in profiling because
the lock is held for such a short amount of time,
and also because most uses are on either the server thread pool or the chunk load pool.
Also optimize the pooling of nibbles to not register Cleaners
for Light Engine directed usages, as we know we are properly
controlling cleanup there, so we don't need to rely on GC.
This will return them to the pool manually, saving a lot of Cleaners.
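The pool shape being described is roughly this (JDK-only sketch; the real pool tracks nibble arrays and has more bookkeeping): a synchronized ArrayDeque with explicit release calls instead of Cleaner/GC-driven recycling.

    import java.util.ArrayDeque;
    import java.util.function.Supplier;

    final class SimplePool<T> {
        private final ArrayDeque<T> free = new ArrayDeque<>();
        private final Supplier<T> factory;

        SimplePool(Supplier<T> factory) {
            this.factory = factory;
        }

        T acquire() {
            synchronized (free) {              // lock held only for the poll itself
                T pooled = free.poll();
                if (pooled != null) return pooled;
            }
            return factory.get();
        }

        /** Light-engine-directed usages call this explicitly instead of relying on a Cleaner. */
        void release(T value) {
            synchronized (free) {
                free.push(value);
            }
        }
    }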
Closes #3417
Fixed a few bugs, and made numerous improvements.
Fixed an issue where a sync chunk load could have its ticket removed and the
priority ticket could expire...
Still not perfect there, but better than before.
Also fixed a few other misc issues, such as watchdog CPU usage, and the chunk queue update
having a risk of double enqueue due to it no longer being a set.
Added much more information about chunk state to watchdog prints.
I see some more room for improvement even, but this is much better than before.
Fixes #3407, Fixes #3411, Fixes #3395, Fixes #3389
Dynmap accessed the raw bytes because it utilized NBT locally, but the
NBTTagCompound was garbage collected while the bytes were still being used.
This will return getBytes() back to being safe, and adds a new PoolSafe method
that will prevent the additional allocations for general chunk loading.
Also fixed applyPatches for people with paths in their working directory
if they have mcdev sources built.