If the server tick length is high, then the amount of time
available to process chunk tasks between ticks is low. As a
result, chunk loading and generation may appear to slow down.
To ensure that chunk tasks are always processed, we add logic to
execute chunk tasks during tile entity tick, entity tick, chunk
random ticking, and scheduled block/fluid ticking. The mid-tick task
execution is timed so that it is not prioritised over the server
tick.
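A minimal sketch of what time-budgeted mid-tick execution could look like; the constants, the class name, and pollChunkTask() are illustrative assumptions, not the actual patch:

    // Hypothetical sketch of time-budgeted mid-tick chunk task execution.
    final class MidTickChunkTasks {

        private static final long MAX_MID_TICK_TIME_NS = 1_000_000L;    // illustrative 1ms budget per call site
        private static final long TASK_CHECK_INTERVAL_NS = 25_000_000L; // illustrative re-entry throttle

        private long lastMidTickExecute;

        // Called from tile entity tick, entity tick, chunk random ticking
        // and scheduled block/fluid ticking.
        void executeMidTickTasks() {
            final long start = System.nanoTime();
            if (start - this.lastMidTickExecute < TASK_CHECK_INTERVAL_NS) {
                return; // keep the server tick prioritised over chunk tasks
            }
            try {
                do {
                    if (!this.pollChunkTask()) {
                        return; // no chunk tasks pending
                    }
                } while (System.nanoTime() - start < MAX_MID_TICK_TIME_NS);
            } finally {
                this.lastMidTickExecute = System.nanoTime();
            }
        }

        private boolean pollChunkTask() {
            // placeholder: run one queued chunk load/generation task if available
            return false;
        }
    }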
While the null UUID is almost certainly an error, the old
implementation did not NPE as it used a plain HashMap for lookup
by UUID, whereas we use a ConcurrentHashMap which will NPE on
null keys.
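For illustration only, a guard of the kind this implies; the map field and getEntity method are assumed names, but the null-key behaviour of the two map types is as documented:

    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    final class EntityLookup {

        private final ConcurrentHashMap<UUID, Object> entityByUuid = new ConcurrentHashMap<>();

        Object getEntity(final UUID uuid) {
            // ConcurrentHashMap throws NullPointerException for null keys,
            // unlike HashMap, so guard the (almost certainly erroneous) null lookup.
            return uuid == null ? null : this.entityByUuid.get(uuid);
        }
    }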
The unload queue placed chunks into the section keyed by the raw
chunk coordinate, when it should have applied the unload shift to
the coordinate first.
Additionally, change the default region shift to the ticket
propagator shift as there is no benefit to using a low region
shift since no regionizing is occurring. This makes the unload
queue shift 6, which should reduce the number of sections to deal
with while processing unloads.
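A sketch, under assumed names, of how the unload shift turns a chunk coordinate into a section key; the point is that the shifted section coordinates are packed, not the raw chunk coordinates:

    final class UnloadQueueSections {

        // region shift tied to the ticket propagator shift, giving an
        // unload queue section shift of 6 (64x64 chunks per section)
        static final int UNLOAD_QUEUE_SHIFT = 6;

        static long sectionKey(final int chunkX, final int chunkZ) {
            final int sectionX = chunkX >> UNLOAD_QUEUE_SHIFT;
            final int sectionZ = chunkZ >> UNLOAD_QUEUE_SHIFT;
            // pack the shifted section coordinates, not the raw chunk coordinates
            return ((long) sectionZ << 32) | (sectionX & 0xFFFFFFFFL);
        }
    }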
Somehow, a chunk holder is present in the unload queue after
it has been unloaded. It is likely that this is a result of
adding the chunk holder to the unload queue while it is
unloading. However, that should not be possible.
To find out where it is being added, track the stacktrace of the
last addition to the unload queue and, on chunk holder removal,
check whether the holder is still present in the unload queue and
log that stacktrace if it is.
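A debug-only sketch of that tracking, with assumed names and types; the real queue insertion and removal logic is elided:

    final class UnloadQueueDebug {

        private Throwable lastUnloadQueueAdd;

        void addToUnloadQueue(final Object chunkHolder) {
            // remember where the last unload-queue insertion came from
            this.lastUnloadQueueAdd = new Throwable("Last unload queue addition");
            // ... actual queue insertion ...
        }

        void onChunkHolderRemove(final Object chunkHolder, final boolean stillInUnloadQueue) {
            if (stillInUnloadQueue) {
                // the holder should never still be queued for unload here; log the
                // recorded insertion stacktrace as the cause
                new IllegalStateException("Chunk holder removed while in unload queue",
                    this.lastUnloadQueueAdd).printStackTrace();
            }
        }
    }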
Remove unused utilities and replace the full chunk map with a
concurrentutil implementation.
Additionally, fix the addition/removal of chunks to/from the
full chunk map so that getChunkIfLoaded correctly returns a
non-null chunk when calling the load or unload events.
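A sketch of the intended ordering, using assumed names: the chunk must be in the full chunk map before the load event fires and must stay there until after the unload event fires, so getChunkIfLoaded() is non-null inside both callbacks:

    final class FullChunkMapOrdering {

        private final java.util.concurrent.ConcurrentHashMap<Long, Object> fullChunkMap =
            new java.util.concurrent.ConcurrentHashMap<>();

        Object getChunkIfLoaded(final long chunkKey) {
            return this.fullChunkMap.get(chunkKey);
        }

        void onFullChunkLoad(final long chunkKey, final Object chunk) {
            this.fullChunkMap.put(chunkKey, chunk); // add before firing the load event
            this.callChunkLoadEvent(chunk);
        }

        void onFullChunkUnload(final long chunkKey, final Object chunk) {
            this.callChunkUnloadEvent(chunk);       // event still sees the chunk as loaded
            this.fullChunkMap.remove(chunkKey, chunk);
        }

        private void callChunkLoadEvent(final Object chunk) {}
        private void callChunkUnloadEvent(final Object chunk) {}
    }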
The upstream implementation returns true for non-full chunks.
This fix is not ideal since the new chunk system does not have a
region file/chunk status patch; it may need to be revisited before
a non-experimental release.