diff --git a/patches/unapplied/server/0342-Increase-Light-Queue-Size.patch b/patches/removed/1.20/0342-Increase-Light-Queue-Size.patch
similarity index 100%
rename from patches/unapplied/server/0342-Increase-Light-Queue-Size.patch
rename to patches/removed/1.20/0342-Increase-Light-Queue-Size.patch
diff --git a/patches/unapplied/server/0433-Stop-copy-on-write-operations-for-updating-light-dat.patch b/patches/removed/1.20/0433-Stop-copy-on-write-operations-for-updating-light-dat.patch
similarity index 100%
rename from patches/unapplied/server/0433-Stop-copy-on-write-operations-for-updating-light-dat.patch
rename to patches/removed/1.20/0433-Stop-copy-on-write-operations-for-updating-light-dat.patch
diff --git a/patches/server/0019-Rewrite-chunk-system.patch b/patches/server/0019-Rewrite-chunk-system.patch
index 447c231cb7..b66bceb866 100644
--- a/patches/server/0019-Rewrite-chunk-system.patch
+++ b/patches/server/0019-Rewrite-chunk-system.patch
@@ -76,6 +76,78 @@ or for checking if the file exists can be heavy in
when pushing chunk generation extremely hard - as each chunk gen
request may effectively go through to the I/O thread.
+Use coordinate-based locking to increase chunk system parallelism
+
+A significant overhead in Folia comes from the chunk system's
+locks: the ticket lock and the scheduling lock. The public
+test server, which had ~330 players, had significant performance
+problems with these locks: ~80% of the time spent ticking
+was _waiting_ for the locks to free. Given that it used
+around 15 cores total at peak, this is a complete and utter loss
+of potential.
+
+To address this issue, I have replaced the ticket lock and scheduling
+lock with two ReentrantAreaLocks. The ReentrantAreaLock takes a
+shift, which is used internally to group positions into sections.
+This grouping is necessary, as the possible radius of the area that
+needs to be acquired for any given lock usage is up to 64. As such,
+the shift is critical to reduce the number of areas required to lock
+for any lock operation. Currently, it is set to a shift of 6, which
+is identical to the ticket level propagation shift (and, it must be
+at least the ticket level propagation shift AND the region shift).
+
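+For example, a caller that needs exclusive access to an area of chunks
+acquires and releases the lock like so (hypothetical caller code, not
+part of this patch; only lock/unlock are the real API):
+
+  // acquire all sections intersecting the radius-8 area around (chunkX, chunkZ)
+  final ReentrantAreaLock.Node node = this.ticketLock.lock(chunkX, chunkZ, 8);
+  try {
+      // read/modify the chunk system state covered by the area
+  } finally {
+      this.ticketLock.unlock(node);
+  }
+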
+The chunk system locking changes required a complete rewrite of the
+chunk system tick, chunk system unload, and chunk system ticket level
+propagation - as all of the previous logic only works with a single
+global lock.
+
+This does introduce two other section shifts: the lock shift, and the
+ticket shift. The lock shift is simply what shift the area locks use,
+and the ticket shift represents the size of the ticket sections.
+Currently, these values are just set to the region shift for simplicity.
+However, they are not arbitrary: the lock shift must be at least as large
+as the ticket shift and at least as large as the region shift, and the
+ticket shift must be >= ceil(log2(max ticket level source)) - for example,
+a max ticket level source of 64 requires a ticket shift of at least 6.
+
+The chunk system's ticket propagator is now global state, instead of
+region state. This cleans up the logic for ticket levels significantly,
+and removes usage of the region lock in this area, but it also means
+that the addition of a ticket no longer creates a region. To alleviate
+the side effects of this change, the global tick thread now processes
+ticket level updates for each world every tick to guarantee eventual
+ticket level processing. The chunk system also provides a hook to
+process ticket level changes in a given _section_, so that the
+region queue can guarantee, after adding its reference counter,
+that the region section is created/exists/won't be destroyed.
+
+The ticket propagator operates by updating the sources in a single ticket
+section, and propagating the updates to its 1-radius neighbours. This
+allows the ticket updates to occur in parallel or selectively (see above).
+Currently, the process ticket level update function operates by
+polling from a concurrent queue of sections to update and simply
+invoking the single section update logic. This allows the function
+to operate completely in parallel, provided the queue is correctly ordered.
+Additionally, this limits the area used in the ticket/scheduling lock
+when processing updates, which should massively increase parallelism compared
+to before.
+
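+A sketch of that polling loop (field and method names here are
+illustrative, not the exact names in this patch):
+
+  // each iteration locks and updates only a single ticket section
+  Coordinate section;
+  while ((section = this.sectionUpdateQueue.poll()) != null) {
+      this.updateTicketLevelsForSection(section);
+  }
+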
+The chunk system ticket addition for expirable ticket types has been modified
+to no longer track exact tick deadlines, as this relies on what region the
+ticket is in. Instead, the chunk system tracks a map of
+lock section -> (chunk coordinate -> expire ticket count) and every ticket
+has been changed to have a removeDelay count that is decremented each tick.
+Each region searches its own sections to find tickets to try to expire.
+
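+Illustratively, the tracking structure looks like the following (field
+and value types are assumptions for illustration):
+
+  // lock section coordinate -> (chunk key -> count of expirable tickets)
+  private final ConcurrentHashMap<Coordinate, Long2IntOpenHashMap> expireCounts;
+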
+Chunk system unloading has been modified to track unloads by lock section.
+The ordering is determined by which section a chunk resides in.
+The unload process now removes from unload sections and processes
+the full unload stages (1, 2, 3) before moving to the next section, if possible.
+This allows the unload logic to only hold one lock section at a time for
+each lock, which is a massive parallelism increase.
+
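+A rough sketch of the per-section unload pass (names and helper methods
+are illustrative):
+
+  for (final Coordinate section : this.unloadSections) {
+      // a single-position lock acquires exactly the one section containing it
+      final ReentrantAreaLock.Node node = this.schedulingLock.lock(
+          section.x << LOCK_SHIFT, section.z << LOCK_SHIFT);
+      try {
+          // run unload stages 1, 2, 3 for the chunks queued in this section
+          this.processUnloadsForSection(section);
+      } finally {
+          this.schedulingLock.unlock(node);
+      }
+  }
+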
+In stress testing, these changes lowered the locking overhead to only 5%
+from ~70%, which completely fixes the original problem as described.
+
== AT ==
public net.minecraft.server.level.ChunkMap setViewDistance(I)V
public net.minecraft.server.level.ChunkHolder pos
@@ -83,6 +155,630 @@ public net.minecraft.server.level.ChunkMap overworldDataStorage
public-f net.minecraft.world.level.chunk.storage.RegionFileStorage
public net.minecraft.server.level.ChunkMap getPoiManager()Lnet/minecraft/world/entity/ai/village/poi/PoiManager;
+diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..4fd9a0cd8f1e6ae1a97e963dc7731a80bc6fac5b
+--- /dev/null
++++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
+@@ -0,0 +1,395 @@
++package ca.spottedleaf.concurrentutil.lock;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import it.unimi.dsi.fastutil.HashCommon;
++import java.util.ArrayList;
++import java.util.List;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.locks.LockSupport;
++
++public final class ReentrantAreaLock {
++
++ public final int coordinateShift;
++
++ // aggressive load factor to reduce contention
++ private final ConcurrentHashMap<Coordinate, Node> nodes = new ConcurrentHashMap<>(128, 0.2f);
++
++ public ReentrantAreaLock(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ public boolean isHeldByCurrentThread(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final Node node = this.nodes.get(coordinate);
++
++ return node != null && node.thread == currThread;
++ }
++
++ public boolean isHeldByCurrentThread(final int centerX, final int centerZ, final int radius) {
++ return this.isHeldByCurrentThread(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public boolean isHeldByCurrentThread(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node node = this.nodes.get(coordinate);
++
++ if (node == null || node.thread != currThread) {
++ return false;
++ }
++ }
++ }
++
++ return true;
++ }
++
++ public Node tryLock(final int x, final int z) {
++ return this.tryLock(x, z, x, z);
++ }
++
++ public Node tryLock(final int centerX, final int centerZ, final int radius) {
++ return this.tryLock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node tryLock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ final List<Coordinate> areaAffected = new ArrayList<>();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ boolean failed = false;
++
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ failed = true;
++ break;
++ }
++ }
++ }
++
++ if (!failed) {
++ return ret;
++ }
++
++ // failed, undo logic
++ if (!areaAffected.isEmpty()) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate key = areaAffected.get(i);
++
++ if (this.nodes.remove(key) != ret) {
++ throw new IllegalStateException();
++ }
++ }
++
++ areaAffected.clear();
++
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ return null;
++ }
++
++ public Node lock(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final List<Coordinate> areaAffected = new ArrayList<>(1);
++
++ final Node ret = new Node(this, areaAffected, currThread);
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++
++ for (long failures = 0L;;) {
++ final Node park;
++
++ // try to fast acquire area
++ {
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ return ret;
++ } else if (prev.thread != currThread) {
++ park = prev;
++ } else {
++ // only one node we would want to acquire, and it's owned by this thread already
++ return ret;
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++ }
++ }
++
++ public Node lock(final int centerX, final int centerZ, final int radius) {
++ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
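++ // the whole requested area maps to one section; take the single-position fast path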
++ if (((fromSectionX ^ toSectionX) | (fromSectionZ ^ toSectionZ)) == 0) {
++ return this.lock(fromX, fromZ);
++ }
++
++ final List<Coordinate> areaAffected = new ArrayList<>();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ for (long failures = 0L;;) {
++ Node park = null;
++ boolean addedToArea = false;
++ boolean alreadyOwned = false;
++ boolean allOwned = true;
++
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ addedToArea = true;
++ allOwned = false;
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ park = prev;
++ alreadyOwned = true;
++ break;
++ }
++ }
++ }
++
++ if (park == null) {
++ if (alreadyOwned && !allOwned) {
++ throw new IllegalStateException("Improper lock usage: Should never acquire intersecting areas");
++ }
++ return ret;
++ }
++
++ // failed, undo logic
++ if (addedToArea) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate key = areaAffected.get(i);
++
++ if (this.nodes.remove(key) != ret) {
++ throw new IllegalStateException();
++ }
++ }
++
++ areaAffected.clear();
++
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park(park);
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++
++ if (addedToArea) {
++ // try again, so we need to allow adds so that other threads can properly block on us
++ ret.allowAdds();
++ }
++ }
++ }
++
++ public void unlock(final Node node) {
++ if (node.lock != this) {
++ throw new IllegalStateException("Unlock target lock mismatch");
++ }
++
++ final List<Coordinate> areaAffected = node.areaAffected;
++
++ if (areaAffected.isEmpty()) {
++ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
++ return;
++ }
++
++ // remove from node map; allowing other threads to lock
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final Coordinate coordinate = areaAffected.get(i);
++ if (this.nodes.remove(coordinate) != node) {
++ throw new IllegalStateException();
++ }
++ }
++
++ Thread unpark;
++ while ((unpark = node.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ public static final class Node extends MultiThreadedQueue<Thread> {
++
++ private final ReentrantAreaLock lock;
++ private final List<Coordinate> areaAffected;
++ private final Thread thread;
++ //private final Throwable WHO_CREATED_MY_ASS = new Throwable();
++
++ private Node(final ReentrantAreaLock lock, final List<Coordinate> areaAffected, final Thread thread) {
++ this.lock = lock;
++ this.areaAffected = areaAffected;
++ this.thread = thread;
++ }
++
++ @Override
++ public String toString() {
++ return "Node{" +
++ "areaAffected=" + this.areaAffected +
++ ", thread=" + this.thread +
++ '}';
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
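++ // packs (x, z) into a long: z in the high 32 bits, x in the low 32
++ // bits, e.g. key(3, -1) == 0xFFFFFFFF00000003L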
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
++ // is too large. So we should implement compareTo to help.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++
++ @Override
++ public String toString() {
++ return "[" + x(this.key) + "," + z(this.key) + "]";
++ }
++ }
++}
+diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..64b5803d002b2968841a5ddee987f98b72964e87
+--- /dev/null
++++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
+@@ -0,0 +1,217 @@
++package ca.spottedleaf.concurrentutil.lock;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import it.unimi.dsi.fastutil.longs.Long2ReferenceOpenHashMap;
++import it.unimi.dsi.fastutil.longs.LongArrayList;
++import java.util.concurrent.locks.LockSupport;
++
++// not concurrent, unlike ReentrantAreaLock
++// no incorrect lock usage detection (acquiring intersecting areas)
++// this class is nothing more than a performance reference for ReentrantAreaLock
++public final class SyncReentrantAreaLock {
++
++ private final int coordinateShift;
++
++ // aggressive load factor to reduce contention
++ private final Long2ReferenceOpenHashMap<Node> nodes = new Long2ReferenceOpenHashMap<>(128, 0.2f);
++
++ public SyncReentrantAreaLock(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ private static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public Node lock(final int x, final int z) {
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int sectionX = x >> shift;
++ final int sectionZ = z >> shift;
++
++ final LongArrayList areaAffected = new LongArrayList();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ final long coordinate = key(sectionX, sectionZ);
++
++ for (long failures = 0L;;) {
++ final Node park;
++
++ synchronized (this) {
++ // try to fast acquire area
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ areaAffected.add(coordinate);
++ return ret;
++ } else if (prev.thread != currThread) {
++ park = prev;
++ } else {
++ // only one node we would want to acquire, and it's owned by this thread already
++ return ret;
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++ }
++ }
++
++ public Node lock(final int centerX, final int centerZ, final int radius) {
++ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
++ }
++
++ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
++ if (fromX > toX || fromZ > toZ) {
++ throw new IllegalArgumentException();
++ }
++
++ final Thread currThread = Thread.currentThread();
++ final int shift = this.coordinateShift;
++ final int fromSectionX = fromX >> shift;
++ final int fromSectionZ = fromZ >> shift;
++ final int toSectionX = toX >> shift;
++ final int toSectionZ = toZ >> shift;
++
++ final LongArrayList areaAffected = new LongArrayList();
++
++ final Node ret = new Node(this, areaAffected, currThread);
++
++ for (long failures = 0L;;) {
++ Node park = null;
++ boolean addedToArea = false;
++
++ synchronized (this) {
++ // try to fast acquire area
++ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
++ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
++ final long coordinate = key(currX, currZ);
++
++ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
++
++ if (prev == null) {
++ addedToArea = true;
++ areaAffected.add(coordinate);
++ continue;
++ }
++
++ if (prev.thread != currThread) {
++ park = prev;
++ break;
++ }
++ }
++ }
++
++ if (park == null) {
++ return ret;
++ }
++
++ // failed, undo logic
++ if (!areaAffected.isEmpty()) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final long key = areaAffected.getLong(i);
++
++ if (!this.nodes.remove(key, ret)) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++ }
++
++ if (addedToArea) {
++ areaAffected.clear();
++ // since we inserted, we need to drain waiters
++ Thread unpark;
++ while ((unpark = ret.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ ++failures;
++
++ if (failures > 128L && park.add(currThread)) {
++ LockSupport.park();
++ } else {
++ // high contention, spin wait
++ if (failures < 128L) {
++ for (long i = 0; i < failures; ++i) {
++ Thread.onSpinWait();
++ }
++ failures = failures << 1;
++ } else if (failures < 1_200L) {
++ LockSupport.parkNanos(1_000L);
++ failures = failures + 1L;
++ } else { // scale 0.1ms (100us) per failure
++ Thread.yield();
++ LockSupport.parkNanos(100_000L * failures);
++ failures = failures + 1L;
++ }
++ }
++
++ if (addedToArea) {
++ // try again, so we need to allow adds so that other threads can properly block on us
++ ret.allowAdds();
++ }
++ }
++ }
++
++ public void unlock(final Node node) {
++ if (node.lock != this) {
++ throw new IllegalStateException("Unlock target lock mismatch");
++ }
++
++ final LongArrayList areaAffected = node.areaAffected;
++
++ if (areaAffected.isEmpty()) {
++ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
++ return;
++ }
++
++ // remove from node map; allowing other threads to lock
++ synchronized (this) {
++ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
++ final long coordinate = areaAffected.getLong(i);
++ if (!this.nodes.remove(coordinate, node)) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++
++ Thread unpark;
++ while ((unpark = node.pollOrBlockAdds()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
++ public static final class Node extends MultiThreadedQueue<Thread> {
++
++ private final SyncReentrantAreaLock lock;
++ private final LongArrayList areaAffected;
++ private final Thread thread;
++
++ private Node(final SyncReentrantAreaLock lock, final LongArrayList areaAffected, final Thread thread) {
++ this.lock = lock;
++ this.areaAffected = areaAffected;
++ this.thread = thread;
++ }
++ }
++}
diff --git a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
index 146c78a333e47cb4d8aa97700e19a12ca176ce76..691239e65b0870ceb0d071b57793cff9b2593f62 100644
--- a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
@@ -1380,1125 +2076,6 @@ index 0000000000000000000000000000000000000000..99f49b5625cf51d6c97640553cf5c420
+ return ret;
+ }
+}
-diff --git a/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java b/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..562630db2cf5f923bf5b611b828a365e6d60fefb
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java
-@@ -0,0 +1,1113 @@
-+package io.papermc.paper.chunk;
-+
-+import com.destroystokyo.paper.util.misc.PlayerAreaMap;
-+import com.destroystokyo.paper.util.misc.PooledLinkedHashSets;
-+import io.papermc.paper.configuration.GlobalConfiguration;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.IntervalledCounter;
-+import io.papermc.paper.util.TickThread;
-+import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
-+import it.unimi.dsi.fastutil.objects.Reference2IntOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.Reference2ObjectLinkedOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
-+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheCenterPacket;
-+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheRadiusPacket;
-+import net.minecraft.network.protocol.game.ClientboundSetSimulationDistancePacket;
-+import io.papermc.paper.util.MCUtil;
-+import net.minecraft.server.MinecraftServer;
-+import net.minecraft.server.level.*;
-+import net.minecraft.util.Mth;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.LevelChunk;
-+import org.apache.commons.lang3.mutable.MutableObject;
-+import org.bukkit.craftbukkit.entity.CraftPlayer;
-+import org.bukkit.entity.Player;
-+import java.util.ArrayDeque;
-+import java.util.ArrayList;
-+import java.util.List;
-+import java.util.TreeSet;
-+import java.util.concurrent.atomic.AtomicInteger;
-+
-+public final class PlayerChunkLoader {
-+
-+ public static final int MIN_VIEW_DISTANCE = 2;
-+ public static final int MAX_VIEW_DISTANCE = 32;
-+
-+ public static final int TICK_TICKET_LEVEL = 31;
-+ public static final int LOADED_TICKET_LEVEL = 33;
-+
-+ public static int getTickViewDistance(final Player player) {
-+ return getTickViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getTickViewDistance(final ServerPlayer player) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ public static int getLoadViewDistance(final Player player) {
-+ return getLoadViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getLoadViewDistance(final ServerPlayer player) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ public static int getSendViewDistance(final Player player) {
-+ return getSendViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getSendViewDistance(final ServerPlayer player) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ protected final ChunkMap chunkMap;
-+ protected final Reference2ObjectLinkedOpenHashMap<ServerPlayer, PlayerLoaderData> playerMap = new Reference2ObjectLinkedOpenHashMap<>(512, 0.7f);
-+ protected final ReferenceLinkedOpenHashSet<PlayerLoaderData> chunkSendQueue = new ReferenceLinkedOpenHashSet<>(512, 0.7f);
-+
-+ protected final TreeSet<PlayerLoaderData> chunkLoadQueue = new TreeSet<>((final PlayerLoaderData p1, final PlayerLoaderData p2) -> {
-+ if (p1 == p2) {
-+ return 0;
-+ }
-+
-+ final ChunkPriorityHolder holder1 = p1.loadQueue.peekFirst();
-+ final ChunkPriorityHolder holder2 = p2.loadQueue.peekFirst();
-+
-+ final int priorityCompare = Double.compare(holder1 == null ? Double.MAX_VALUE : holder1.priority, holder2 == null ? Double.MAX_VALUE : holder2.priority);
-+
-+ final int lastLoadTimeCompare = Long.compare(p1.lastChunkLoad - p2.lastChunkLoad, 0);
-+
-+ if ((holder1 == null || holder2 == null || lastLoadTimeCompare == 0 || holder1.priority < 0.0 || holder2.priority < 0.0) && priorityCompare != 0) {
-+ return priorityCompare;
-+ }
-+
-+ if (lastLoadTimeCompare != 0) {
-+ return lastLoadTimeCompare;
-+ }
-+
-+ final int idCompare = Integer.compare(p1.player.getId(), p2.player.getId());
-+
-+ if (idCompare != 0) {
-+ return idCompare;
-+ }
-+
-+ // last resort
-+ return Integer.compare(System.identityHashCode(p1), System.identityHashCode(p2));
-+ });
-+
-+ protected final TreeSet<PlayerLoaderData> chunkSendWaitQueue = new TreeSet<>((final PlayerLoaderData p1, final PlayerLoaderData p2) -> {
-+ if (p1 == p2) {
-+ return 0;
-+ }
-+
-+ final int timeCompare = Long.compare(p1.nextChunkSendTarget - p2.nextChunkSendTarget, 0);
-+ if (timeCompare != 0) {
-+ return timeCompare;
-+ }
-+
-+ final int idCompare = Integer.compare(p1.player.getId(), p2.player.getId());
-+
-+ if (idCompare != 0) {
-+ return idCompare;
-+ }
-+
-+ // last resort
-+ return Integer.compare(System.identityHashCode(p1), System.identityHashCode(p2));
-+ });
-+
-+
-+ // no throttling is applied below this VD for loading
-+
-+ /**
-+ * The chunks to be sent to players, provided they're send-ready. Send-ready means the chunk and its 1 radius neighbours are loaded.
-+ */
-+ public final PlayerAreaMap broadcastMap;
-+
-+ /**
-+ * The chunks to be brought up to send-ready status. Send-ready means the chunk and its 1 radius neighbours are loaded.
-+ */
-+ public final PlayerAreaMap loadMap;
-+
-+ /**
-+ * Areamap used only to remove tickets for send-ready chunks. View distance is always + 1 of load view distance. Thus,
-+ * this map is always representing the chunks we are actually going to load.
-+ */
-+ public final PlayerAreaMap loadTicketCleanup;
-+
-+ /**
-+ * The chunks to be brought to ticking level. Each chunk must have its 2-radius neighbours loaded before this can happen.
-+ */
-+ public final PlayerAreaMap tickMap;
-+
-+ /**
-+ * -1 if defaulting to [load distance], else always in [2, load distance]
-+ */
-+ protected int rawSendDistance = -1;
-+
-+ /**
-+ * -1 if defaulting to [tick view distance + 1], else always in [tick view distance + 1, 32 + 1]
-+ */
-+ protected int rawLoadDistance = -1;
-+
-+ /**
-+ * Never -1, always in [2, 32]
-+ */
-+ protected int rawTickDistance = -1;
-+
-+ // methods to bridge for API
-+
-+ public int getTargetTickViewDistance() {
-+ return this.getTickDistance();
-+ }
-+
-+ public void setTargetTickViewDistance(final int distance) {
-+ this.setTickDistance(distance);
-+ }
-+
-+ public int getTargetNoTickViewDistance() {
-+ return this.getLoadDistance() - 1;
-+ }
-+
-+ public void setTargetNoTickViewDistance(final int distance) {
-+ this.setLoadDistance(distance == -1 ? -1 : distance + 1);
-+ }
-+
-+ public int getTargetSendDistance() {
-+ return this.rawSendDistance == -1 ? this.getLoadDistance() : this.rawSendDistance;
-+ }
-+
-+ public void setTargetSendDistance(final int distance) {
-+ this.setSendDistance(distance);
-+ }
-+
-+ // internal methods
-+
-+ public int getSendDistance() {
-+ final int loadDistance = this.getLoadDistance();
-+ return this.rawSendDistance == -1 ? loadDistance : Math.min(this.rawSendDistance, loadDistance);
-+ }
-+
-+ public void setSendDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Send distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + ", or -1, got: " + distance);
-+ }
-+ this.rawSendDistance = distance;
-+ }
-+
-+ public int getLoadDistance() {
-+ final int tickDistance = this.getTickDistance();
-+ return this.rawLoadDistance == -1 ? tickDistance + 1 : Math.max(tickDistance + 1, this.rawLoadDistance);
-+ }
-+
-+ public void setLoadDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Load distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + ", or -1, got: " + distance);
-+ }
-+ this.rawLoadDistance = distance;
-+ }
-+
-+ public int getTickDistance() {
-+ return this.rawTickDistance;
-+ }
-+
-+ public void setTickDistance(final int distance) {
-+ if (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE) {
-+ throw new IllegalArgumentException("View distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + ", got: " + distance);
-+ }
-+ this.rawTickDistance = distance;
-+ }
-+
-+ /*
-+ Players have 3 different types of view distance:
-+ 1. Sending view distance
-+ 2. Loading view distance
-+ 3. Ticking view distance
-+
-+ But for configuration purposes (and API) there are:
-+ 1. No-tick view distance
-+ 2. Tick view distance
-+ 3. Broadcast view distance
-+
-+ These aren't always the same as the types we represent internally.
-+
-+ Loading view distance is always max(no-tick + 1, tick + 1)
-+ - no-tick has 1 added because clients need an extra radius to render chunks
-+ - tick has 1 added because it needs an extra radius of chunks to load before they can be marked ticking
-+
-+ Loading view distance is defined as the radius of chunks that will be brought to send-ready status, which means
-+ it loads chunks in radius load-view-distance + 1.
-+
-+ The maximum value for send view distance is the load view distance. API can set it lower.
-+ */
-+
-+ public PlayerChunkLoader(final ChunkMap chunkMap, final PooledLinkedHashSets<ServerPlayer> pooledHashSets) {
-+ this.chunkMap = chunkMap;
-+ this.broadcastMap = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ PlayerChunkLoader.this.onChunkLeave(player, rangeX, rangeZ);
-+ });
-+ this.loadMap = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ PlayerChunkLoader.this.isTargetedForPlayerLoad.remove(CoordinateUtils.getChunkKey(rangeX, rangeZ));
-+ });
-+ this.loadTicketCleanup = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().removeTicketAtLevel(TicketType.PLAYER, chunkPos, LOADED_TICKET_LEVEL, chunkPos);
-+ if (PlayerChunkLoader.this.chunkTicketTracker.remove(chunkPos.toLong())) {
-+ --PlayerChunkLoader.this.concurrentChunkLoads;
-+ }
-+ });
-+ this.tickMap = new PlayerAreaMap(pooledHashSets,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState.size() != 1) {
-+ return;
-+ }
-+ LevelChunk chunk = PlayerChunkLoader.this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(rangeX, rangeZ);
-+ if (chunk == null || !chunk.areNeighboursLoaded(2)) {
-+ return;
-+ }
-+
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ },
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().removeTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ });
-+ }
-+
-+ protected final LongOpenHashSet isTargetedForPlayerLoad = new LongOpenHashSet();
-+ protected final LongOpenHashSet chunkTicketTracker = new LongOpenHashSet();
-+
-+ public boolean isChunkNearPlayers(final int chunkX, final int chunkZ) {
-+ final PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> playersInSendRange = this.broadcastMap.getObjectsInRange(chunkX, chunkZ);
-+
-+ return playersInSendRange != null;
-+ }
-+
-+ public void onChunkPostProcessing(final int chunkX, final int chunkZ) {
-+ this.onChunkSendReady(chunkX, chunkZ);
-+ }
-+
-+ private boolean chunkNeedsPostProcessing(final int chunkX, final int chunkZ) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ final ChunkHolder chunk = this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (chunk == null) {
-+ return false;
-+ }
-+
-+ final LevelChunk levelChunk = chunk.getSendingChunk();
-+
-+ return levelChunk != null && !levelChunk.isPostProcessingDone;
-+ }
-+
-+ // returns whether the chunk is at a loaded stage that is ready to be sent to players
-+ public boolean isChunkPlayerLoaded(final int chunkX, final int chunkZ) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ final ChunkHolder chunk = this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (chunk == null) {
-+ return false;
-+ }
-+
-+ final LevelChunk levelChunk = chunk.getSendingChunk();
-+
-+ return levelChunk != null && levelChunk.isPostProcessingDone && this.isTargetedForPlayerLoad.contains(key);
-+ }
-+
-+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ, final boolean borderOnly) {
-+ return borderOnly ? this.isChunkSentBorderOnly(player, chunkX, chunkZ) : this.isChunkSent(player, chunkX, chunkZ);
-+ }
-+
-+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+ if (data == null) {
-+ return false;
-+ }
-+
-+ return data.hasSentChunk(chunkX, chunkZ);
-+ }
-+
-+ public boolean isChunkSentBorderOnly(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+ if (data == null) {
-+ return false;
-+ }
-+
-+ final boolean center = data.hasSentChunk(chunkX, chunkZ);
-+ if (!center) {
-+ return false;
-+ }
-+
-+ return !(data.hasSentChunk(chunkX - 1, chunkZ) && data.hasSentChunk(chunkX + 1, chunkZ) &&
-+ data.hasSentChunk(chunkX, chunkZ - 1) && data.hasSentChunk(chunkX, chunkZ + 1));
-+ }
-+
-+ protected int getMaxConcurrentChunkSends() {
-+ return GlobalConfiguration.get().chunkLoading.maxConcurrentSends;
-+ }
-+
-+ protected int getMaxChunkLoads() {
-+ double config = GlobalConfiguration.get().chunkLoading.playerMaxConcurrentLoads;
-+ double max = GlobalConfiguration.get().chunkLoading.globalMaxConcurrentLoads;
-+ return (int)Math.ceil(Math.min(config * MinecraftServer.getServer().getPlayerCount(), max <= 1.0 ? Double.MAX_VALUE : max));
-+ }
-+
-+ protected long getTargetSendPerPlayerAddend() {
-+ return GlobalConfiguration.get().chunkLoading.targetPlayerChunkSendRate <= 1.0 ? 0L : (long)Math.round(1.0e9 / GlobalConfiguration.get().chunkLoading.targetPlayerChunkSendRate);
-+ }
-+
-+ protected long getMaxSendAddend() {
-+ return GlobalConfiguration.get().chunkLoading.globalMaxChunkSendRate <= 1.0 ? 0L : (long)Math.round(1.0e9 / GlobalConfiguration.get().chunkLoading.globalMaxChunkSendRate);
-+ }
-+
-+ public void onChunkPlayerTickReady(final int chunkX, final int chunkZ) {
-+ final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
-+ this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ }
-+
-+ public void onChunkSendReady(final int chunkX, final int chunkZ) {
-+ final PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> playersInSendRange = this.broadcastMap.getObjectsInRange(chunkX, chunkZ);
-+
-+ if (playersInSendRange == null) {
-+ return;
-+ }
-+
-+ final Object[] rawData = playersInSendRange.getBackingSet();
-+ for (int i = 0, len = rawData.length; i < len; ++i) {
-+ final Object raw = rawData[i];
-+
-+ if (!(raw instanceof ServerPlayer)) {
-+ continue;
-+ }
-+ this.onChunkSendReady((ServerPlayer)raw, chunkX, chunkZ);
-+ }
-+ }
-+
-+ public void onChunkSendReady(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+
-+ if (data == null) {
-+ return;
-+ }
-+
-+ if (data.hasSentChunk(chunkX, chunkZ) || !this.isChunkPlayerLoaded(chunkX, chunkZ)) {
-+ // if we don't have player tickets, then the load logic will pick this up and queue to send
-+ return;
-+ }
-+
-+ if (!data.chunksToBeSent.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ // don't queue to send, we don't want the chunk
-+ return;
-+ }
-+
-+ final long playerPos = this.broadcastMap.getLastCoordinate(player);
-+ final int playerChunkX = CoordinateUtils.getChunkX(playerPos);
-+ final int playerChunkZ = CoordinateUtils.getChunkZ(playerPos);
-+ final int manhattanDistance = Math.abs(playerChunkX - chunkX) + Math.abs(playerChunkZ - chunkZ);
-+
-+ final ChunkPriorityHolder holder = new ChunkPriorityHolder(chunkX, chunkZ, manhattanDistance, 0.0);
-+ data.sendQueue.add(holder);
-+ }
-+
-+ public void onChunkLoad(final int chunkX, final int chunkZ) {
-+ if (this.chunkTicketTracker.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ --this.concurrentChunkLoads;
-+ }
-+ }
-+
-+ public void onChunkLeave(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+
-+ if (data == null) {
-+ return;
-+ }
-+
-+ data.unloadChunk(chunkX, chunkZ);
-+ }
-+
-+ public void addPlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot add player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+ final PlayerLoaderData data = new PlayerLoaderData(player, this);
-+ if (this.playerMap.putIfAbsent(player, data) == null) {
-+ data.update();
-+ }
-+ }
-+
-+ public void removePlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot remove player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+
-+ final PlayerLoaderData loaderData = this.playerMap.remove(player);
-+ if (loaderData == null) {
-+ return;
-+ }
-+ loaderData.remove();
-+ this.chunkLoadQueue.remove(loaderData);
-+ this.chunkSendQueue.remove(loaderData);
-+ this.chunkSendWaitQueue.remove(loaderData);
-+ synchronized (this.sendingChunkCounts) {
-+ final int count = this.sendingChunkCounts.removeInt(loaderData);
-+ if (count != 0) {
-+ concurrentChunkSends.getAndAdd(-count);
-+ }
-+ }
-+ }
-+
-+ public void updatePlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot update player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+ final PlayerLoaderData loaderData = this.playerMap.get(player);
-+ if (loaderData != null) {
-+ loaderData.update();
-+ }
-+ }
-+
-+ public PlayerLoaderData getData(final ServerPlayer player) {
-+ return this.playerMap.get(player);
-+ }
-+
-+ public void tick() {
-+ TickThread.ensureTickThread("Cannot tick async");
-+ for (final PlayerLoaderData data : this.playerMap.values()) {
-+ data.update();
-+ }
-+ this.tickMidTick();
-+ }
-+
-+ protected static final AtomicInteger concurrentChunkSends = new AtomicInteger();
-+ protected final Reference2IntOpenHashMap<PlayerLoaderData> sendingChunkCounts = new Reference2IntOpenHashMap<>();
-+ private static long nextChunkSend;
-+ private void trySendChunks() {
-+ final long time = System.nanoTime();
-+ if (nextChunkSend - time > 0) {
-+ return;
-+ }
-+ // drain entries from wait queue
-+ while (!this.chunkSendWaitQueue.isEmpty()) {
-+ final PlayerLoaderData data = this.chunkSendWaitQueue.first();
-+
-+ if (data.nextChunkSendTarget - time > 0) {
-+ break;
-+ }
-+
-+ this.chunkSendWaitQueue.pollFirst();
-+
-+ this.chunkSendQueue.add(data);
-+ }
-+
-+ if (this.chunkSendQueue.isEmpty()) {
-+ return;
-+ }
-+
-+ final int maxSends = this.getMaxConcurrentChunkSends();
-+ final long nextPlayerDeadline = this.getTargetSendPerPlayerAddend() + time;
-+ for (;;) {
-+ if (this.chunkSendQueue.isEmpty()) {
-+ break;
-+ }
-+ final int currSends = concurrentChunkSends.get();
-+ if (currSends >= maxSends) {
-+ break;
-+ }
-+
-+ if (!concurrentChunkSends.compareAndSet(currSends, currSends + 1)) {
-+ continue;
-+ }
-+
-+ // send chunk
-+
-+ final PlayerLoaderData data = this.chunkSendQueue.removeFirst();
-+
-+ final ChunkPriorityHolder queuedSend = data.sendQueue.pollFirst();
-+ if (queuedSend == null) {
-+ concurrentChunkSends.getAndDecrement(); // we never sent, so decrease
-+ // stop iterating over players who have nothing to send
-+ if (this.chunkSendQueue.isEmpty()) {
-+ // nothing left
-+ break;
-+ }
-+ continue;
-+ }
-+
-+ if (!this.isChunkPlayerLoaded(queuedSend.chunkX, queuedSend.chunkZ)) {
-+ throw new IllegalStateException();
-+ }
-+
-+ data.nextChunkSendTarget = nextPlayerDeadline;
-+ this.chunkSendWaitQueue.add(data);
-+
-+ synchronized (this.sendingChunkCounts) {
-+ this.sendingChunkCounts.addTo(data, 1);
-+ }
-+
-+ data.sendChunk(queuedSend.chunkX, queuedSend.chunkZ, () -> {
-+ synchronized (this.sendingChunkCounts) {
-+ final int count = this.sendingChunkCounts.getInt(data);
-+ if (count == 0) {
-+ // disconnected, so we don't need to decrement: it will be decremented for us
-+ return;
-+ }
-+ if (count == 1) {
-+ this.sendingChunkCounts.removeInt(data);
-+ } else {
-+ this.sendingChunkCounts.put(data, count - 1);
-+ }
-+ }
-+
-+ concurrentChunkSends.getAndDecrement();
-+ });
-+
-+ nextChunkSend = this.getMaxSendAddend() + time;
-+ if (nextChunkSend - time > 0) {
-+ break;
-+ }
-+ }
-+ }
-+
-+ protected int concurrentChunkLoads;
-+ // this interval prevents bursting a lot of chunk loads
-+ protected static final IntervalledCounter TICKET_ADDITION_COUNTER_SHORT = new IntervalledCounter((long)(1.0e6 * 50.0)); // 50ms
-+ // this interval ensures the rate is kept between ticks correctly
-+ protected static final IntervalledCounter TICKET_ADDITION_COUNTER_LONG = new IntervalledCounter((long)(1.0e6 * 1000.0)); // 1000ms
-+ private void tryLoadChunks() {
-+ if (this.chunkLoadQueue.isEmpty()) {
-+ return;
-+ }
-+
-+ final int maxLoads = this.getMaxChunkLoads();
-+ final long time = System.nanoTime();
-+ boolean updatedCounters = false;
-+ for (;;) {
-+ final PlayerLoaderData data = this.chunkLoadQueue.pollFirst();
-+
-+ data.lastChunkLoad = time;
-+
-+ final ChunkPriorityHolder queuedLoad = data.loadQueue.peekFirst();
-+ if (queuedLoad == null) {
-+ if (this.chunkLoadQueue.isEmpty()) {
-+ break;
-+ }
-+ continue;
-+ }
-+
-+ if (!updatedCounters) {
-+ updatedCounters = true;
-+ TICKET_ADDITION_COUNTER_SHORT.updateCurrentTime(time);
-+ TICKET_ADDITION_COUNTER_LONG.updateCurrentTime(time);
-+ data.ticketAdditionCounterShort.updateCurrentTime(time);
-+ data.ticketAdditionCounterLong.updateCurrentTime(time);
-+ }
-+
-+ if (this.isChunkPlayerLoaded(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // already loaded!
-+ data.loadQueue.pollFirst(); // already loaded so we just skip
-+ this.chunkLoadQueue.add(data);
-+
-+ // ensure the chunk is queued to send
-+ this.onChunkSendReady(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ continue;
-+ }
-+
-+ final long chunkKey = CoordinateUtils.getChunkKey(queuedLoad.chunkX, queuedLoad.chunkZ);
-+
-+ final double priority = queuedLoad.priority;
-+ // while we do need to rate limit chunk loads, the logic for sending chunks requires that tickets are present.
-+ // when chunks are loaded (i.e spawn) but do not have this player's tickets, they have to wait behind the
-+ // load queue. To avoid this problem, we check early here if tickets are required to load the chunk - if they
-+ // aren't required, it bypasses the limiter system.
-+ boolean unloadedTargetChunk = false;
-+ unloaded_check:
-+ for (int dz = -1; dz <= 1; ++dz) {
-+ for (int dx = -1; dx <= 1; ++dx) {
-+ final int offX = queuedLoad.chunkX + dx;
-+ final int offZ = queuedLoad.chunkZ + dz;
-+ if (this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(offX, offZ) == null) {
-+ unloadedTargetChunk = true;
-+ break unloaded_check;
-+ }
-+ }
-+ }
-+ if (unloadedTargetChunk && priority >= 0.0) {
-+ // priority >= 0.0 implies rate limited chunks
-+
-+ final int currentChunkLoads = this.concurrentChunkLoads;
-+ if (currentChunkLoads >= maxLoads || (GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate > 0 && (TICKET_ADDITION_COUNTER_SHORT.getRate() >= GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate || TICKET_ADDITION_COUNTER_LONG.getRate() >= GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate))
-+ || (GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate > 0.0 && (data.ticketAdditionCounterShort.getRate() >= GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate || data.ticketAdditionCounterLong.getRate() >= GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate))) {
-+ // don't poll, we didn't load it
-+ this.chunkLoadQueue.add(data);
-+ break;
-+ }
-+ }
-+
-+ // can only poll after we decide to load
-+ data.loadQueue.pollFirst();
-+
-+ // now that we've polled we can re-add to load queue
-+ this.chunkLoadQueue.add(data);
-+
-+ // add necessary tickets to load chunk up to send-ready
-+ for (int dz = -1; dz <= 1; ++dz) {
-+ for (int dx = -1; dx <= 1; ++dx) {
-+ final int offX = queuedLoad.chunkX + dx;
-+ final int offZ = queuedLoad.chunkZ + dz;
-+ final ChunkPos chunkPos = new ChunkPos(offX, offZ);
-+
-+ this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, LOADED_TICKET_LEVEL, chunkPos);
-+ if (this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(offX, offZ) != null) {
-+ continue;
-+ }
-+
-+ if (priority > 0.0 && this.chunkTicketTracker.add(CoordinateUtils.getChunkKey(offX, offZ))) {
-+ // won't reach here if unloadedTargetChunk is false
-+ ++this.concurrentChunkLoads;
-+ TICKET_ADDITION_COUNTER_SHORT.addTime(time);
-+ TICKET_ADDITION_COUNTER_LONG.addTime(time);
-+ data.ticketAdditionCounterShort.addTime(time);
-+ data.ticketAdditionCounterLong.addTime(time);
-+ }
-+ }
-+ }
-+
-+ // mark that we've added tickets here
-+ this.isTargetedForPlayerLoad.add(chunkKey);
-+
-+ // it's possible all we needed was the player tickets to queue up the send.
-+ if (this.isChunkPlayerLoaded(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // yup, all we needed.
-+ this.onChunkSendReady(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ } else if (this.chunkNeedsPostProcessing(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // requires post processing
-+ this.chunkMap.mainThreadExecutor.execute(() -> {
-+ final long key = CoordinateUtils.getChunkKey(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ final ChunkHolder holder = PlayerChunkLoader.this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (holder == null) {
-+ return;
-+ }
-+
-+ final LevelChunk chunk = holder.getSendingChunk();
-+
-+ if (chunk != null && !chunk.isPostProcessingDone) {
-+ chunk.postProcessGeneration();
-+ }
-+ });
-+ }
-+ }
-+ }
-+
-+ public void tickMidTick() {
-+ // try to send more chunks
-+ this.trySendChunks();
-+
-+ // try to queue more chunks to load
-+ this.tryLoadChunks();
-+ }
-+
-+ static final class ChunkPriorityHolder {
-+ public final int chunkX;
-+ public final int chunkZ;
-+ public final int manhattanDistanceToPlayer;
-+ public final double priority;
-+
-+ public ChunkPriorityHolder(final int chunkX, final int chunkZ, final int manhattanDistanceToPlayer, final double priority) {
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.manhattanDistanceToPlayer = manhattanDistanceToPlayer;
-+ this.priority = priority;
-+ }
-+ }
-+
-+ public static final class PlayerLoaderData {
-+
-+ protected static final float FOV = 110.0f;
-+ protected static final double PRIORITISED_DISTANCE = 12.0 * 16.0;
-+
-+ // Player max sprint speed is approximately 8m/s
-+ protected static final double LOOK_PRIORITY_SPEED_THRESHOLD = (10.0/20.0) * (10.0/20.0);
-+ protected static final double LOOK_PRIORITY_YAW_DELTA_RECALC_THRESHOLD = 3.0f;
-+
-+ protected double lastLocX = Double.NEGATIVE_INFINITY;
-+ protected double lastLocZ = Double.NEGATIVE_INFINITY;
-+
-+ protected int lastChunkX = Integer.MIN_VALUE;
-+ protected int lastChunkZ = Integer.MIN_VALUE;
-+
-+ // this is corrected so that 0 is along the positive x-axis
-+ protected float lastYaw = Float.NEGATIVE_INFINITY;
-+
-+ protected int lastSendDistance = Integer.MIN_VALUE;
-+ protected int lastLoadDistance = Integer.MIN_VALUE;
-+ protected int lastTickDistance = Integer.MIN_VALUE;
-+ protected boolean usingLookingPriority;
-+
-+ protected final ServerPlayer player;
-+ protected final PlayerChunkLoader loader;
-+
-+ // warning: modifications of this field must be aware that the loadQueue inside PlayerChunkLoader uses this field
-+ // in a comparator!
-+ protected final ArrayDeque<ChunkPriorityHolder> loadQueue = new ArrayDeque<>();
-+ protected final LongOpenHashSet sentChunks = new LongOpenHashSet();
-+ protected final LongOpenHashSet chunksToBeSent = new LongOpenHashSet();
-+
-+ protected final TreeSet<ChunkPriorityHolder> sendQueue = new TreeSet<>((final ChunkPriorityHolder p1, final ChunkPriorityHolder p2) -> {
-+ final int distanceCompare = Integer.compare(p1.manhattanDistanceToPlayer, p2.manhattanDistanceToPlayer);
-+ if (distanceCompare != 0) {
-+ return distanceCompare;
-+ }
-+
-+ final int coordinateXCompare = Integer.compare(p1.chunkX, p2.chunkX);
-+ if (coordinateXCompare != 0) {
-+ return coordinateXCompare;
-+ }
-+
-+ return Integer.compare(p1.chunkZ, p2.chunkZ);
-+ });
-+
-+ protected int sendViewDistance = -1;
-+ protected int loadViewDistance = -1;
-+ protected int tickViewDistance = -1;
-+
-+ protected long nextChunkSendTarget;
-+
-+ // this interval prevents bursting a lot of chunk loads
-+ protected final IntervalledCounter ticketAdditionCounterShort = new IntervalledCounter((long)(1.0e6 * 50.0)); // 50ms
-+ // this ensures the rate is kept between ticks correctly
-+ protected final IntervalledCounter ticketAdditionCounterLong = new IntervalledCounter((long)(1.0e6 * 1000.0)); // 1000ms
-+
-+ public long lastChunkLoad;
-+
-+ public PlayerLoaderData(final ServerPlayer player, final PlayerChunkLoader loader) {
-+ this.player = player;
-+ this.loader = loader;
-+ }
-+
-+ // these view distance methods are for api
-+ public int getTargetSendViewDistance() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ final int loadViewDistance = Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ final int clientViewDistance = this.getClientViewDistance();
-+ final int sendViewDistance = Math.min(loadViewDistance, this.sendViewDistance == -1 ? (!GlobalConfiguration.get().chunkLoading.autoconfigSendDistance || clientViewDistance == -1 ? this.loader.getSendDistance() : clientViewDistance + 1) : this.sendViewDistance);
-+ return sendViewDistance;
-+ }
-+
-+ public void setTargetSendViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Send view distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
-+ }
-+ this.sendViewDistance = distance;
-+ }
-+
-+ public int getTargetNoTickViewDistance() {
-+ return (this.loadViewDistance == -1 ? this.getLoadDistance() : this.loadViewDistance) - 1;
-+ }
-+
-+ public void setTargetNoTickViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE)) {
-+ throw new IllegalArgumentException("Simulation distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + " or -1, got: " + distance);
-+ }
-+ this.loadViewDistance = distance == -1 ? -1 : distance + 1;
-+ }
-+
-+ public int getTargetTickViewDistance() {
-+ return this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ }
-+
-+ public void setTargetTickViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE)) {
-+ throw new IllegalArgumentException("View distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + " or -1, got: " + distance);
-+ }
-+ this.tickViewDistance = distance;
-+ }
-+
-+ protected int getLoadDistance() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+
-+ return Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ }
-+
-+ public boolean hasSentChunk(final int chunkX, final int chunkZ) {
-+ return this.sentChunks.contains(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+
-+ public void sendChunk(final int chunkX, final int chunkZ, final Runnable onChunkSend) {
-+ if (this.sentChunks.add(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ ((ServerLevel)this.player.level()).getChunkSource().chunkMap.updateChunkTracking(this.player,
-+ new ChunkPos(chunkX, chunkZ), new MutableObject<>(), false, true); // unloaded, loaded
-+ this.player.connection.connection.execute(onChunkSend);
-+ } else {
-+ throw new IllegalStateException();
-+ }
-+ }
-+
-+ public void unloadChunk(final int chunkX, final int chunkZ) {
-+ if (this.sentChunks.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ ((ServerLevel)this.player.level()).getChunkSource().chunkMap.updateChunkTracking(this.player,
-+ new ChunkPos(chunkX, chunkZ), null, true, false); // unloaded, loaded
-+ }
-+ }
-+
-+ protected static boolean wantChunkLoaded(final int centerX, final int centerZ, final int chunkX, final int chunkZ,
-+ final int sendRadius) {
-+ // expect sendRadius to be = 1 + target viewable radius
-+ return ChunkMap.isChunkInRange(chunkX, chunkZ, centerX, centerZ, sendRadius);
-+ }
-+
-+ protected static boolean triangleIntersects(final double p1x, final double p1z, // triangle point
-+ final double p2x, final double p2z, // triangle point
-+ final double p3x, final double p3z, // triangle point
-+
-+ final double targetX, final double targetZ) { // point
-+ // from barycentric coordinates:
-+ // targetX = a*p1x + b*p2x + c*p3x
-+ // targetZ = a*p1z + b*p2z + c*p3z
-+ // 1.0 = a*1.0 + b*1.0 + c*1.0
-+ // where a, b, c >= 0.0
-+ // so, if any of a, b, c are less-than zero then there is no intersection.
-+
-+ // d = ((p2z - p3z)(p1x - p3x) + (p3x - p2x)(p1z - p3z))
-+ // a = ((p2z - p3z)(targetX - p3x) + (p3x - p2x)(targetZ - p3z)) / d
-+ // b = ((p3z - p1z)(targetX - p3x) + (p1x - p3x)(targetZ - p3z)) / d
-+ // c = 1.0 - a - b
-+
-+ final double d = (p2z - p3z)*(p1x - p3x) + (p3x - p2x)*(p1z - p3z);
-+ final double a = ((p2z - p3z)*(targetX - p3x) + (p3x - p2x)*(targetZ - p3z)) / d;
-+
-+ if (a < 0.0 || a > 1.0) {
-+ return false;
-+ }
-+
-+ final double b = ((p3z - p1z)*(targetX - p3x) + (p1x - p3x)*(targetZ - p3z)) / d;
-+ if (b < 0.0 || b > 1.0) {
-+ return false;
-+ }
-+
-+ final double c = 1.0 - a - b;
-+
-+ return c >= 0.0 && c <= 1.0;
-+ }
-+
-+ public void remove() {
-+ this.loader.broadcastMap.remove(this.player);
-+ this.loader.loadMap.remove(this.player);
-+ this.loader.loadTicketCleanup.remove(this.player);
-+ this.loader.tickMap.remove(this.player);
-+ }
-+
-+ protected int getClientViewDistance() {
-+ return this.player.clientViewDistance == null ? -1 : Math.max(0, this.player.clientViewDistance.intValue());
-+ }
-+
-+ public void update() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ // load view cannot be less-than tick view + 1
-+ final int loadViewDistance = Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ // send view cannot be greater-than load view
-+ final int clientViewDistance = this.getClientViewDistance();
-+ final int sendViewDistance = Math.min(loadViewDistance, this.sendViewDistance == -1 ? (!GlobalConfiguration.get().chunkLoading.autoconfigSendDistance || clientViewDistance == -1 ? this.loader.getSendDistance() : clientViewDistance + 1) : this.sendViewDistance);
-+
-+ final double posX = this.player.getX();
-+ final double posZ = this.player.getZ();
-+ final float yaw = MCUtil.normalizeYaw(this.player.getYRot() + 90.0f); // mc yaw 0 is along the positive z axis, but obviously this is really dumb - offset so we are at positive x-axis
-+
-+ // in general, we really only want to prioritise chunks in front if we know we're moving pretty fast into them.
-+ final boolean useLookPriority = GlobalConfiguration.get().chunkLoading.enableFrustumPriority && (this.player.getDeltaMovement().horizontalDistanceSqr() > LOOK_PRIORITY_SPEED_THRESHOLD ||
-+ this.player.getAbilities().flying);
-+
-+ // make sure we're in the send queue
-+ this.loader.chunkSendWaitQueue.add(this);
-+
-+ if (
-+ // has view distance stayed the same?
-+ sendViewDistance == this.lastSendDistance
-+ && loadViewDistance == this.lastLoadDistance
-+ && tickViewDistance == this.lastTickDistance
-+
-+ && (this.usingLookingPriority ? (
-+ // has our block stayed the same (this also accounts for chunk change)?
-+ Mth.floor(this.lastLocX) == Mth.floor(posX)
-+ && Mth.floor(this.lastLocZ) == Mth.floor(posZ)
-+ ) : (
-+ // has our chunk stayed the same
-+ (Mth.floor(this.lastLocX) >> 4) == (Mth.floor(posX) >> 4)
-+ && (Mth.floor(this.lastLocZ) >> 4) == (Mth.floor(posZ) >> 4)
-+ ))
-+
-+ // has our decision about look priority changed?
-+ && this.usingLookingPriority == useLookPriority
-+
-+ // if we are currently using look priority, has our yaw stayed within recalc threshold?
-+ && (!this.usingLookingPriority || Math.abs(yaw - this.lastYaw) <= LOOK_PRIORITY_YAW_DELTA_RECALC_THRESHOLD)
-+ ) {
-+ // nothing we care about changed, so we're not re-calculating
-+ return;
-+ }
-+
-+ final int centerChunkX = Mth.floor(posX) >> 4;
-+ final int centerChunkZ = Mth.floor(posZ) >> 4;
-+
-+ final boolean needsChunkCenterUpdate = (centerChunkX != this.lastChunkX) || (centerChunkZ != this.lastChunkZ);
-+ this.loader.broadcastMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, sendViewDistance);
-+ this.loader.loadMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, loadViewDistance);
-+ this.loader.loadTicketCleanup.addOrUpdate(this.player, centerChunkX, centerChunkZ, loadViewDistance + 1);
-+ this.loader.tickMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, tickViewDistance);
-+
-+ if (sendViewDistance != this.lastSendDistance) {
-+ // update the view radius for client
-+ // note that this should be after the map calls because the client won't expect unload calls not in its VD
-+ // and it's possible we decreased VD here
-+ this.player.connection.send(new ClientboundSetChunkCacheRadiusPacket(sendViewDistance));
-+ }
-+ if (tickViewDistance != this.lastTickDistance) {
-+ this.player.connection.send(new ClientboundSetSimulationDistancePacket(tickViewDistance));
-+ }
-+
-+ this.lastLocX = posX;
-+ this.lastLocZ = posZ;
-+ this.lastYaw = yaw;
-+ this.lastSendDistance = sendViewDistance;
-+ this.lastLoadDistance = loadViewDistance;
-+ this.lastTickDistance = tickViewDistance;
-+ this.usingLookingPriority = useLookPriority;
-+
-+ this.lastChunkX = centerChunkX;
-+ this.lastChunkZ = centerChunkZ;
-+
-+ // points for player "view" triangle:
-+
-+ // obviously, the player pos is a vertex
-+ final double p1x = posX;
-+ final double p1z = posZ;
-+
-+ // to the left of the looking direction
-+ final double p2x = PRIORITISED_DISTANCE * Math.cos(Math.toRadians(yaw + (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1x; // offset vector
-+ final double p2z = PRIORITISED_DISTANCE * Math.sin(Math.toRadians(yaw + (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1z; // offset vector
-+
-+ // to the right of the looking direction
-+ final double p3x = PRIORITISED_DISTANCE * Math.cos(Math.toRadians(yaw - (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1x; // offset vector
-+ final double p3z = PRIORITISED_DISTANCE * Math.sin(Math.toRadians(yaw - (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1z; // offset vector
-+
-+ // now that we have all of our points, we can recalculate the load queue
-+
-+ final List<ChunkPriorityHolder> loadQueue = new ArrayList<>();
-+
-+ // clear send queue, we are re-sorting
-+ this.sendQueue.clear();
-+ // clear chunk want set, vd/position might have changed
-+ this.chunksToBeSent.clear();
-+
-+ final int searchViewDistance = Math.max(loadViewDistance, sendViewDistance);
-+
-+ for (int dx = -searchViewDistance; dx <= searchViewDistance; ++dx) {
-+ for (int dz = -searchViewDistance; dz <= searchViewDistance; ++dz) {
-+ final int chunkX = dx + centerChunkX;
-+ final int chunkZ = dz + centerChunkZ;
-+ final int squareDistance = Math.max(Math.abs(dx), Math.abs(dz));
-+ final boolean sendChunk = squareDistance <= sendViewDistance && wantChunkLoaded(centerChunkX, centerChunkZ, chunkX, chunkZ, sendViewDistance);
-+
-+ if (this.hasSentChunk(chunkX, chunkZ)) {
-+ // already sent (which means it is also loaded)
-+ if (!sendChunk) {
-+ // have sent the chunk, but don't want it anymore
-+ // unload it now
-+ this.unloadChunk(chunkX, chunkZ);
-+ }
-+ continue;
-+ }
-+
-+ final boolean loadChunk = squareDistance <= loadViewDistance;
-+
-+ final boolean prioritised = useLookPriority && triangleIntersects(
-+ // prioritisation triangle
-+ p1x, p1z, p2x, p2z, p3x, p3z,
-+
-+ // center of chunk
-+ (double)((chunkX << 4) | 8), (double)((chunkZ << 4) | 8)
-+ );
-+
-+ final int manhattanDistance = Math.abs(dx) + Math.abs(dz);
-+
-+ final double priority;
-+
-+ if (squareDistance <= GlobalConfiguration.get().chunkLoading.minLoadRadius) {
-+ // priority should be negative, and we also want to order it from center outwards
-+ // so we want (0,0) to be the smallest, and (minLoadRadius,minLoadRadius) to be the greatest
-+ priority = -((2 * GlobalConfiguration.get().chunkLoading.minLoadRadius + 1) - manhattanDistance);
-+ } else {
-+ if (prioritised) {
-+ // we don't prioritise these chunks above others because we also want to make sure some chunks
-+ // will be loaded if the player changes direction
-+ priority = (double)manhattanDistance / 6.0;
-+ } else {
-+ priority = (double)manhattanDistance;
-+ }
-+ }
-+
-+ final ChunkPriorityHolder holder = new ChunkPriorityHolder(chunkX, chunkZ, manhattanDistance, priority);
-+
-+ if (!this.loader.isChunkPlayerLoaded(chunkX, chunkZ)) {
-+ if (loadChunk) {
-+ loadQueue.add(holder);
-+ if (sendChunk) {
-+ this.chunksToBeSent.add(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ }
-+ } else {
-+ // loaded but not sent: so queue it!
-+ if (sendChunk) {
-+ this.sendQueue.add(holder);
-+ }
-+ }
-+ }
-+ }
-+
-+ loadQueue.sort((final ChunkPriorityHolder p1, final ChunkPriorityHolder p2) -> {
-+ return Double.compare(p1.priority, p2.priority);
-+ });
-+
-+ // we're modifying loadQueue, must remove
-+ this.loader.chunkLoadQueue.remove(this);
-+
-+ this.loadQueue.clear();
-+ this.loadQueue.addAll(loadQueue);
-+
-+ // must re-add
-+ this.loader.chunkLoadQueue.add(this);
-+
-+ // update the chunk center
-+ // this must be done last so that the client does not ignore any of our unload chunk packets
-+ if (needsChunkCenterUpdate) {
-+ this.player.connection.send(new ClientboundSetChunkCacheCenterPacket(centerChunkX, centerChunkZ));
-+ }
-+ }
-+ }
-+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
index 95eac2e12a16938d81ab512b00e90c5234b42834..8f7bf1f0400aeab8b7801d113d244d0716c5eb84 100644
--- a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
@@ -2733,10 +2310,10 @@ index 95eac2e12a16938d81ab512b00e90c5234b42834..8f7bf1f0400aeab8b7801d113d244d07
private ChunkSystem() {
diff --git a/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
new file mode 100644
-index 0000000000000000000000000000000000000000..48bfee5b9db501fcdba4ddb1e4bff2718956a680
+index 0000000000000000000000000000000000000000..305a4f747c9c8d99d482ba36e8c89a8412593f39
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
-@@ -0,0 +1,1417 @@
+@@ -0,0 +1,1416 @@
+package io.papermc.paper.chunk.system;
+
+import ca.spottedleaf.concurrentutil.collection.SRSWLinkedQueue;
@@ -3210,9 +2787,8 @@ index 0000000000000000000000000000000000000000..48bfee5b9db501fcdba4ddb1e4bff271
+ if (this.delayedTicketOps.isEmpty()) {
+ return;
+ }
-+ this.world.chunkTaskScheduler.chunkHolderManager.pushDelayedTicketUpdates(this.delayedTicketOps);
++ this.world.chunkTaskScheduler.chunkHolderManager.performTicketUpdates(this.delayedTicketOps);
+ this.delayedTicketOps.clear();
-+ this.world.chunkTaskScheduler.chunkHolderManager.tryDrainTicketUpdates();
+ }
+
+ private void pushDelayedTicketOp(final ChunkHolderManager.TicketOperation<?, ?> op) {
@@ -6995,26 +6571,27 @@ index 0000000000000000000000000000000000000000..300700477ee34bc22b31315825c0e40f
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
new file mode 100644
-index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b4638188798609
+index 0000000000000000000000000000000000000000..8e52ebe8d12f5da3d877b0e4ff3723229fb47db1
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
-@@ -0,0 +1,1373 @@
+@@ -0,0 +1,1499 @@
+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.map.SWMRLong2ObjectHashTable;
-+import co.aikar.timings.Timing;
+import com.google.common.collect.ImmutableList;
+import com.google.gson.JsonArray;
+import com.google.gson.JsonObject;
+import com.mojang.logging.LogUtils;
++import io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader;
+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
+import io.papermc.paper.chunk.system.poi.PoiChunk;
++import io.papermc.paper.threadedregions.TickRegions;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
-+import io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D;
+import io.papermc.paper.world.ChunkEntitySlices;
++import it.unimi.dsi.fastutil.longs.Long2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.longs.Long2ByteMap;
+import it.unimi.dsi.fastutil.longs.Long2IntLinkedOpenHashMap;
+import it.unimi.dsi.fastutil.longs.Long2IntMap;
+import it.unimi.dsi.fastutil.longs.Long2IntOpenHashMap;
@@ -7023,13 +6600,11 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+import it.unimi.dsi.fastutil.longs.LongArrayList;
+import it.unimi.dsi.fastutil.longs.LongIterator;
+import it.unimi.dsi.fastutil.objects.ObjectRBTreeSet;
-+import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
+import net.minecraft.nbt.CompoundTag;
+import io.papermc.paper.chunk.system.ChunkSystem;
+import net.minecraft.server.MinecraftServer;
+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ChunkLevel;
-+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.FullChunkStatus;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.server.level.Ticket;
@@ -7037,8 +6612,6 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+import net.minecraft.util.SortedArraySet;
+import net.minecraft.util.Unit;
+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
+import org.bukkit.plugin.Plugin;
+import org.slf4j.Logger;
+import java.io.IOException;
@@ -7049,12 +6622,12 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
-+import java.util.Objects;
++import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
++import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.locks.LockSupport;
-+import java.util.concurrent.locks.ReentrantLock;
+import java.util.function.Predicate;
+
+public final class ChunkHolderManager {
@@ -7066,12 +6639,49 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ public static final int ENTITY_TICKING_TICKET_LEVEL = 31;
+ public static final int MAX_TICKET_LEVEL = ChunkLevel.MAX_LEVEL; // inclusive
+
-+ private static final long NO_TIMEOUT_MARKER = -1L;
++ private static final long NO_TIMEOUT_MARKER = Long.MIN_VALUE;
++ private static final long PROBE_MARKER = Long.MIN_VALUE + 1;
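++ // NO_TIMEOUT_MARKER marks tickets that never expire; PROBE_MARKER is used to build throwaway tickets
++ // that only serve as lookup keys when removing from the sorted ticket sets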
++ public final ReentrantAreaLock ticketLockArea = new ReentrantAreaLock(ChunkTaskScheduler.getChunkSystemLockShift());
+
-+ final ReentrantLock ticketLock = new ReentrantLock();
++ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, SortedArraySet<Ticket<?>>> tickets = new java.util.concurrent.ConcurrentHashMap<>();
++ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, Long2IntOpenHashMap> sectionToChunkToExpireCount = new java.util.concurrent.ConcurrentHashMap<>();
++ final ChunkQueue unloadQueue;
++
++ public boolean processTicketUpdates(final int posX, final int posZ) {
++ final int ticketShift = ThreadedTicketLevelPropagator.SECTION_SHIFT;
++ final int ticketMask = (1 << ticketShift) - 1;
++ final List<ChunkProgressionTask> scheduledTasks = new ArrayList<>();
++ final List<NewChunkHolder> changedFullStatus = new ArrayList<>();
++ final boolean ret;
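++ // lock the 3x3 grid of ticket sections centred on the target section: a level update sourced in one
++ // section may propagate into its neighbours, so all of them must be held for the update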
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ ((posX >> ticketShift) - 1) << ticketShift,
++ ((posZ >> ticketShift) - 1) << ticketShift,
++ (((posX >> ticketShift) + 1) << ticketShift) | ticketMask,
++ (((posZ >> ticketShift) + 1) << ticketShift) | ticketMask
++ );
++ try {
++ ret = this.processTicketUpdatesNoLock(posX >> ticketShift, posZ >> ticketShift, scheduledTasks, changedFullStatus);
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
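++ // full status changes and task scheduling are applied after the lock is released, keeping the critical section small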
++
++ this.addChangedStatuses(changedFullStatus);
++
++ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
++ scheduledTasks.get(i).schedule();
++ }
++
++ return ret;
++ }
++
++ private boolean processTicketUpdatesNoLock(final int sectionX, final int sectionZ, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus) {
++ return this.ticketLevelPropagator.performUpdate(
++ sectionX, sectionZ, this.taskScheduler.schedulingLockArea, scheduledTasks, changedFullStatus
++ );
++ }
+
+ private final SWMRLong2ObjectHashTable<NewChunkHolder> chunkHolders = new SWMRLong2ObjectHashTable<>(16384, 0.25f);
-+ private final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> tickets = new Long2ObjectOpenHashMap<>(8192, 0.25f);
+ // what a disaster of a name
+ // this is a map of removal tick to a map of chunks and the number of tickets a chunk has that are to expire that tick
+ private final Long2ObjectOpenHashMap<Long2IntOpenHashMap> removeTickToChunkExpireTicketCount = new Long2ObjectOpenHashMap<>();
@@ -7104,12 +6714,13 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ public ChunkHolderManager(final ServerLevel world, final ChunkTaskScheduler taskScheduler) {
+ this.world = world;
+ this.taskScheduler = taskScheduler;
++ this.unloadQueue = new ChunkQueue(TickRegions.getRegionChunkShift());
+ }
+
-+ private long statusUpgradeId;
++ private final AtomicLong statusUpgradeId = new AtomicLong();
+
+ long getNextStatusUpgradeId() {
-+ return ++this.statusUpgradeId;
++ return this.statusUpgradeId.incrementAndGet();
+ }
+
+ public List<ChunkHolder> getOldChunkHolders() {
@@ -7274,22 +6885,63 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+ }
+
-+ protected final Long2IntLinkedOpenHashMap ticketLevelUpdates = new Long2IntLinkedOpenHashMap() {
++ protected final ThreadedTicketLevelPropagator ticketLevelPropagator = new ThreadedTicketLevelPropagator() {
+ @Override
-+ protected void rehash(final int newN) {
-+ // no downsizing allowed
-+ if (newN < this.n) {
-+ return;
++ protected void processLevelUpdates(final Long2ByteLinkedOpenHashMap updates) {
++ // first the necessary chunkholders must be created, so just update the ticket levels
++ for (final Iterator<Long2ByteMap.Entry> iterator = updates.long2ByteEntrySet().fastIterator(); iterator.hasNext();) {
++ final Long2ByteMap.Entry entry = iterator.next();
++ final long key = entry.getLongKey();
++ final int newLevel = convertBetweenTicketLevels((int)entry.getByteValue());
++
++ NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
++ if (current == null && newLevel > MAX_TICKET_LEVEL) {
++ // not loaded and it shouldn't be loaded!
++ iterator.remove();
++ continue;
++ }
++
++ final int currentLevel = current == null ? MAX_TICKET_LEVEL + 1 : current.getCurrentTicketLevel();
++ if (currentLevel == newLevel) {
++ // nothing to do
++ iterator.remove();
++ continue;
++ }
++
++ if (current == null) {
++ // must create
++ current = ChunkHolderManager.this.createChunkHolder(key);
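++ // the holder table is single-writer multi-reader, so concurrent writers from different regions must serialise here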
++ synchronized (ChunkHolderManager.this.chunkHolders) {
++ ChunkHolderManager.this.chunkHolders.put(key, current);
++ }
++ current.updateTicketLevel(newLevel);
++ } else {
++ current.updateTicketLevel(newLevel);
++ }
++ }
++ }
++
++ @Override
++ protected void processSchedulingUpdates(final Long2ByteLinkedOpenHashMap updates, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus) {
++ final List<ChunkProgressionTask> prev = CURRENT_TICKET_UPDATE_SCHEDULING.get();
++ CURRENT_TICKET_UPDATE_SCHEDULING.set(scheduledTasks);
++ try {
++ for (final LongIterator iterator = updates.keySet().iterator(); iterator.hasNext();) {
++ final long key = iterator.nextLong();
++ final NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
++
++ if (current == null) {
++ throw new IllegalStateException("Expected chunk holder to be created");
++ }
++
++ current.processTicketLevelUpdate(scheduledTasks, changedFullStatus);
++ }
++ } finally {
++ CURRENT_TICKET_UPDATE_SCHEDULING.set(prev);
+ }
-+ super.rehash(newN);
+ }
+ };
-+
-+ protected final Delayed8WayDistancePropagator2D ticketLevelPropagator = new Delayed8WayDistancePropagator2D(
-+ (final long coordinate, final byte oldLevel, final byte newLevel) -> {
-+ ChunkHolderManager.this.ticketLevelUpdates.putAndMoveToLast(coordinate, convertBetweenTicketLevels(newLevel));
-+ }
-+ );
+ // function for converting between ticket levels and propagator levels and vice versa
+ // the problem is the ticket level propagator will propagate from a set source down to zero, whereas mojang expects
+ // levels to propagate from a set value up to a maximum value. so we need to convert the levels we put into the propagator
@@ -7299,40 +6951,68 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ return ChunkLevel.MAX_LEVEL - level + 1;
+ }
+
-+ public boolean hasTickets() {
-+ this.ticketLock.lock();
-+ try {
-+ return !this.tickets.isEmpty();
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
+ public String getTicketDebugString(final long coordinate) {
-+ this.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
+ try {
-+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(coordinate);
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
+
+ return tickets != null ? tickets.first().toString() : "no_ticket";
+ } finally {
-+ this.ticketLock.unlock();
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
+ }
+ }
+
+ public Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> getTicketsCopy() {
-+ this.ticketLock.lock();
-+ try {
-+ return this.tickets.clone();
-+ } finally {
-+ this.ticketLock.unlock();
++ final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> ret = new Long2ObjectOpenHashMap<>();
++ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
++ final int sectionShift = ChunkTaskScheduler.getChunkSystemLockShift();
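++ // group the chunk coordinates by lock section first, so that each section's tickets can be copied under
++ // a single lock acquisition instead of locking per chunk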
++ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
++ sections.computeIfAbsent(
++ CoordinateUtils.getChunkKey(
++ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
++ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
++ ),
++ (final long keyInMap) -> {
++ return new ArrayList<>();
++ }
++ ).add(coord);
+ }
++
++ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
++ iterator.hasNext();) {
++ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
++ final long sectionKey = entry.getLongKey();
++ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
++
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
++ try {
++ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(coord);
++ if (tickets == null) {
++ // removed before we acquired lock
++ continue;
++ }
++ ret.put(coord.key, new SortedArraySet<>(tickets));
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++ }
++
++ return ret;
+ }
+
+ public Collection<Plugin> getPluginChunkTickets(int x, int z) {
+ ImmutableList.Builder<Plugin> ret;
-+ this.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(x, z);
+ try {
-+ SortedArraySet<Ticket<?>> tickets = this.tickets.get(ChunkPos.asLong(x, z));
++ final long coordinate = CoordinateUtils.getChunkKey(x, z);
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
+
+ if (tickets == null) {
+ return Collections.emptyList();
@@ -7345,21 +7025,17 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+ }
+ } finally {
-+ this.ticketLock.unlock();
++ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ return ret.build();
+ }
+
-+ protected final int getPropagatedTicketLevel(final long coordinate) {
-+ return convertBetweenTicketLevels(this.ticketLevelPropagator.getLevel(coordinate));
-+ }
-+
+ protected final void updateTicketLevel(final long coordinate, final int ticketLevel) {
+ if (ticketLevel > ChunkLevel.MAX_LEVEL) {
-+ this.ticketLevelPropagator.removeSource(coordinate);
++ this.ticketLevelPropagator.removeSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
+ } else {
-+ this.ticketLevelPropagator.setSource(coordinate, convertBetweenTicketLevels(ticketLevel));
++ this.ticketLevelPropagator.setSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate), convertBetweenTicketLevels(ticketLevel));
+ }
+ }
+
@@ -7377,20 +7053,60 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
+ }
+
++ private void addExpireCount(final int chunkX, final int chunkZ) {
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ final int sectionShift = TickRegions.getRegionChunkShift();
++ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
++ chunkX >> sectionShift,
++ chunkZ >> sectionShift
++ ));
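++ // expire counts are grouped by region section so that tick() only needs to scan sections that
++ // actually contain expiring tickets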
++
++ this.sectionToChunkToExpireCount.computeIfAbsent(sectionKey, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
++ return new Long2IntOpenHashMap();
++ }).addTo(chunkKey, 1);
++ }
++
++ private void removeExpireCount(final int chunkX, final int chunkZ) {
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++
++ final int sectionShift = TickRegions.getRegionChunkShift();
++ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
++ chunkX >> sectionShift,
++ chunkZ >> sectionShift
++ ));
++
++ final Long2IntOpenHashMap removeCounts = this.sectionToChunkToExpireCount.get(sectionKey);
++ final int prevCount = removeCounts.addTo(chunkKey, -1);
++
++ if (prevCount == 1) {
++ removeCounts.remove(chunkKey);
++ if (removeCounts.isEmpty()) {
++ this.sectionToChunkToExpireCount.remove(sectionKey);
++ }
++ }
++ }
++
+ // supposed to return true if the ticket was added and did not replace another
+ // but, we always return false if the ticket cannot be added
+ public <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
-+ final long removeDelay = Math.max(0, type.timeout);
++ return this.addTicketAtLevel(type, chunk, level, identifier, true);
++ }
++
++ <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
++ final long removeDelay = type.timeout <= 0 ? NO_TIMEOUT_MARKER : type.timeout;
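++ // timeouts are stored as a relative remaining delay counted down in tick(), rather than as an absolute
++ // removal tick, since regions tick independently and there is no single tick counter to compare against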
+ if (level > MAX_TICKET_LEVEL) {
+ return false;
+ }
+
-+ this.ticketLock.lock();
-+ try {
-+ final long removeTick = removeDelay == 0 ? NO_TIMEOUT_MARKER : this.currentTick + removeDelay;
-+ final Ticket<T> ticket = new Ticket<>(type, level, identifier, removeTick);
++ final int chunkX = CoordinateUtils.getChunkX(chunk);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
++ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
++ final Ticket<T> ticket = new Ticket<>(type, level, identifier, removeDelay);
+
-+ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.computeIfAbsent(chunk, (final long keyInMap) -> {
++ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
++ try {
++ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.computeIfAbsent(chunkCoord, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
+ return SortedArraySet.create(4);
+ });
+
@@ -7399,30 +7115,18 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ final int levelAfter = getTicketLevelAt(ticketsAtChunk);
+
+ if (current != ticket) {
-+ final long oldRemovalTick = current.removalTick;
-+ if (removeTick != oldRemovalTick) {
-+ if (oldRemovalTick != NO_TIMEOUT_MARKER) {
-+ final Long2IntOpenHashMap removeCounts = this.removeTickToChunkExpireTicketCount.get(oldRemovalTick);
-+ final int prevCount = removeCounts.addTo(chunk, -1);
-+
-+ if (prevCount == 1) {
-+ removeCounts.remove(chunk);
-+ if (removeCounts.isEmpty()) {
-+ this.removeTickToChunkExpireTicketCount.remove(oldRemovalTick);
-+ }
-+ }
-+ }
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ this.removeTickToChunkExpireTicketCount.computeIfAbsent(removeTick, (final long keyInMap) -> {
-+ return new Long2IntOpenHashMap();
-+ }).addTo(chunk, 1);
++ final long oldRemoveDelay = current.removeDelay;
++ if (removeDelay != oldRemoveDelay) {
++ if (oldRemoveDelay != NO_TIMEOUT_MARKER && removeDelay == NO_TIMEOUT_MARKER) {
++ this.removeExpireCount(chunkX, chunkZ);
++ } else if (oldRemoveDelay == NO_TIMEOUT_MARKER) {
++ // since old != new, we have that NO_TIMEOUT_MARKER != new
++ this.addExpireCount(chunkX, chunkZ);
+ }
+ }
+ } else {
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ this.removeTickToChunkExpireTicketCount.computeIfAbsent(removeTick, (final long keyInMap) -> {
-+ return new Long2IntOpenHashMap();
-+ }).addTo(chunk, 1);
++ if (removeDelay != NO_TIMEOUT_MARKER) {
++ this.addExpireCount(chunkX, chunkZ);
+ }
+ }
+
@@ -7432,7 +7136,9 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+
+ return current == ticket;
+ } finally {
-+ this.ticketLock.unlock();
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
+ }
+ }
+
@@ -7445,77 +7151,95 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+
+ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
++ return this.removeTicketAtLevel(type, chunk, level, identifier, true);
++ }
++
++ <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
+ if (level > MAX_TICKET_LEVEL) {
+ return false;
+ }
+
-+ this.ticketLock.lock();
++ final int chunkX = CoordinateUtils.getChunkX(chunk);
++ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
++ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
++ final Ticket<T> probe = new Ticket<>(type, level, identifier, PROBE_MARKER);
++
++ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
+ try {
-+ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.get(chunk);
++ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.get(chunkCoord);
+ if (ticketsAtChunk == null) {
+ return false;
+ }
+
+ final int oldLevel = getTicketLevelAt(ticketsAtChunk);
-+ final Ticket<T> ticket = (Ticket<T>)ticketsAtChunk.removeAndGet(new Ticket<>(type, level, identifier, -2L));
++ final Ticket<T> ticket = (Ticket<T>)ticketsAtChunk.removeAndGet(probe);
+
+ if (ticket == null) {
+ return false;
+ }
+
-+ if (ticketsAtChunk.isEmpty()) {
-+ this.tickets.remove(chunk);
-+ }
-+
+ final int newLevel = getTicketLevelAt(ticketsAtChunk);
-+
-+ final long removeTick = ticket.removalTick;
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ final Long2IntOpenHashMap removeCounts = this.removeTickToChunkExpireTicketCount.get(removeTick);
-+ final int currCount = removeCounts.addTo(chunk, -1);
-+
-+ if (currCount == 1) {
-+ removeCounts.remove(chunk);
-+ if (removeCounts.isEmpty()) {
-+ this.removeTickToChunkExpireTicketCount.remove(removeTick);
-+ }
++ // we should not change the ticket levels while the target region may be ticking
++ if (oldLevel != newLevel) {
++ // Delay unload chunk patch originally by Aikar, updated to 1.20 by jpenilla
++ // these days, the patch is mostly useful to keep chunks ticking when players teleport
++ // so that their pets can teleport with them as well.
++ final long delayTimeout = this.world.paperConfig().chunks.delayChunkUnloadsBy.ticks();
++ final TicketType<ChunkPos> toAdd;
++ final long timeout;
++ if (type == RegionizedPlayerChunkLoader.REGION_PLAYER_TICKET && delayTimeout > 0) {
++ toAdd = TicketType.DELAY_UNLOAD;
++ timeout = delayTimeout;
++ } else {
++ toAdd = TicketType.UNKNOWN;
++ // always expect the UNKNOWN timeout to be > 1, but clamp just in case
++ timeout = Math.max(1, toAdd.timeout);
++ }
++ final Ticket<ChunkPos> unknownTicket = new Ticket<>(toAdd, level, new ChunkPos(chunk), timeout);
++ if (ticketsAtChunk.add(unknownTicket)) {
++ this.addExpireCount(chunkX, chunkZ);
++ } else {
++ throw new IllegalStateException("Should have been able to add " + unknownTicket + " to " + ticketsAtChunk);
+ }
+ }
+
-+ if (oldLevel != newLevel) {
-+ this.updateTicketLevel(chunk, newLevel);
++ final long removeDelay = ticket.removeDelay;
++ if (removeDelay != NO_TIMEOUT_MARKER) {
++ this.removeExpireCount(chunkX, chunkZ);
+ }
+
+ return true;
+ } finally {
-+ this.ticketLock.unlock();
++ if (ticketLock != null) {
++ this.ticketLockArea.unlock(ticketLock);
++ }
+ }
+ }
+
+ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
+ public <T, V> void addAndRemoveTickets(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
-+ this.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
+ try {
-+ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier);
-+ this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier);
++ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
++ this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false);
+ } finally {
-+ this.ticketLock.unlock();
++ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+
+ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
+ public <T, V> boolean addIfRemovedTicket(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
-+ this.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
+ try {
-+ if (this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier)) {
-+ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier);
++ if (this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false)) {
++ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
+ return true;
+ }
+ return false;
+ } finally {
-+ this.ticketLock.unlock();
++ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+
@@ -7524,49 +7248,113 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ return;
+ }
+
-+ this.ticketLock.lock();
-+ try {
-+ for (final LongIterator iterator = new LongArrayList(this.tickets.keySet()).longIterator(); iterator.hasNext();) {
-+ final long chunk = iterator.nextLong();
++ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
++ final int sectionShift = ChunkTaskScheduler.getChunkSystemLockShift();
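++ // as in getTicketsCopy(): group the coordinates by lock section and remove tickets per section while
++ // holding only that section's lock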
++ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
++ sections.computeIfAbsent(
++ CoordinateUtils.getChunkKey(
++ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
++ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
++ ),
++ (final long keyInMap) -> {
++ return new ArrayList<>();
++ }
++ ).add(coord);
++ }
+
-+ this.removeTicketAtLevel(ticketType, chunk, ticketLevel, ticketIdentifier);
++ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
++ iterator.hasNext();) {
++ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
++ final long sectionKey = entry.getLongKey();
++ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
++
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
++ try {
++ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
++ this.removeTicketAtLevel(ticketType, coord.key, ticketLevel, ticketIdentifier, false);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
+ }
-+ } finally {
-+ this.ticketLock.unlock();
+ }
+ }
+
+ public void tick() {
-+ TickThread.ensureTickThread("Cannot tick ticket manager off-main");
++ final int sectionShift = TickRegions.getRegionChunkShift();
+
-+ this.ticketLock.lock();
-+ try {
-+ final long tick = ++this.currentTick;
++ final Predicate<Ticket<?>> expireNow = (final Ticket<?> ticket) -> {
++ if (ticket.removeDelay == NO_TIMEOUT_MARKER) {
++ return false;
++ }
++ return --ticket.removeDelay <= 0L;
++ };
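++ // only sections known to contain expiring tickets are visited, and each section is locked before its
++ // expire counts are inspected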
+
-+ final Long2IntOpenHashMap toRemove = this.removeTickToChunkExpireTicketCount.remove(tick);
++ for (final Iterator<RegionFileIOThread.ChunkCoordinate> iterator = this.sectionToChunkToExpireCount.keySet().iterator(); iterator.hasNext();) {
++ final RegionFileIOThread.ChunkCoordinate section = iterator.next();
++ final long sectionKey = section.key;
+
-+ if (toRemove == null) {
-+ return;
++ if (!this.sectionToChunkToExpireCount.containsKey(section)) {
++ // removed concurrently
++ continue;
+ }
+
-+ final Predicate<Ticket<?>> expireNow = (final Ticket<?> ticket) -> {
-+ return ticket.removalTick == tick;
-+ };
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
++ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
++ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
++ );
+
-+ for (final LongIterator iterator = toRemove.keySet().longIterator(); iterator.hasNext();) {
-+ final long chunk = iterator.nextLong();
-+
-+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(chunk);
-+ tickets.removeIf(expireNow);
-+ if (tickets.isEmpty()) {
-+ this.tickets.remove(chunk);
-+ this.ticketLevelPropagator.removeSource(chunk);
-+ } else {
-+ this.ticketLevelPropagator.setSource(chunk, convertBetweenTicketLevels(tickets.first().getTicketLevel()));
++ try {
++ final Long2IntOpenHashMap chunkToExpireCount = this.sectionToChunkToExpireCount.get(section);
++ if (chunkToExpireCount == null) {
++ // lost to some race
++ continue;
+ }
++
++ for (final Iterator<Long2IntMap.Entry> iterator1 = chunkToExpireCount.long2IntEntrySet().fastIterator(); iterator1.hasNext();) {
++ final Long2IntMap.Entry entry = iterator1.next();
++
++ final long chunkKey = entry.getLongKey();
++ final int expireCount = entry.getIntValue();
++
++ final RegionFileIOThread.ChunkCoordinate chunk = new RegionFileIOThread.ChunkCoordinate(chunkKey);
++
++ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(chunk);
++ final int levelBefore = getTicketLevelAt(tickets);
++
++ final int sizeBefore = tickets.size();
++ tickets.removeIf(expireNow);
++ final int sizeAfter = tickets.size();
++ final int levelAfter = getTicketLevelAt(tickets);
++
++ if (tickets.isEmpty()) {
++ this.tickets.remove(chunk);
++ }
++ if (levelBefore != levelAfter) {
++ this.updateTicketLevel(chunkKey, levelAfter);
++ }
++
++ final int newExpireCount = expireCount - (sizeBefore - sizeAfter);
++
++ if (newExpireCount == expireCount) {
++ continue;
++ }
++
++ if (newExpireCount != 0) {
++ entry.setValue(newExpireCount);
++ } else {
++ iterator1.remove();
++ }
++ }
++
++ if (chunkToExpireCount.isEmpty()) {
++ this.sectionToChunkToExpireCount.remove(section);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
+ }
-+ } finally {
-+ this.ticketLock.unlock();
+ }
+
+ this.processTicketUpdates();
@@ -7618,10 +7406,13 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+
+ private NewChunkHolder getOrCreateChunkHolder(final long position) {
-+ if (!this.ticketLock.isHeldByCurrentThread()) {
++ final int chunkX = CoordinateUtils.getChunkX(position);
++ final int chunkZ = CoordinateUtils.getChunkZ(position);
++
++ if (!this.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
+ throw new IllegalStateException("Must hold ticket level update lock!");
+ }
-+ if (!this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
++ if (!this.taskScheduler.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
+ throw new IllegalStateException("Must hold scheduler lock!!");
+ }
+
@@ -7634,12 +7425,14 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+
+ current = this.createChunkHolder(position);
-+ this.chunkHolders.put(position, current);
++ synchronized (this.chunkHolders) {
++ this.chunkHolders.put(position, current);
++ }
+
+ return current;
+ }
+
-+ private long entityLoadCounter;
++ private final AtomicLong entityLoadCounter = new AtomicLong();
+
+ public ChunkEntitySlices getOrCreateEntityChunk(final int chunkX, final int chunkZ, final boolean transientChunk) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create entity chunk off-main");
@@ -7652,13 +7445,12 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+
+ final AtomicBoolean isCompleted = new AtomicBoolean();
+ final Thread waiter = Thread.currentThread();
-+ final Long entityLoadId;
++ final Long entityLoadId = Long.valueOf(this.entityLoadCounter.getAndIncrement());
+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
-+ this.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ);
+ try {
-+ entityLoadId = Long.valueOf(this.entityLoadCounter++);
+ this.addTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
-+ this.taskScheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ);
+ try {
+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
+ if ((ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
@@ -7682,10 +7474,10 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+ }
+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
++ this.taskScheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ } finally {
-+ this.ticketLock.unlock();
++ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ if (loadTask != null) {
@@ -7727,7 +7519,7 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ return null;
+ }
+
-+ private long poiLoadCounter;
++ private final AtomicLong poiLoadCounter = new AtomicLong();
+
+ public PoiChunk loadPoiChunk(final int chunkX, final int chunkZ) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create poi chunk off-main");
@@ -7744,13 +7536,13 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ final AtomicReference<PoiChunk> completed = new AtomicReference<>();
+ final AtomicBoolean isCompleted = new AtomicBoolean();
+ final Thread waiter = Thread.currentThread();
-+ final Long poiLoadId;
++ final Long poiLoadId = Long.valueOf(this.poiLoadCounter.getAndIncrement());
+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
-+ this.ticketLock.lock();
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
+ try {
-+ poiLoadId = Long.valueOf(this.poiLoadCounter++);
++ // Folia - use area based lock to reduce contention
+ this.addTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
-+ this.taskScheduler.schedulingLock.lock();
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
+ try {
+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
+ if (current.isPoiChunkLoaded()) {
@@ -7769,10 +7561,10 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ poiLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
+ }
+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
++ this.taskScheduler.schedulingLockArea.unlock(schedulingLock); // Folia - use area based lock to reduce contention
+ }
+ } finally {
-+ this.ticketLock.unlock();
++ this.ticketLockArea.unlock(ticketLock); // Folia - use area based lock to reduce contention
+ }
+
+ if (loadTask != null) {
@@ -7825,14 +7617,14 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+ }
+
-+ final ReferenceLinkedOpenHashSet<NewChunkHolder> unloadQueue = new ReferenceLinkedOpenHashSet<>();
-+
+ private void removeChunkHolder(final NewChunkHolder holder) {
+ holder.killed = true;
+ holder.vanillaChunkHolder.onChunkRemove();
+ this.autoSaveQueue.remove(holder);
+ ChunkSystem.onChunkHolderDelete(this.world, holder.vanillaChunkHolder);
-+ this.chunkHolders.remove(CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ));
++ synchronized (this.chunkHolders) {
++ this.chunkHolders.remove(CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ));
++ }
+ }
+
+ // note: never call while inside the chunk system, this will absolutely break everything
@@ -7842,87 +7634,149 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
+ throw new IllegalStateException("Cannot unload chunks recursively");
+ }
-+ if (this.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot hold ticket update lock while calling processUnloads");
-+ }
-+ if (this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot hold scheduling lock while calling processUnloads");
++ final int sectionShift = this.unloadQueue.coordinateShift; // sectionShift <= lock shift
++ final List<ChunkQueue.SectionToUnload> unloadSectionsForRegion = this.unloadQueue.retrieveForAllRegions();
++ int unloadCountTentative = 0;
++ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
++ final ChunkQueue.UnloadSection section
++ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
++
++ if (section == null) {
++ // removed concurrently
++ continue;
++ }
++
++ // technically, reading the size field is unsafe and it may be incorrect.
++ // We assume that the error here cumulatively goes away over many ticks; if it did not, then it would be
++ // possible for chunks to never unload, or to not unload fast enough.
++ unloadCountTentative += section.chunks.size();
+ }
+
-+ final List<NewChunkHolder.UnloadState> unloadQueue;
-+ final List<ChunkProgressionTask> scheduleList = new ArrayList<>();
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
++ if (unloadCountTentative <= 0) {
++ // no work to do
++ return;
++ }
++
++ // Note: the previous behaviour of processing ticket updates while holding the lock has been dropped here, as it was racy.
++ // However, we do need to process updates here so that any ticket addition that is synchronised before this call is not missed.
++ this.processTicketUpdates();
++
++ final int toUnloadCount = Math.max(50, (int)(unloadCountTentative * 0.05));
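++ // unload at most 5% of the queued chunks (with a floor of 50) per call, so that unload work is spread across ticks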
++ int processedCount = 0;
++
++ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
++ final List<NewChunkHolder> stage1 = new ArrayList<>();
++ final List<NewChunkHolder.UnloadState> stage2 = new ArrayList<>();
++
++ final int sectionLowerX = sectionRef.sectionX() << sectionShift;
++ final int sectionLowerZ = sectionRef.sectionZ() << sectionShift;
++
++ // stage 1: set up for stage 2 while holding critical locks
++ ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
+ try {
-+ if (this.unloadQueue.isEmpty()) {
-+ return;
-+ }
-+ // in order to ensure all chunks in the unload queue do not have a pending ticket level update,
-+ // process them now
-+ this.processTicketUpdates(false, false, scheduleList);
-+ unloadQueue = new ArrayList<>((int)(this.unloadQueue.size() * 0.05) + 1);
++ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ final ChunkQueue.UnloadSection section
++ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
+
-+ final int unloadCount = Math.max(50, (int)(this.unloadQueue.size() * 0.05));
-+ for (int i = 0; i < unloadCount && !this.unloadQueue.isEmpty(); ++i) {
-+ final NewChunkHolder chunkHolder = this.unloadQueue.removeFirst();
-+ if (chunkHolder.isSafeToUnload() != null) {
-+ LOGGER.error("Chunkholder " + chunkHolder + " is not safe to unload but is inside the unload queue?");
++ if (section == null) {
++ // removed concurrently
+ continue;
+ }
-+ final NewChunkHolder.UnloadState state = chunkHolder.unloadStage1();
-+ if (state == null) {
-+ // can unload immediately
-+ this.removeChunkHolder(chunkHolder);
-+ continue;
-+ }
-+ unloadQueue.add(state);
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ // schedule tasks, we can't let processTicketUpdates do this because we call it holding the schedule lock
-+ for (int i = 0, len = scheduleList.size(); i < len; ++i) {
-+ scheduleList.get(i).schedule();
-+ }
+
-+ final List<NewChunkHolder> toRemove = new ArrayList<>(unloadQueue.size());
++ // collect the holders to run stage 1 on
++ final int sectionCount = section.chunks.size();
+
-+ final Boolean before = this.blockTicketUpdates();
-+ try {
-+ for (int i = 0, len = unloadQueue.size(); i < len; ++i) {
-+ final NewChunkHolder.UnloadState state = unloadQueue.get(i);
-+ final NewChunkHolder holder = state.holder();
++ if ((sectionCount + processedCount) <= toUnloadCount) {
++ // we can just drain the entire section
+
-+ holder.unloadStage2(state);
-+ toRemove.add(holder);
-+ }
-+ } finally {
-+ this.unblockTicketUpdates(before);
-+ }
++ for (final LongIterator iterator = section.chunks.iterator(); iterator.hasNext();) {
++ final NewChunkHolder holder = this.chunkHolders.get(iterator.nextLong());
++ if (holder == null) {
++ throw new IllegalStateException();
++ }
++ stage1.add(holder);
++ }
+
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ for (int i = 0, len = toRemove.size(); i < len; ++i) {
-+ final NewChunkHolder holder = toRemove.get(i);
-+
-+ if (holder.unloadStage3()) {
-+ this.removeChunkHolder(holder);
++ // remove section
++ this.unloadQueue.removeSection(sectionRef.sectionX(), sectionRef.sectionZ());
+ } else {
-+ // add cooldown so the next unload check is not immediately next tick
-+ this.addTicketAtLevel(TicketType.UNLOAD_COOLDOWN, holder.chunkX, holder.chunkZ, MAX_TICKET_LEVEL, Unit.INSTANCE);
++ // processedCount + len = toUnloadCount
++ // we cannot drain the entire section
++ for (int i = 0, len = toUnloadCount - processedCount; i < len; ++i) {
++ final NewChunkHolder holder = this.chunkHolders.get(section.chunks.removeFirstLong());
++ if (holder == null) {
++ throw new IllegalStateException();
++ }
++ stage1.add(holder);
++ }
+ }
++
++ // run stage 1
++ for (int i = 0, len = stage1.size(); i < len; ++i) {
++ final NewChunkHolder chunkHolder = stage1.get(i);
++ if (chunkHolder.isSafeToUnload() != null) {
++ LOGGER.error("Chunkholder " + chunkHolder + " is not safe to unload but is inside the unload queue?");
++ continue;
++ }
++ final NewChunkHolder.UnloadState state = chunkHolder.unloadStage1();
++ if (state == null) {
++ // can unload immediately
++ this.removeChunkHolder(chunkHolder);
++ continue;
++ }
++ stage2.add(state);
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
+ }
+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ // stage 2: invoke expensive unload logic, designed to run without locks thanks to stage 1
++ final List<NewChunkHolder> stage3 = new ArrayList<>(stage2.size());
++
++ final Boolean before = this.blockTicketUpdates();
++ try {
++ for (int i = 0, len = stage2.size(); i < len; ++i) {
++ final NewChunkHolder.UnloadState state = stage2.get(i);
++ final NewChunkHolder holder = state.holder();
++
++ holder.unloadStage2(state);
++ stage3.add(holder);
++ }
++ } finally {
++ this.unblockTicketUpdates(before);
++ }
++
++ // stage 3: actually attempt to remove the chunk holders
++ ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
++ try {
++ for (int i = 0, len = stage3.size(); i < len; ++i) {
++ final NewChunkHolder holder = stage3.get(i);
++
++ if (holder.unloadStage3()) {
++ this.removeChunkHolder(holder);
++ } else {
++ // add cooldown so the next unload check is not immediately next tick
++ this.addTicketAtLevel(TicketType.UNLOAD_COOLDOWN, CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ), MAX_TICKET_LEVEL, Unit.INSTANCE, false);
++ }
++ }
++ } finally {
++ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
++ }
++ } finally {
++ this.ticketLockArea.unlock(ticketLock);
++ }
++
++ processedCount += stage1.size();
++
++ if (processedCount >= toUnloadCount) {
++ break;
+ }
-+ } finally {
-+ this.ticketLock.unlock();
+ }
+ }
+
@@ -7984,87 +7838,42 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+ }
+
-+ private final MultiThreadedQueue<TicketOperation<?, ?>> delayedTicketUpdates = new MultiThreadedQueue<>();
-+
-+ // note: MUST hold ticket lock, otherwise operation ordering is lost
-+ private boolean drainTicketUpdates() {
++ private boolean processTicketOp(TicketOperation<?, ?> operation) {
+ boolean ret = false;
-+
-+ TicketOperation<?, ?> operation;
-+ while ((operation = this.delayedTicketUpdates.poll()) != null) {
-+ switch (operation.op) {
-+ case ADD: {
-+ ret |= this.addTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
-+ break;
-+ }
-+ case REMOVE: {
-+ ret |= this.removeTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
-+ break;
-+ }
-+ case ADD_IF_REMOVED: {
-+ ret |= this.addIfRemovedTicket(
-+ operation.chunkCoord,
-+ operation.ticketType, operation.ticketLevel, operation.identifier,
-+ operation.ticketType2, operation.ticketLevel2, operation.identifier2
-+ );
-+ break;
-+ }
-+ case ADD_AND_REMOVE: {
-+ ret = true;
-+ this.addAndRemoveTickets(
-+ operation.chunkCoord,
-+ operation.ticketType, operation.ticketLevel, operation.identifier,
-+ operation.ticketType2, operation.ticketLevel2, operation.identifier2
-+ );
-+ break;
-+ }
++ switch (operation.op) {
++ case ADD: {
++ ret |= this.addTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
++ break;
++ }
++ case REMOVE: {
++ ret |= this.removeTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
++ break;
++ }
++ case ADD_IF_REMOVED: {
++ ret |= this.addIfRemovedTicket(
++ operation.chunkCoord,
++ operation.ticketType, operation.ticketLevel, operation.identifier,
++ operation.ticketType2, operation.ticketLevel2, operation.identifier2
++ );
++ break;
++ }
++ case ADD_AND_REMOVE: {
++ ret = true;
++ this.addAndRemoveTickets(
++ operation.chunkCoord,
++ operation.ticketType, operation.ticketLevel, operation.identifier,
++ operation.ticketType2, operation.ticketLevel2, operation.identifier2
++ );
++ break;
+ }
+ }
+
+ return ret;
+ }
+
-+ public Boolean tryDrainTicketUpdates() {
-+ boolean ret = false;
-+ for (;;) {
-+ final boolean acquired = this.ticketLock.tryLock();
-+ try {
-+ if (!acquired) {
-+ return ret ? Boolean.TRUE : null;
-+ }
-+
-+ ret |= this.drainTicketUpdates();
-+ } finally {
-+ if (acquired) {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+ if (this.delayedTicketUpdates.isEmpty()) {
-+ return Boolean.valueOf(ret);
-+ } // else: try to re-acquire
-+ }
-+ }
-+
-+ public void pushDelayedTicketUpdate(final TicketOperation<?, ?> operation) {
-+ this.delayedTicketUpdates.add(operation);
-+ }
-+
-+ public void pushDelayedTicketUpdates(final Collection<TicketOperation<?, ?>> operations) {
-+ this.delayedTicketUpdates.addAll(operations);
-+ }
-+
-+ public Boolean tryProcessTicketUpdates() {
-+ final boolean acquired = this.ticketLock.tryLock();
-+ try {
-+ if (!acquired) {
-+ return null;
-+ }
-+
-+ return Boolean.valueOf(this.processTicketUpdates(false, true, null));
-+ } finally {
-+ if (acquired) {
-+ this.ticketLock.unlock();
-+ }
++ public void performTicketUpdates(final Collection<TicketOperation<?, ?>> operations) {
++ for (final TicketOperation<?, ?> operation : operations) {
++ this.processTicketOp(operation);
+ }
+ }
+
@@ -8097,12 +7906,6 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
+ throw new IllegalStateException("Cannot update ticket level while unloading chunks or updating entity manager");
+ }
-+ if (checkLocks && this.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Illegal recursive processTicketUpdates!");
-+ }
-+ if (checkLocks && this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot update ticket levels from a scheduler context!");
-+ }
+
+ List<NewChunkHolder> changedFullStatus = null;
+
@@ -8112,80 +7915,16 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ final boolean canProcessFullUpdates = processFullUpdates & isTickThread;
+ final boolean canProcessScheduling = scheduledTasks == null;
+
-+ this.ticketLock.lock();
-+ try {
-+ this.drainTicketUpdates();
-+
-+ final boolean levelsUpdated = this.ticketLevelPropagator.propagateUpdates();
-+ if (levelsUpdated) {
-+ // Unlike CB, ticket level updates cannot happen recursively. Thank god.
-+ if (!this.ticketLevelUpdates.isEmpty()) {
-+ ret = true;
-+
-+ // first the necessary chunkholders must be created, so just update the ticket levels
-+ for (final Iterator<Long2IntMap.Entry> iterator = this.ticketLevelUpdates.long2IntEntrySet().fastIterator(); iterator.hasNext();) {
-+ final Long2IntMap.Entry entry = iterator.next();
-+ final long key = entry.getLongKey();
-+ final int newLevel = entry.getIntValue();
-+
-+ NewChunkHolder current = this.chunkHolders.get(key);
-+ if (current == null && newLevel > MAX_TICKET_LEVEL) {
-+ // not loaded and it shouldn't be loaded!
-+ iterator.remove();
-+ continue;
-+ }
-+
-+ final int currentLevel = current == null ? MAX_TICKET_LEVEL + 1 : current.getCurrentTicketLevel();
-+ if (currentLevel == newLevel) {
-+ // nothing to do
-+ iterator.remove();
-+ continue;
-+ }
-+
-+ if (current == null) {
-+ // must create
-+ current = this.createChunkHolder(key);
-+ this.chunkHolders.put(key, current);
-+ current.updateTicketLevel(newLevel);
-+ } else {
-+ current.updateTicketLevel(newLevel);
-+ }
-+ }
-+
-+ if (scheduledTasks == null) {
-+ scheduledTasks = new ArrayList<>();
-+ }
-+ changedFullStatus = new ArrayList<>();
-+
-+ // allow the chunkholders to process ticket level updates without needing to acquire the schedule lock every time
-+ final List<ChunkProgressionTask> prev = CURRENT_TICKET_UPDATE_SCHEDULING.get();
-+ CURRENT_TICKET_UPDATE_SCHEDULING.set(scheduledTasks);
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ for (final Iterator<Long2IntMap.Entry> iterator = this.ticketLevelUpdates.long2IntEntrySet().fastIterator(); iterator.hasNext();) {
-+ final Long2IntMap.Entry entry = iterator.next();
-+ final long key = entry.getLongKey();
-+ final NewChunkHolder current = this.chunkHolders.get(key);
-+
-+ if (current == null) {
-+ throw new IllegalStateException("Expected chunk holder to be created");
-+ }
-+
-+ current.processTicketLevelUpdate(scheduledTasks, changedFullStatus);
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ CURRENT_TICKET_UPDATE_SCHEDULING.set(prev);
-+ }
-+
-+ this.ticketLevelUpdates.clear();
-+ }
++ if (this.ticketLevelPropagator.hasPendingUpdates()) {
++ if (scheduledTasks == null) {
++ scheduledTasks = new ArrayList<>();
+ }
-+ } finally {
-+ this.ticketLock.unlock();
++ changedFullStatus = new ArrayList<>();
++
++ ret |= this.ticketLevelPropagator.performUpdates(
++ this.ticketLockArea, this.taskScheduler.schedulingLockArea,
++ scheduledTasks, changedFullStatus
++ );
+ }
+
+ if (changedFullStatus != null) {
@@ -8229,43 +7968,7 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ }
+
+ public JsonObject getDebugJsonForWatchdog() {
-+ // try and detect any potential deadlock that would require us to read unlocked
-+ try {
-+ if (this.ticketLock.tryLock(10, TimeUnit.SECONDS)) {
-+ try {
-+ if (this.taskScheduler.schedulingLock.tryLock(10, TimeUnit.SECONDS)) {
-+ try {
-+ return this.getDebugJsonNoLock();
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+ } catch (final InterruptedException ignore) {}
-+
-+ LOGGER.error("Failed to acquire ticket and scheduling lock before timeout for world " + this.world.getWorld().getName());
-+
-+ // because we read without locks, it may throw exceptions for fastutil maps
-+ // so just try until it works...
-+ Throwable lastException = null;
-+ for (int count = 0;count < 1000;++count) {
-+ try {
-+ return this.getDebugJsonNoLock();
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr) {
-+ lastException = thr;
-+ Thread.yield();
-+ LockSupport.parkNanos(10_000L);
-+ }
-+ }
-+
-+ // failed, return
-+ LOGGER.error("Failed to retrieve debug json for watchdog thread without locking", lastException);
-+ return null;
++ return this.getDebugJsonNoLock();
+ }
+
+ private JsonObject getDebugJsonNoLock() {
@@ -8274,12 +7977,29 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+
+ final JsonArray unloadQueue = new JsonArray();
+ ret.add("unload_queue", unloadQueue);
-+ for (final NewChunkHolder holder : this.unloadQueue) {
-+ final JsonObject coordinate = new JsonObject();
-+ unloadQueue.add(coordinate);
++ ret.addProperty("lock_shift", Integer.valueOf(ChunkTaskScheduler.getChunkSystemLockShift()));
++ ret.addProperty("ticket_shift", Integer.valueOf(ThreadedTicketLevelPropagator.SECTION_SHIFT));
++ ret.addProperty("region_shift", Integer.valueOf(TickRegions.getRegionChunkShift()));
++ for (final ChunkQueue.SectionToUnload section : this.unloadQueue.retrieveForAllRegions()) {
++ final JsonObject sectionJson = new JsonObject();
++ unloadQueue.add(sectionJson);
++ sectionJson.addProperty("sectionX", section.sectionX());
++ sectionJson.addProperty("sectionZ", section.sectionX());
++ sectionJson.addProperty("order", section.order());
+
-+ coordinate.addProperty("chunkX", Integer.valueOf(holder.chunkX));
-+ coordinate.addProperty("chunkZ", Integer.valueOf(holder.chunkZ));
++ final JsonArray coordinates = new JsonArray();
++ sectionJson.add("coordinates", coordinates);
++
++ final ChunkQueue.UnloadSection actualSection = this.unloadQueue.getSectionUnsynchronized(section.sectionX(), section.sectionZ());
++ for (final LongIterator iterator = actualSection.chunks.iterator(); iterator.hasNext();) {
++ final long coordinate = iterator.nextLong();
++
++ final JsonObject coordinateJson = new JsonObject();
++ coordinates.add(coordinateJson);
++
++ coordinateJson.addProperty("chunkX", Integer.valueOf(CoordinateUtils.getChunkX(coordinate)));
++ coordinateJson.addProperty("chunkZ", Integer.valueOf(CoordinateUtils.getChunkZ(coordinate)));
++ }
+ }
+
+ final JsonArray holders = new JsonArray();
@@ -8289,6 +8009,8 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ holders.add(holder.getDebugJson());
+ }
+
++ // TODO
++ /*
+ final JsonArray removeTickToChunkExpireTicketCount = new JsonArray();
+ ret.add("remove_tick_to_chunk_expire_ticket_count", removeTickToChunkExpireTicketCount);
+
@@ -8343,33 +8065,13 @@ index 0000000000000000000000000000000000000000..4054bf71486734d722a6a3c7b0b46381
+ ticketSerialized.addProperty("remove_tick", Long.valueOf(ticket.removalTick));
+ }
+ }
++ */
+
+ return ret;
+ }
+
+ public JsonObject getDebugJson() {
-+ final List<ChunkProgressionTask> scheduleList = new ArrayList<>();
-+ try {
-+ final JsonObject ret;
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ this.processTicketUpdates(false, false, scheduleList);
-+ ret = this.getDebugJsonNoLock();
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ return ret;
-+ } finally {
-+ // schedule tasks, we can't let processTicketUpdates do this because we call it holding the schedule lock
-+ for (int i = 0, len = scheduleList.size(); i < len; ++i) {
-+ scheduleList.get(i).schedule();
-+ }
-+ }
++ return this.getDebugJsonNoLock(); // Folia - use area based lock to reduce contention
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
@@ -8561,14 +8263,15 @@ index 0000000000000000000000000000000000000000..53ddd7e9ac05e6a9eb809f329796e6d4
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
new file mode 100644
-index 0000000000000000000000000000000000000000..34dc2153e90a29bc9102d9497c3c53b5de15508e
+index 0000000000000000000000000000000000000000..e7fb084ddb88ab62f1d493a999cc82b9258d275e
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
-@@ -0,0 +1,483 @@
+@@ -0,0 +1,484 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import ca.spottedleaf.dataconverter.minecraft.MCDataConverter;
+import ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry;
@@ -8648,7 +8351,7 @@ index 0000000000000000000000000000000000000000..34dc2153e90a29bc9102d9497c3c53b5
+
+ // NOTE: it is IMPOSSIBLE for getOrLoadEntityData/getOrLoadPoiData to complete synchronously, because
+ // they must schedule a task to off main or to on main to complete
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ if (this.scheduled) {
+ throw new IllegalStateException("schedule() called twice");
@@ -8674,7 +8377,7 @@ index 0000000000000000000000000000000000000000..34dc2153e90a29bc9102d9497c3c53b5
+ this.entityLoadTask = entityLoadTask;
+ this.poiLoadTask = poiLoadTask;
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ if (entityLoadTask != null) {
@@ -8692,14 +8395,14 @@ index 0000000000000000000000000000000000000000..34dc2153e90a29bc9102d9497c3c53b5
+ public void cancel() {
+ // must be before load task access, so we can synchronise with the writes to the fields
+ final boolean scheduled;
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
-+ // fix cancellation of chunk load task - must read field here, as it may be written later conucrrently -
++ // must read field here, as it may be written later concurrently -
+ // we need to know if we scheduled _before_ cancellation
+ scheduled = this.scheduled;
+ this.cancelled = true;
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ /*
@@ -9159,17 +8862,184 @@ index 0000000000000000000000000000000000000000..322675a470eacbf0e5452f4009c643f2
+ ", status: " + this.getTargetStatus().toString() + ", scheduled: " + this.isScheduled() + "}";
+ }
+}
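schedule() and cancel() above coordinate through the same per-area lock: both methods only write `scheduled` and `cancelled` while holding it, so cancel() reliably learns whether scheduling committed first. A minimal sketch of that ordering, with a plain ReentrantLock standing in for the area lock (ToyTask is hypothetical):

    import java.util.concurrent.locks.ReentrantLock;

    // Both flags are written only under `lock`, so the two racing calls
    // always agree on which one happened first.
    final class ToyTask {
        private final ReentrantLock lock = new ReentrantLock();
        private boolean scheduled;
        private boolean cancelled;

        void schedule() {
            this.lock.lock();
            try {
                if (this.cancelled) {
                    return; // lost the race: never start the work
                }
                this.scheduled = true;
                // ... submit the actual work here ...
            } finally {
                this.lock.unlock();
            }
        }

        boolean cancel() {
            final boolean wasScheduled;
            this.lock.lock();
            try {
                wasScheduled = this.scheduled; // must read before setting cancelled
                this.cancelled = true;
            } finally {
                this.lock.unlock();
            }
            return !wasScheduled; // only a not-yet-scheduled task can be fully cancelled
        }
    }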
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..4cc1b3ba6d093a9683dbd8b7fe76106ae391e019
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkQueue.java
+@@ -0,0 +1,160 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import it.unimi.dsi.fastutil.HashCommon;
++import it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet;
++import java.util.ArrayList;
++import java.util.List;
++import java.util.Map;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.atomic.AtomicLong;
++
++public final class ChunkQueue {
++
++ public final int coordinateShift;
++ private final AtomicLong orderGenerator = new AtomicLong();
++ private final ConcurrentHashMap<Coordinate, UnloadSection> unloadSections = new ConcurrentHashMap<>();
++
++ /*
++ * Note: write operations do not occur in parallel for any given section.
++ * Note: coordinateShift <= region shift in order for retrieveForCurrentRegion() to function correctly
++ */
++
++ public ChunkQueue(final int coordinateShift) {
++ this.coordinateShift = coordinateShift;
++ }
++
++ public static record SectionToUnload(int sectionX, int sectionZ, Coordinate coord, long order, int count) {}
++
++ public List<SectionToUnload> retrieveForAllRegions() {
++ final List<SectionToUnload> ret = new ArrayList<>();
++
++ for (final Map.Entry<Coordinate, UnloadSection> entry : this.unloadSections.entrySet()) {
++ final Coordinate coord = entry.getKey();
++ final long key = coord.key;
++ final UnloadSection section = entry.getValue();
++ final int sectionX = Coordinate.x(key);
++ final int sectionZ = Coordinate.z(key);
++
++ ret.add(new SectionToUnload(sectionX, sectionZ, coord, section.order, section.chunks.size()));
++ }
++
++ ret.sort((final SectionToUnload s1, final SectionToUnload s2) -> {
++ return Long.compare(s1.order, s2.order);
++ });
++
++ return ret;
++ }
++
++ public UnloadSection getSectionUnsynchronized(final int sectionX, final int sectionZ) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ return this.unloadSections.get(coordinate);
++ }
++
++ public UnloadSection removeSection(final int sectionX, final int sectionZ) {
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ return this.unloadSections.remove(coordinate);
++ }
++
++ // write operation
++ public boolean addChunk(final int chunkX, final int chunkZ) {
++ final int shift = this.coordinateShift;
++ final int sectionX = chunkX >> shift;
++ final int sectionZ = chunkZ >> shift;
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final long chunkKey = Coordinate.key(chunkX, chunkZ);
++
++ UnloadSection section = this.unloadSections.get(coordinate);
++ if (section == null) {
++ section = new UnloadSection(this.orderGenerator.getAndIncrement());
++ // write operations do not occur in parallel for a given section
++ this.unloadSections.put(coordinate, section);
++ }
++
++ return section.chunks.add(chunkKey);
++ }
++
++ // write operation
++ public boolean removeChunk(final int chunkX, final int chunkZ) {
++ final int shift = this.coordinateShift;
++ final int sectionX = chunkX >> shift;
++ final int sectionZ = chunkZ >> shift;
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final long chunkKey = Coordinate.key(chunkX, chunkZ);
++
++ final UnloadSection section = this.unloadSections.get(coordinate);
++
++ if (section == null) {
++ return false;
++ }
++
++ if (!section.chunks.remove(chunkKey)) {
++ return false;
++ }
++
++ if (section.chunks.isEmpty()) {
++ this.unloadSections.remove(coordinate);
++ }
++
++ return true;
++ }
++
++ public static final class UnloadSection {
++
++ public final long order;
++ public final LongLinkedOpenHashSet chunks = new LongLinkedOpenHashSet();
++
++ public UnloadSection(final long order) {
++ this.order = order;
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
++ // is too large. So we should implement compareTo to help.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++ }
++}
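ChunkQueue.Coordinate packs two signed 32-bit coordinates into one long key: z in the high half, x in the low half, with `x & 0xFFFFFFFFL` masking off sign extension. A small self-contained round-trip check of that encoding (illustrative only, not part of the patch):

    // (int)key recovers x from the low 32 bits, key >>> 32 recovers z from the
    // high 32 bits; the mask keeps negative x values from clobbering the z half.
    public final class CoordinateKeyDemo {
        static long key(final int x, final int z) {
            return ((long)z << 32) | (x & 0xFFFFFFFFL);
        }

        public static void main(final String[] args) {
            final int[][] samples = { {0, 0}, {-1, 1}, {123456, -654321}, {Integer.MIN_VALUE, Integer.MAX_VALUE} };
            for (final int[] sample : samples) {
                final long key = key(sample[0], sample[1]);
                if ((int)key != sample[0] || (int)(key >>> 32) != sample[1]) {
                    throw new AssertionError(sample[0] + "," + sample[1]);
                }
            }
            System.out.println("coordinate key round-trip ok");
        }
    }

The mix-based hashCode and the compareTo (for treeified ConcurrentHashMap bins) then operate directly on this packed key.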
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
new file mode 100644
-index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b2c744abc
+index 0000000000000000000000000000000000000000..f975cb93716e137d973ff2f9011acdbef58859a2
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
-@@ -0,0 +1,774 @@
+@@ -0,0 +1,880 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.chunk.system.scheduling.queue.RadiusAwarePrioritisedExecutor;
@@ -9182,7 +9052,6 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+import net.minecraft.ReportedException;
+import io.papermc.paper.util.MCUtil;
+import net.minecraft.server.MinecraftServer;
-+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.FullChunkStatus;
+import net.minecraft.server.level.ServerLevel;
@@ -9203,7 +9072,6 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+import java.util.Objects;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
-+import java.util.concurrent.locks.ReentrantLock;
+import java.util.function.Consumer;
+
+public final class ChunkTaskScheduler {
@@ -9284,7 +9152,6 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+
+ private final PrioritisedThreadedTaskQueue mainThreadExecutor = new PrioritisedThreadedTaskQueue();
+
-+ final ReentrantLock schedulingLock = new ReentrantLock();
+ public final ChunkHolderManager chunkHolderManager;
+
+ static {
@@ -9355,6 +9222,72 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ }
+ }
+
++ // must be >= region shift (in paper, doesn't exist) and must be >= ticket propagator section shift
++ // it must be >= region shift since the regioniser assumes ticket updates do not occur in parallel for the region sections
++ // it must be >= ticket propagator section shift so that the ticket propagator can assume that owning a position implies owning
++ // the entire section
++ // we just take the max, as we want the smallest shift that satisfies these properties
++ private static final int LOCK_SHIFT = ThreadedTicketLevelPropagator.SECTION_SHIFT;
++ public static int getChunkSystemLockShift() {
++ return LOCK_SHIFT;
++ }
++
++ private static final int[] ACCESS_RADIUS_TABLE = new int[ChunkStatus.getStatusList().size()];
++ private static final int[] MAX_ACCESS_RADIUS_TABLE = new int[ACCESS_RADIUS_TABLE.length];
++ static {
++ Arrays.fill(ACCESS_RADIUS_TABLE, -1);
++ }
++
++ private static int getAccessRadius0(final ChunkStatus genStatus) {
++ if (genStatus == ChunkStatus.EMPTY) {
++ return 0;
++ }
++
++ final int radius = Math.max(genStatus.loadRange, genStatus.getRange());
++ int maxRange = radius;
++
++ for (int dist = 1; dist <= radius; ++dist) {
++ final ChunkStatus requiredNeighbourStatus = ChunkMap.getDependencyStatus(genStatus, radius);
++ final int rad = ACCESS_RADIUS_TABLE[requiredNeighbourStatus.getIndex()];
++ if (rad == -1) {
++ throw new IllegalStateException();
++ }
++
++ maxRange = Math.max(maxRange, dist + rad);
++ }
++
++ return maxRange;
++ }
++
++ private static int maxAccessRadius;
++
++ static {
++ final List<ChunkStatus> statuses = ChunkStatus.getStatusList();
++ for (int i = 0, len = statuses.size(); i < len; ++i) {
++ ACCESS_RADIUS_TABLE[i] = getAccessRadius0(statuses.get(i));
++ }
++ int max = 0;
++ for (int i = 0, len = statuses.size(); i < len; ++i) {
++ MAX_ACCESS_RADIUS_TABLE[i] = max = Math.max(ACCESS_RADIUS_TABLE[i], max);
++ }
++ maxAccessRadius = max;
++ }
++
++ public static int getMaxAccessRadius() {
++ return maxAccessRadius;
++ }
++
++ public static int getAccessRadius(final ChunkStatus genStatus) {
++ return ACCESS_RADIUS_TABLE[genStatus.getIndex()];
++ }
++
++ public static int getAccessRadius(final FullChunkStatus status) {
++ return (status.ordinal() - 1) + getAccessRadius(ChunkStatus.FULL);
++ }
++
++ final ReentrantAreaLock schedulingLockArea = new ReentrantAreaLock(getChunkSystemLockShift());
++ // Folia end - use area based lock to reduce contention
++
+ public ChunkTaskScheduler(final ServerLevel world, final PrioritisedThreadPool workers) {
+ this.world = world;
+ this.workers = workers;
@@ -9436,10 +9369,11 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ }, priority);
+ return;
+ }
-+ if (this.chunkHolderManager.ticketLock.isHeldByCurrentThread()) {
++ final int accessRadius = getAccessRadius(toStatus);
++ if (this.chunkHolderManager.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
+ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
+ }
-+ if (this.schedulingLock.isHeldByCurrentThread()) {
++ if (this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
+ throw new IllegalStateException("Cannot schedule chunk loading recursively");
+ }
+
@@ -9473,9 +9407,9 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+
+ final boolean scheduled;
+ final LevelChunk chunk;
-+ this.chunkHolderManager.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius);
+ try {
-+ this.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius);
+ try {
+ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
+ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
@@ -9506,10 +9440,10 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ }
+ }
+ } finally {
-+ this.schedulingLock.unlock();
++ this.schedulingLockArea.unlock(schedulingLock);
+ }
+ } finally {
-+ this.chunkHolderManager.ticketLock.unlock();
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
+ }
+
+ if (!scheduled) {
@@ -9543,6 +9477,46 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ });
+ }
+
++ // only appropriate to use with ServerLevel#syncLoadNonFull
++ public boolean beginChunkLoadForNonFullSync(final int chunkX, final int chunkZ, final ChunkStatus toStatus,
++ final PrioritisedExecutor.Priority priority) {
++ final int accessRadius = getAccessRadius(toStatus);
++ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
++ final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
++ final List<ChunkProgressionTask> tasks = new ArrayList<>();
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius); // Folia - use area based lock to reduce contention
++ try {
++ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius); // Folia - use area based lock to reduce contention
++ try {
++ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
++ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
++ return false;
++ } else {
++ final ChunkStatus genStatus = chunkHolder.getCurrentGenStatus();
++ if (genStatus != null && genStatus.isOrAfter(toStatus)) {
++ return true;
++ } else {
++ chunkHolder.raisePriority(priority);
++
++ if (!chunkHolder.upgradeGenTarget(toStatus)) {
++ this.schedule(chunkX, chunkZ, toStatus, chunkHolder, tasks);
++ }
++ }
++ }
++ } finally {
++ this.schedulingLockArea.unlock(schedulingLock);
++ }
++ } finally {
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
++ }
++
++ for (int i = 0, len = tasks.size(); i < len; ++i) {
++ tasks.get(i).schedule();
++ }
++
++ return true;
++ }
++
+ public void scheduleChunkLoad(final int chunkX, final int chunkZ, final ChunkStatus toStatus, final boolean addTicket,
+ final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
+ if (!TickThread.isTickThread()) {
@@ -9551,10 +9525,11 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ }, priority);
+ return;
+ }
-+ if (this.chunkHolderManager.ticketLock.isHeldByCurrentThread()) {
++ final int accessRadius = getAccessRadius(toStatus);
++ if (this.chunkHolderManager.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
+ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
+ }
-+ if (this.schedulingLock.isHeldByCurrentThread()) {
++ if (this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, accessRadius)) {
+ throw new IllegalStateException("Cannot schedule chunk loading recursively");
+ }
+
@@ -9591,9 +9566,9 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+
+ final boolean scheduled;
+ final ChunkAccess chunk;
-+ this.chunkHolderManager.ticketLock.lock();
++ final ReentrantAreaLock.Node ticketLock = this.chunkHolderManager.ticketLockArea.lock(chunkX, chunkZ, accessRadius);
+ try {
-+ this.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.schedulingLockArea.lock(chunkX, chunkZ, accessRadius);
+ try {
+ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
+ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
@@ -9616,10 +9591,10 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ }
+ }
+ } finally {
-+ this.schedulingLock.unlock();
++ this.schedulingLockArea.unlock(schedulingLock);
+ }
+ } finally {
-+ this.chunkHolderManager.ticketLock.unlock();
++ this.chunkHolderManager.ticketLockArea.unlock(ticketLock);
+ }
+
+ for (int i = 0, len = tasks.size(); i < len; ++i) {
@@ -9666,7 +9641,7 @@ index 0000000000000000000000000000000000000000..39411cc2e4af6edf767cc06bbca8335b
+ private ChunkProgressionTask schedule(final int chunkX, final int chunkZ, final ChunkStatus targetStatus,
+ final NewChunkHolder chunkHolder, final List<ChunkProgressionTask> allTasks,
+ final PrioritisedExecutor.Priority minPriority) {
-+ if (!this.schedulingLock.isHeldByCurrentThread()) {
++ if (!this.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ, getAccessRadius(targetStatus))) {
+ throw new IllegalStateException("Not holding scheduling lock");
+ }
+
@@ -10911,16 +10886,17 @@ index 0000000000000000000000000000000000000000..396d72c00e47cf1669ae20dc839c1c96
+}
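Every lock acquisition in the scheduler spans getAccessRadius(toStatus) chunks around the target, and the lock shift groups those chunks into sections. A toy sketch of which sections a radius acquisition covers under a given shift (SectionCoverage is hypothetical; the real ReentrantAreaLock acquires all covered sections as one atomic operation):

    import java.util.ArrayList;
    import java.util.List;

    // Enumerates the lock sections a (chunkX, chunkZ, radius) acquisition touches.
    final class SectionCoverage {
        static List<long[]> sectionsFor(final int chunkX, final int chunkZ, final int radius, final int shift) {
            final List<long[]> ret = new ArrayList<>();
            final int minX = (chunkX - radius) >> shift;
            final int maxX = (chunkX + radius) >> shift;
            final int minZ = (chunkZ - radius) >> shift;
            final int maxZ = (chunkZ + radius) >> shift;
            for (int sz = minZ; sz <= maxZ; ++sz) {
                for (int sx = minX; sx <= maxX; ++sx) {
                    ret.add(new long[] { sx, sz });
                }
            }
            return ret;
        }

        public static void main(final String[] args) {
            // with shift 6 (64-chunk sections), even a radius-64 access stays inside a 3x3 section square
            System.out.println(sectionsFor(0, 0, 64, 6).size()); // prints 9
        }
    }

A larger shift means fewer sections per operation, at the cost of coarser-grained contention.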
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
new file mode 100644
-index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3faca34a4
+index 0000000000000000000000000000000000000000..51304c5cf4b0ac7646693ef97ef4a3847d3342b5
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
-@@ -0,0 +1,2101 @@
+@@ -0,0 +1,2106 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.completable.Completable;
+import ca.spottedleaf.concurrentutil.executor.Cancellable;
+import ca.spottedleaf.concurrentutil.executor.standard.DelayedPrioritisedTask;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import com.google.gson.JsonArray;
+import com.google.gson.JsonElement;
@@ -10940,7 +10916,6 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ChunkLevel;
-+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.FullChunkStatus;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.server.level.TicketType;
@@ -10993,7 +10968,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Cannot sync load entity data off-main");
+ final CompoundTag entityChunk;
+ final ChunkEntitySlices ret;
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ if (this.entityChunk != null && (transientChunk || !this.entityChunk.isTransient())) {
+ return this.entityChunk;
@@ -11025,7 +11000,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ entityChunk = null;
+ }
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ if (!transientChunk) {
@@ -11064,7 +11039,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ final List<GenericDataLoadTaskCallback> completeWaiters;
+ ChunkLoadTask.EntityDataLoadTask entityDataLoadTask = null;
+ boolean scheduleEntityTask = false;
-+ this.scheduler.schedulingLock.lock();
++ ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ final List<GenericDataLoadTaskCallback> waiters = this.entityDataLoadTaskWaiters;
+ this.entityDataLoadTask = null;
@@ -11075,11 +11050,9 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ LOGGER.error("Unhandled entity data load exception, data data will be lost: ", result.right());
+ }
+
-+ // Folia start - mark these tasks as completed before releasing the scheduling lock
+ for (final GenericDataLoadTaskCallback callback : waiters) {
+ callback.markCompleted();
+ }
-+ // Folia end - mark these tasks as completed before releasing the scheduling lock
+
+ completeWaiters = waiters;
+ } else {
@@ -11102,7 +11075,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ }
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ if (scheduleEntityTask) {
@@ -11112,15 +11085,15 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ // avoid holding the scheduling lock while completing
+ if (completeWaiters != null) {
+ for (final GenericDataLoadTaskCallback callback : completeWaiters) {
-+ callback.acceptCompleted(result); // Folia - mark these tasks as completed before releasing the scheduling lock
++ callback.acceptCompleted(result);
+ }
+ }
+
-+ this.scheduler.schedulingLock.lock();
++ schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ this.checkUnload();
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+
@@ -11131,7 +11104,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ throw new IllegalStateException("Cannot load entity data, it is already loaded");
+ }
+ // why not just acquire the lock? because the caller NEEDS to call isEntityChunkNBTLoaded before this!
-+ if (!this.scheduler.schedulingLock.isHeldByCurrentThread()) {
++ if (!this.scheduler.schedulingLockArea.isHeldByCurrentThread(this.chunkX, this.chunkZ)) {
+ throw new IllegalStateException("Must hold scheduling lock");
+ }
+
@@ -11187,7 +11160,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ final List<GenericDataLoadTaskCallback> completeWaiters;
+ ChunkLoadTask.PoiDataLoadTask poiDataLoadTask = null;
+ boolean schedulePoiTask = false;
-+ this.scheduler.schedulingLock.lock();
++ ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ final List<GenericDataLoadTaskCallback> waiters = this.poiDataLoadTaskWaiters;
+ this.poiDataLoadTask = null;
@@ -11198,11 +11171,9 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ LOGGER.error("Unhandled poi load exception, poi data will be lost: ", result.right());
+ }
+
-+ // Folia start - mark these tasks as completed before releasing the scheduling lock
+ for (final GenericDataLoadTaskCallback callback : waiters) {
+ callback.markCompleted();
+ }
-+ // Folia end - mark these tasks as completed before releasing the scheduling lock
+
+ completeWaiters = waiters;
+ } else {
@@ -11225,7 +11196,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ }
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ if (schedulePoiTask) {
@@ -11235,14 +11206,14 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ // avoid holding the scheduling lock while completing
+ if (completeWaiters != null) {
+ for (final GenericDataLoadTaskCallback callback : completeWaiters) {
-+ callback.acceptCompleted(result); // Folia - mark these tasks as completed before releasing the scheduling lock
++ callback.acceptCompleted(result);
+ }
+ }
-+ this.scheduler.schedulingLock.lock();
++ schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ this.checkUnload();
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+
@@ -11253,7 +11224,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ throw new IllegalStateException("Cannot load poi data, it is already loaded");
+ }
+ // why not just acquire the lock? because the caller NEEDS to call isPoiChunkLoaded before this!
-+ if (!this.scheduler.schedulingLock.isHeldByCurrentThread()) {
++ if (!this.scheduler.schedulingLockArea.isHeldByCurrentThread(this.chunkX, this.chunkZ)) {
+ throw new IllegalStateException("Must hold scheduling lock");
+ }
+
@@ -11288,7 +11259,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ }
+
-+ public static abstract class GenericDataLoadTaskCallback implements Cancellable { // Folia - mark callbacks as completed before unlocking scheduling lock
++ public static abstract class GenericDataLoadTaskCallback implements Cancellable {
+
+ protected final Consumer<GenericDataLoadTask.TaskResult<?, Throwable>> consumer;
+ protected final NewChunkHolder chunkHolder;
@@ -11324,7 +11295,6 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ return this.completed = true;
+ }
+
-+ // Folia start - mark callbacks as completed before unlocking scheduling lock
+ // must hold scheduling lock
+ void markCompleted() {
+ if (this.completed) {
@@ -11332,15 +11302,13 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ this.completed = true;
+ }
-+ // Folia end - mark callbacks as completed before unlocking scheduling lock
+
-+ // Folia - mark callbacks as completed before unlocking scheduling lock
+ void acceptCompleted(final GenericDataLoadTask.TaskResult<?, Throwable> result) {
+ if (result != null) {
-+ if (this.completed) { // Folia - mark callbacks as completed before unlocking scheduling lock
++ if (this.completed) {
+ this.consumer.accept(result);
+ } else {
-+ throw new IllegalStateException("Cannot be uncompleted at this point"); // Folia - mark callbacks as completed before unlocking scheduling lock
++ throw new IllegalStateException("Cannot be uncompleted at this point");
+ }
+ } else {
+ throw new NullPointerException("Result cannot be null (cancelled)");
@@ -11352,7 +11320,8 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+
+ @Override
+ public boolean cancel() {
-+ this.chunkHolder.scheduler.schedulingLock.lock();
++ final NewChunkHolder holder = this.chunkHolder; // Folia - use area based lock to reduce contention
++ final ReentrantAreaLock.Node schedulingLock = holder.scheduler.schedulingLockArea.lock(holder.chunkX, holder.chunkZ);
+ try {
+ if (!this.completed) {
+ this.completed = true;
@@ -11361,7 +11330,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ return false;
+ } finally {
-+ this.chunkHolder.scheduler.schedulingLock.unlock();
++ holder.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+ }
@@ -11655,10 +11624,10 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ }
+ if (this.isSafeToUnload() == null) {
+ // ensure in unload queue
-+ this.scheduler.chunkHolderManager.unloadQueue.add(this);
++ this.scheduler.chunkHolderManager.unloadQueue.addChunk(this.chunkX, this.chunkZ);
+ } else {
+ // ensure not in unload queue
-+ this.scheduler.chunkHolderManager.unloadQueue.remove(this);
++ this.scheduler.chunkHolderManager.unloadQueue.removeChunk(this.chunkX, this.chunkZ);
+ }
+ }
+
@@ -11728,13 +11697,13 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ RegionFileIOThread.scheduleSave(this.world, this.chunkX, this.chunkZ, data, RegionFileIOThread.RegionFileType.CHUNK_DATA);
+ }
+ this.chunkDataUnload.completable().complete(data);
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ // can only write to these fields while holding the schedule lock
+ this.chunkDataUnload = null;
+ this.checkUnload();
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+
@@ -11771,12 +11740,12 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ this.lastEntityUnload = null;
+
+ if (entityChunk.unload()) {
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ entityChunk.setTransient(true);
+ this.entityChunk = entityChunk;
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ } else {
+ this.world.getEntityLookup().entitySectionUnload(this.chunkX, this.chunkZ);
@@ -11845,13 +11814,13 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+
+ this.oldTicketLevel = newLevel;
+
-+ final FullChunkStatus oldState = ChunkHolder.getFullChunkStatus(oldLevel);
-+ final FullChunkStatus newState = ChunkHolder.getFullChunkStatus(newLevel);
++ final FullChunkStatus oldState = ChunkLevel.fullStatus(oldLevel);
++ final FullChunkStatus newState = ChunkLevel.fullStatus(newLevel);
+ final boolean oldUnloaded = oldLevel > ChunkHolderManager.MAX_TICKET_LEVEL;
+ final boolean newUnloaded = newLevel > ChunkHolderManager.MAX_TICKET_LEVEL;
+
-+ final ChunkStatus maxGenerationStatusOld = ChunkHolder.getStatus(oldLevel);
-+ final ChunkStatus maxGenerationStatusNew = ChunkHolder.getStatus(newLevel);
++ final ChunkStatus maxGenerationStatusOld = ChunkLevel.generationStatus(oldLevel);
++ final ChunkStatus maxGenerationStatusNew = ChunkLevel.generationStatus(newLevel);
+
+ // check for cancellations from downgrading ticket level
+ if (this.requestedGenStatus != null && !newState.isOrAfter(FullChunkStatus.FULL) && newLevel > oldLevel) {
@@ -12088,7 +12057,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ protected final boolean onNeighbourChange(final long bitsetBefore, final long bitsetAfter) {
+ FullChunkStatus oldState = getStatusForBitset(bitsetBefore);
+ FullChunkStatus newState = getStatusForBitset(bitsetAfter);
-+ final FullChunkStatus currStateTicketLevel = ChunkHolder.getFullChunkStatus(this.oldTicketLevel);
++ final FullChunkStatus currStateTicketLevel = ChunkLevel.fullStatus(this.oldTicketLevel);
+ if (oldState.isOrAfter(currStateTicketLevel)) {
+ oldState = currStateTicketLevel;
+ }
@@ -12146,19 +12115,24 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+
+ // only call on main thread, must hold ticket level and scheduling lock
+ private void onFullChunkLoadChange(final boolean loaded, final List<NewChunkHolder> changedFullStatus) {
-+ for (int dz = -NEIGHBOUR_RADIUS; dz <= NEIGHBOUR_RADIUS; ++dz) {
-+ for (int dx = -NEIGHBOUR_RADIUS; dx <= NEIGHBOUR_RADIUS; ++dx) {
-+ final NewChunkHolder holder = (dx | dz) == 0 ? this : this.scheduler.chunkHolderManager.getChunkHolder(dx + this.chunkX, dz + this.chunkZ);
-+ if (loaded) {
-+ if (holder.setNeighbourFullLoaded(-dx, -dz)) {
-+ changedFullStatus.add(holder);
-+ }
-+ } else {
-+ if (holder != null && holder.setNeighbourFullUnloaded(-dx, -dz)) {
-+ changedFullStatus.add(holder);
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ, NEIGHBOUR_RADIUS);
++ try {
++ for (int dz = -NEIGHBOUR_RADIUS; dz <= NEIGHBOUR_RADIUS; ++dz) {
++ for (int dx = -NEIGHBOUR_RADIUS; dx <= NEIGHBOUR_RADIUS; ++dx) {
++ final NewChunkHolder holder = (dx | dz) == 0 ? this : this.scheduler.chunkHolderManager.getChunkHolder(dx + this.chunkX, dz + this.chunkZ);
++ if (loaded) {
++ if (holder.setNeighbourFullLoaded(-dx, -dz)) {
++ changedFullStatus.add(holder);
++ }
++ } else {
++ if (holder != null && holder.setNeighbourFullUnloaded(-dx, -dz)) {
++ changedFullStatus.add(holder);
++ }
+ }
+ }
+ }
++ } finally {
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+
@@ -12197,7 +12171,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ // note: use opaque reads for chunk status read since we need it to be atomic
+
+ // test if anything changed
-+ final long statusCheck = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
++ long statusCheck = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
+ if ((int)statusCheck == (int)(statusCheck >>> 32)) {
+ // nothing changed
+ return ret;
@@ -12206,14 +12180,19 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ final ChunkTaskScheduler scheduler = this.scheduler;
+ final ChunkHolderManager holderManager = scheduler.chunkHolderManager;
+ final int ticketKeep;
-+ final Long ticketId;
-+ holderManager.ticketLock.lock();
++ final Long ticketId = Long.valueOf(holderManager.getNextStatusUpgradeId());
++ final ReentrantAreaLock.Node ticketLock = holderManager.ticketLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ ticketKeep = this.currentTicketLevel;
-+ ticketId = Long.valueOf(holderManager.getNextStatusUpgradeId());
-+ holderManager.addTicketAtLevel(TicketType.STATUS_UPGRADE, this.chunkX, this.chunkZ, ticketKeep, ticketId);
++ statusCheck = (long)CHUNK_STATUS_HANDLE.getOpaque((NewChunkHolder)this);
++ // handle race condition where ticket level and target status are updated concurrently
++ if ((int)statusCheck == (int)(statusCheck >>> 32)) {
++ // nothing changed
++ return ret;
++ }
++ holderManager.addTicketAtLevel(TicketType.STATUS_UPGRADE, CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ), ticketKeep, ticketId, false);
+ } finally {
-+ holderManager.ticketLock.unlock();
++ holderManager.ticketLockArea.unlock(ticketLock);
+ }
+
+ this.processingFullStatus = true;
@@ -12224,11 +12203,11 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ FullChunkStatus nextState = getPendingChunkStatus(currStateEncoded);
+ if (currState == nextState) {
+ if (nextState == FullChunkStatus.INACCESSIBLE) {
-+ this.scheduler.schedulingLock.lock();
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ this.checkUnload();
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ }
+ break;
@@ -12534,7 +12513,7 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ this.lockPriority();
+ // must use oldTicketLevel, we hold the schedule lock but not the ticket level lock
+ // however, schedule lock needs to be held for ticket level callback, so we're fine here
-+ if (ChunkHolder.getFullChunkStatus(this.oldTicketLevel).isOrAfter(FullChunkStatus.FULL)) {
++ if (ChunkLevel.fullStatus(this.oldTicketLevel).isOrAfter(FullChunkStatus.FULL)) {
+ this.queueBorderFullStatus(true, changedLoadStatus);
+ }
+ }
@@ -12628,14 +12607,15 @@ index 0000000000000000000000000000000000000000..cfd97d48ae77d33b68e11de3140a00f3
+ // this means we have to leave the ticket level update to handle the scheduling
+ }
+ final List<NewChunkHolder> changedLoadStatus = new ArrayList<>();
-+ this.scheduler.schedulingLock.lock();
++ // theoretically, we could schedule a chunk at the max radius which performs another max radius access. So we need to double the radius.
++ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ, 2 * ChunkTaskScheduler.getMaxAccessRadius());
+ try {
+ for (int i = 0, len = neighbours.size(); i < len; ++i) {
+ neighbours.get(i).removeNeighbourUsingChunk();
+ }
+ this.onChunkGenComplete(access, taskStatus, tasks, changedLoadStatus);
+ } finally {
-+ this.scheduler.schedulingLock.unlock();
++ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ this.scheduler.chunkHolderManager.addChangedStatuses(changedLoadStatus);
+
@@ -13237,6 +13217,1489 @@ index 0000000000000000000000000000000000000000..b4c56bf12dc8dd17452210ece4fd6741
+
+ protected abstract void raisePriorityScheduled(final PrioritisedExecutor.Priority priority);
+}
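NewChunkHolder packs the current and pending full-chunk status into the two 32-bit halves of one long and reads it with an opaque VarHandle access, so the `(int)statusCheck == (int)(statusCheck >>> 32)` test above detects a pending change with a single comparison. A minimal sketch of that check, assuming the low half holds the current value and the high half the pending value (PackedStatus is hypothetical):

    import java.lang.invoke.MethodHandles;
    import java.lang.invoke.VarHandle;

    // Packs two 32-bit states into one long so a single atomic read can compare them.
    final class PackedStatus {
        private volatile long status; // low 32 bits = current, high 32 bits = pending

        private static final VarHandle STATUS_HANDLE;
        static {
            try {
                STATUS_HANDLE = MethodHandles.lookup().findVarHandle(PackedStatus.class, "status", long.class);
            } catch (final ReflectiveOperationException ex) {
                throw new ExceptionInInitializerError(ex);
            }
        }

        boolean hasPendingChange() {
            final long check = (long)STATUS_HANDLE.getOpaque(this);
            return (int)check != (int)(check >>> 32); // halves differ -> an update is pending
        }
    }

Re-checking the same condition after acquiring the ticket lock (the race-condition comment above) is what makes the early unlocked read safe.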
+diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..287240ed3b440f2f5733c368416e4276f626405d
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ThreadedTicketLevelPropagator.java
+@@ -0,0 +1,1477 @@
++package io.papermc.paper.chunk.system.scheduling;
++
++import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
++import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
++import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
++import it.unimi.dsi.fastutil.HashCommon;
++import it.unimi.dsi.fastutil.longs.Long2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.shorts.Short2ByteLinkedOpenHashMap;
++import it.unimi.dsi.fastutil.shorts.Short2ByteMap;
++import it.unimi.dsi.fastutil.shorts.ShortOpenHashSet;
++import java.lang.invoke.VarHandle;
++import java.util.ArrayDeque;
++import java.util.Arrays;
++import java.util.Iterator;
++import java.util.List;
++import java.util.concurrent.ConcurrentHashMap;
++import java.util.concurrent.locks.LockSupport;
++
++public abstract class ThreadedTicketLevelPropagator {
++
++ // sections are 64 in length
++ public static final int SECTION_SHIFT = 6;
++ public static final int SECTION_SIZE = 1 << SECTION_SHIFT;
++ private static final int LEVEL_BITS = SECTION_SHIFT;
++ private static final int LEVEL_COUNT = 1 << LEVEL_BITS;
++ private static final int MIN_SOURCE_LEVEL = 1;
++ // we limit the max source to 62 because the depropagation code _must_ attempt to depropagate
++ // a 1 level to 0; and if a source was 63 then it may cross more than 2 sections in depropagation
++ private static final int MAX_SOURCE_LEVEL = 62;
++
++ private final UpdateQueue updateQueue;
++ private final ConcurrentHashMap<Coordinate, Section> sections = new ConcurrentHashMap<>();
++
++ public ThreadedTicketLevelPropagator() {
++ this.updateQueue = new UpdateQueue();
++ }
++
++ // must hold ticket lock for:
++ // (posX & ~(SECTION_SIZE - 1), posZ & ~(SECTION_SIZE - 1)) to (posX | (SECTION_SIZE - 1), posZ | (SECTION_SIZE - 1))
++ public void setSource(final int posX, final int posZ, final int to) {
++ if (to < 1 || to > MAX_SOURCE_LEVEL) {
++ throw new IllegalArgumentException("Source: " + to);
++ }
++
++ final int sectionX = posX >> SECTION_SHIFT;
++ final int sectionZ = posZ >> SECTION_SHIFT;
++
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ Section section = this.sections.get(coordinate);
++ if (section == null) {
++ if (null != this.sections.putIfAbsent(coordinate, section = new Section(sectionX, sectionZ))) {
++ throw new IllegalStateException("Race condition while creating new section");
++ }
++ }
++
++ final int localIdx = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short sLocalIdx = (short)localIdx;
++
++ final short sourceAndLevel = section.levels[localIdx];
++ final int currentSource = (sourceAndLevel >>> 8) & 0xFF;
++
++ if (currentSource == to) {
++ // nothing to do
++ // make sure to kill the current update, if any
++ section.queuedSources.replace(sLocalIdx, (byte)to);
++ return;
++ }
++
++ if (section.queuedSources.put(sLocalIdx, (byte)to) == Section.NO_QUEUED_UPDATE && section.queuedSources.size() == 1) {
++ this.queueSectionUpdate(section);
++ }
++ }
++
++ // must hold ticket lock for:
++ // (posX & ~(SECTION_SIZE - 1), posZ & ~(SECTION_SIZE - 1)) to (posX | (SECTION_SIZE - 1), posZ | (SECTION_SIZE - 1))
++ public void removeSource(final int posX, final int posZ) {
++ final int sectionX = posX >> SECTION_SHIFT;
++ final int sectionZ = posZ >> SECTION_SHIFT;
++
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ final Section section = this.sections.get(coordinate);
++
++ if (section == null) {
++ return;
++ }
++
++ final int localIdx = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short sLocalIdx = (short)localIdx;
++
++ final int currentSource = (section.levels[localIdx] >>> 8) & 0xFF;
++
++ if (currentSource == 0) {
++ // we use replace here so that we do not possibly multi-queue a section for an update
++ section.queuedSources.replace(sLocalIdx, (byte)0);
++ return;
++ }
++
++ if (section.queuedSources.put(sLocalIdx, (byte)0) == Section.NO_QUEUED_UPDATE && section.queuedSources.size() == 1) {
++ this.queueSectionUpdate(section);
++ }
++ }
++
++ private void queueSectionUpdate(final Section section) {
++ this.updateQueue.append(new UpdateQueue.UpdateQueueNode(section, null));
++ }
++
++ public boolean hasPendingUpdates() {
++ return !this.updateQueue.isEmpty();
++ }
++
++ // holds ticket lock for every chunk section represented by any position in the key set
++ // updates is modifiable and passed to processSchedulingUpdates after this call
++ protected abstract void processLevelUpdates(final Long2ByteLinkedOpenHashMap updates);
++
++ // holds ticket lock for every chunk section represented by any position in the key set
++ // holds scheduling lock in max access radius for every position held by the ticket lock
++ // updates is cleared after this call
++ protected abstract void processSchedulingUpdates(final Long2ByteLinkedOpenHashMap updates, final List<ChunkProgressionTask> scheduledTasks,
++ final List<NewChunkHolder> changedFullStatus);
++
++ // must hold ticket lock for every position in the sections in one radius around sectionX,sectionZ
++ public boolean performUpdate(final int sectionX, final int sectionZ, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ if (!this.hasPendingUpdates()) {
++ return false;
++ }
++
++ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
++ final Section section = this.sections.get(coordinate);
++
++ if (section == null || section.queuedSources.isEmpty()) {
++ // no section or no updates
++ return false;
++ }
++
++ final Propagator propagator = Propagator.acquirePropagator();
++ final boolean ret = this.performUpdate(section, null, propagator,
++ null, schedulingLock, scheduledTasks, changedFullStatus
++ );
++ Propagator.returnPropagator(propagator);
++ return ret;
++ }
++
++ private boolean performUpdate(final Section section, final UpdateQueue.UpdateQueueNode node, final Propagator propagator,
++ final ReentrantAreaLock ticketLock, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ final int sectionX = section.sectionX;
++ final int sectionZ = section.sectionZ;
++
++ final int rad1MinX = (sectionX - 1) << SECTION_SHIFT;
++ final int rad1MinZ = (sectionZ - 1) << SECTION_SHIFT;
++ final int rad1MaxX = ((sectionX + 1) << SECTION_SHIFT) | (SECTION_SIZE - 1);
++ final int rad1MaxZ = ((sectionZ + 1) << SECTION_SHIFT) | (SECTION_SIZE - 1);
++
++ // set up encode offset first, as we need to queue level changes _before_ running the propagation
++ propagator.setupEncodeOffset(sectionX, sectionZ);
++
++ final int coordinateOffset = propagator.coordinateOffset;
++
++ final ReentrantAreaLock.Node ticketNode = ticketLock == null ? null : ticketLock.lock(rad1MinX, rad1MinZ, rad1MaxX, rad1MaxZ);
++ final boolean ret;
++ try {
++ // first, check if this update was stolen
++ if (section != this.sections.get(new Coordinate(sectionX, sectionZ))) {
++ // occurs when a stolen update deletes this section
++ // it is possible that another update is scheduled, but that one will have the correct section
++ if (node != null) {
++ this.updateQueue.remove(node);
++ }
++ return false;
++ }
++
++ final int oldSourceSize = section.sources.size();
++
++ // process pending sources
++ for (final Iterator<Short2ByteMap.Entry> iterator = section.queuedSources.short2ByteEntrySet().fastIterator(); iterator.hasNext();) {
++ final Short2ByteMap.Entry entry = iterator.next();
++ final int pos = (int)entry.getShortKey();
++ final int posX = (pos & (SECTION_SIZE - 1)) | (sectionX << SECTION_SHIFT);
++ final int posZ = ((pos >> SECTION_SHIFT) & (SECTION_SIZE - 1)) | (sectionZ << SECTION_SHIFT);
++ final int newSource = (int)entry.getByteValue();
++
++ final short currentEncoded = section.levels[pos];
++ final int currLevel = currentEncoded & 0xFF;
++ final int prevSource = (currentEncoded >>> 8) & 0xFF;
++
++ if (prevSource == newSource) {
++ // nothing changed
++ continue;
++ }
++
++ if ((prevSource < currLevel && newSource <= currLevel) || newSource == currLevel) {
++ // just update the source, don't need to propagate change
++ section.levels[pos] = (short)(currLevel | (newSource << 8));
++ // level is unchanged, don't add to changed positions
++ } else {
++ // set current level and current source to new source
++ section.levels[pos] = (short)(newSource | (newSource << 8));
++ // must add to updated positions in case this is final
++ propagator.updatedPositions.put(Coordinate.key(posX, posZ), (byte)newSource);
++ if (newSource != 0) {
++ // queue increase with new source level
++ propagator.appendToIncreaseQueue(
++ ((long)(posX + (posZ << Propagator.COORDINATE_BITS) + coordinateOffset) & ((1L << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) - 1)) |
++ ((newSource & (LEVEL_COUNT - 1L)) << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) |
++ (Propagator.ALL_DIRECTIONS_BITSET << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS + LEVEL_BITS))
++ );
++ }
++ // queue decrease with previous level
++ if (newSource < currLevel) {
++ propagator.appendToDecreaseQueue(
++ ((long)(posX + (posZ << Propagator.COORDINATE_BITS) + coordinateOffset) & ((1L << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) - 1)) |
++ ((currLevel & (LEVEL_COUNT - 1L)) << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS)) |
++ (Propagator.ALL_DIRECTIONS_BITSET << (Propagator.COORDINATE_BITS + Propagator.COORDINATE_BITS + LEVEL_BITS))
++ );
++ }
++ }
++
++ if (newSource == 0) {
++ // prevSource != newSource, so we are removing this source
++ section.sources.remove((short)pos);
++ } else if (prevSource == 0) {
++ // prevSource != newSource, so we are adding this source
++ section.sources.add((short)pos);
++ }
++ }
++
++ section.queuedSources.clear();
++
++ final int newSourceSize = section.sources.size();
++
++ if (oldSourceSize == 0 && newSourceSize != 0) {
++ // need to make sure the sections in 1 radius are initialised
++ for (int dz = -1; dz <= 1; ++dz) {
++ for (int dx = -1; dx <= 1; ++dx) {
++ if ((dx | dz) == 0) {
++ continue;
++ }
++ final int offX = dx + sectionX;
++ final int offZ = dz + sectionZ;
++ final Coordinate coordinate = new Coordinate(offX, offZ);
++ final Section neighbour = this.sections.computeIfAbsent(coordinate, (final Coordinate keyInMap) -> {
++ return new Section(Coordinate.x(keyInMap.key), Coordinate.z(keyInMap.key));
++ });
++
++ // increase ref count
++ ++neighbour.oneRadNeighboursWithSources;
++ if (neighbour.oneRadNeighboursWithSources <= 0 || neighbour.oneRadNeighboursWithSources > 8) {
++ throw new IllegalStateException(Integer.toString(neighbour.oneRadNeighboursWithSources));
++ }
++ }
++ }
++ }
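++ // (illustrative note, added commentary: oneRadNeighboursWithSources counts how many of the
++ // 8 surrounding sections currently contain sources, so any value outside [0, 8] indicates a
++ // bookkeeping error - hence the bounds checks here and in the de-init path below)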
++
++ if (propagator.hasUpdates()) {
++ propagator.setupCaches(this, sectionX, sectionZ, 1);
++ propagator.performDecrease();
++ // don't need try-finally, as any exception will cause the propagator to not be returned
++ propagator.destroyCaches();
++ }
++
++ if (newSourceSize == 0) {
++ final boolean decrementRef = oldSourceSize != 0;
++ // check for section de-init
++ for (int dz = -1; dz <= 1; ++dz) {
++ for (int dx = -1; dx <= 1; ++dx) {
++ final int offX = dx + sectionX;
++ final int offZ = dz + sectionZ;
++ final Coordinate coordinate = new Coordinate(offX, offZ);
++ final Section neighbour = this.sections.get(coordinate);
++
++ if (neighbour == null) {
++ if (oldSourceSize == 0 && (dx | dz) != 0) {
++ // since we don't have sources, this section is allowed to be null
++ continue;
++ }
++ throw new IllegalStateException("??");
++ }
++
++ if (decrementRef && (dx | dz) != 0) {
++ // decrease ref count, but only for neighbours
++ --neighbour.oneRadNeighboursWithSources;
++ }
++
++ // we need to check the current section for de-init as well
++ if (neighbour.oneRadNeighboursWithSources == 0) {
++ if (neighbour.queuedSources.isEmpty() && neighbour.sources.isEmpty()) {
++ // need to de-init
++ this.sections.remove(coordinate);
++ } // else: neighbour is queued for an update, and it will de-init itself
++ } else if (neighbour.oneRadNeighboursWithSources < 0 || neighbour.oneRadNeighboursWithSources > 8) {
++ throw new IllegalStateException(Integer.toString(neighbour.oneRadNeighboursWithSources));
++ }
++ }
++ }
++ }
++
++ ret = !propagator.updatedPositions.isEmpty();
++
++ if (ret) {
++ this.processLevelUpdates(propagator.updatedPositions);
++
++ if (!propagator.updatedPositions.isEmpty()) {
++ // now we can actually update the ticket levels in the chunk holders
++ final int maxScheduleRadius = 2 * ChunkTaskScheduler.getMaxAccessRadius();
++
++ // allow the chunkholders to process ticket level updates without needing to acquire the schedule lock every time
++ final ReentrantAreaLock.Node schedulingNode = schedulingLock.lock(
++ rad1MinX - maxScheduleRadius, rad1MinZ - maxScheduleRadius,
++ rad1MaxX + maxScheduleRadius, rad1MaxZ + maxScheduleRadius
++ );
++ try {
++ this.processSchedulingUpdates(propagator.updatedPositions, scheduledTasks, changedFullStatus);
++ } finally {
++ schedulingLock.unlock(schedulingNode);
++ }
++ }
++
++ propagator.updatedPositions.clear();
++ }
++ } finally {
++ if (ticketLock != null) {
++ ticketLock.unlock(ticketNode);
++ }
++ }
++
++ // finished
++ if (node != null) {
++ this.updateQueue.remove(node);
++ }
++
++ return ret;
++ }
++
++ public boolean performUpdates(final ReentrantAreaLock ticketLock, final ReentrantAreaLock schedulingLock,
++ final List<ChunkProgressionTask> scheduledTasks, final List<NewChunkHolder> changedFullStatus) {
++ if (this.updateQueue.isEmpty()) {
++ return false;
++ }
++
++ final long maxOrder = this.updateQueue.getLastOrder();
++
++ boolean updated = false;
++ Propagator propagator = null;
++
++ for (;;) {
++ final UpdateQueue.UpdateQueueNode toUpdate = this.updateQueue.acquireNextToUpdate(maxOrder);
++ if (toUpdate == null) {
++ this.updateQueue.awaitFirst(maxOrder);
++
++ if (!this.updateQueue.hasRemainingUpdates(maxOrder)) {
++ if (propagator != null) {
++ Propagator.returnPropagator(propagator);
++ }
++ return updated;
++ }
++
++ continue;
++ }
++
++ if (propagator == null) {
++ propagator = Propagator.acquirePropagator();
++ }
++
++ updated |= this.performUpdate(toUpdate.section, toUpdate, propagator, ticketLock, schedulingLock, scheduledTasks, changedFullStatus);
++ }
++ }
++
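++ // Design note (added commentary, not part of the original comments): UpdateQueue is a
++ // Michael-Scott style multi-producer linked queue with a dummy head node. Each queued section
++ // update is one node; consumers race on 'lastUpdating' (acquireNextToUpdate) to claim nodes in
++ // order, remove() publishes completion by nulling the node's section, and awaitFirst() parks on
++ // the node's waiter queue until that happens.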
++ private static final class UpdateQueue {
++
++ private volatile UpdateQueueNode head;
++ private volatile UpdateQueueNode tail;
++ private volatile UpdateQueueNode lastUpdating;
++
++ protected static final VarHandle HEAD_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueue.class, "head", UpdateQueueNode.class);
++ protected static final VarHandle TAIL_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueue.class, "tail", UpdateQueueNode.class);
++ protected static final VarHandle LAST_UPDATING = ConcurrentUtil.getVarHandle(UpdateQueue.class, "lastUpdating", UpdateQueueNode.class);
++
++ /* head */
++
++ protected final void setHeadPlain(final UpdateQueueNode newHead) {
++ HEAD_HANDLE.set(this, newHead);
++ }
++
++ protected final void setHeadOpaque(final UpdateQueueNode newHead) {
++ HEAD_HANDLE.setOpaque(this, newHead);
++ }
++
++ protected final UpdateQueueNode getHeadPlain() {
++ return (UpdateQueueNode)HEAD_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getHeadOpaque() {
++ return (UpdateQueueNode)HEAD_HANDLE.getOpaque(this);
++ }
++
++ protected final UpdateQueueNode getHeadAcquire() {
++ return (UpdateQueueNode)HEAD_HANDLE.getAcquire(this);
++ }
++
++ /* tail */
++
++ protected final void setTailPlain(final UpdateQueueNode newTail) {
++ TAIL_HANDLE.set(this, newTail);
++ }
++
++ protected final void setTailOpaque(final UpdateQueueNode newTail) {
++ TAIL_HANDLE.setOpaque(this, newTail);
++ }
++
++ protected final UpdateQueueNode getTailPlain() {
++ return (UpdateQueueNode)TAIL_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getTailOpaque() {
++ return (UpdateQueueNode)TAIL_HANDLE.getOpaque(this);
++ }
++
++ /* lastUpdating */
++
++ protected final UpdateQueueNode getLastUpdatingVolatile() {
++ return (UpdateQueueNode)LAST_UPDATING.getVolatile(this);
++ }
++
++ protected final UpdateQueueNode compareAndExchangeLastUpdatingVolatile(final UpdateQueueNode expect, final UpdateQueueNode update) {
++ return (UpdateQueueNode)LAST_UPDATING.compareAndExchange(this, expect, update);
++ }
++
++ public UpdateQueue() {
++ final UpdateQueueNode dummy = new UpdateQueueNode(null, null);
++ dummy.order = -1L;
++ dummy.preventAdds();
++
++ this.setHeadPlain(dummy);
++ this.setTailPlain(dummy);
++ }
++
++ public boolean isEmpty() {
++ return this.peek() == null;
++ }
++
++ public boolean hasRemainingUpdates(final long maxUpdate) {
++ final UpdateQueueNode node = this.peek();
++ return node != null && node.order <= maxUpdate;
++ }
++
++ public long getLastOrder() {
++ for (UpdateQueueNode tail = this.getTailOpaque(), curr = tail;;) {
++ final UpdateQueueNode next = curr.getNextVolatile();
++ if (next == null) {
++ // try to update stale tail
++ if (this.getTailOpaque() == tail && curr != tail) {
++ this.setTailOpaque(curr);
++ }
++ return curr.order;
++ }
++ curr = next;
++ }
++ }
++
++ public UpdateQueueNode acquireNextToUpdate(final long maxOrder) {
++ int failures = 0;
++ for (UpdateQueueNode prev = this.getLastUpdatingVolatile();;) {
++ UpdateQueueNode next = prev == null ? this.peek() : prev.next;
++
++ if (next == null || next.order > maxOrder) {
++ return null;
++ }
++
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++
++ if (prev == (prev = this.compareAndExchangeLastUpdatingVolatile(prev, next))) { // witness equals expected: CAS succeeded, we claimed 'next'
++ return next;
++ }
++
++ ++failures;
++ }
++ }
++
++ public void awaitFirst(final long maxOrder) {
++ final UpdateQueueNode earliest = this.peek();
++ if (earliest == null || earliest.order > maxOrder) {
++ return;
++ }
++
++ final Thread currThread = Thread.currentThread();
++ // we do not use add-blocking because we use the nullability of the section to block
++ // remove() does not begin to poll from the wait queue until the section is null'd,
++ // and so provided we check the nullability before parking there is no ordering of these operations
++ // such that remove() finishes polling from the wait queue while section is not null
++ earliest.add(currThread);
++
++ // wait until completed
++ while (earliest.getSectionVolatile() != null) {
++ LockSupport.park();
++ }
++ }
++
++ public UpdateQueueNode peek() {
++ for (UpdateQueueNode head = this.getHeadOpaque(), curr = head;;) {
++ final UpdateQueueNode next = curr.getNextVolatile();
++ final Section element = curr.getSectionVolatile(); /* Likely in sync */
++
++ if (element != null) {
++ if (this.getHeadOpaque() == head && curr != head) {
++ this.setHeadOpaque(curr);
++ }
++ return curr;
++ }
++
++ if (next == null) {
++ if (this.getHeadOpaque() == head && curr != head) {
++ this.setHeadOpaque(curr);
++ }
++ return null;
++ }
++ curr = next;
++ }
++ }
++
++ public void remove(final UpdateQueueNode node) {
++ // mark as removed
++ node.setSectionVolatile(null);
++
++ // use peek to advance head
++ this.peek();
++
++ // unpark any waiters / block the wait queue
++ Thread unpark;
++ while ((unpark = node.poll()) != null) {
++ LockSupport.unpark(unpark);
++ }
++ }
++
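++ // (added commentary: append() is the lock-free enqueue - CAS the terminal node's 'next'
++ // pointer, then opportunistically repair the tail hint; 'order' is taken from the
++ // predecessor, which keeps getLastOrder() snapshots monotonic)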
++ public void append(final UpdateQueueNode node) {
++ int failures = 0;
++
++ for (UpdateQueueNode currTail = this.getTailOpaque(), curr = currTail;;) {
++ /* It has been experimentally shown that placing the read before the backoff results in significantly greater performance */
++ /* It is likely due to a cache miss caused by another write to the next field */
++ final UpdateQueueNode next = curr.getNextVolatile();
++
++ for (int i = 0; i < failures; ++i) {
++ ConcurrentUtil.backoff();
++ }
++
++ if (next == null) {
++ node.order = curr.order + 1L;
++ final UpdateQueueNode compared = curr.compareExchangeNextVolatile(null, node);
++
++ if (compared == null) {
++ /* Added */
++ /* Avoid CASing on tail more than we need to */
++ /* CAS to avoid setting an out-of-date tail */
++ if (this.getTailOpaque() == currTail) {
++ this.setTailOpaque(node);
++ }
++ return;
++ }
++
++ ++failures;
++ curr = compared;
++ continue;
++ }
++
++ if (curr == currTail) {
++ /* Tail is likely not up-to-date */
++ curr = next;
++ } else {
++ /* Try to update to tail */
++ if (currTail == (currTail = this.getTailOpaque())) {
++ curr = next;
++ } else {
++ curr = currTail;
++ }
++ }
++ }
++ }
++
++ // each node also represents a set of waiters, represented by the MTQ
++ // if the queue is add-blocked, then the update is complete
++ private static final class UpdateQueueNode extends MultiThreadedQueue<Thread> {
++ private long order;
++ private Section section;
++ private volatile UpdateQueueNode next;
++
++ protected static final VarHandle SECTION_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueueNode.class, "section", Section.class);
++ protected static final VarHandle NEXT_HANDLE = ConcurrentUtil.getVarHandle(UpdateQueueNode.class, "next", UpdateQueueNode.class);
++
++ public UpdateQueueNode(final Section section, final UpdateQueueNode next) {
++ SECTION_HANDLE.set(this, section);
++ NEXT_HANDLE.set(this, next);
++ }
++
++ /* section */
++
++ protected final Section getSectionPlain() {
++ return (Section)SECTION_HANDLE.get(this);
++ }
++
++ protected final Section getSectionVolatile() {
++ return (Section)SECTION_HANDLE.getVolatile(this);
++ }
++
++ protected final void setSectionPlain(final Section update) {
++ SECTION_HANDLE.set(this, update);
++ }
++
++ protected final void setSectionOpaque(final Section update) {
++ SECTION_HANDLE.setOpaque(this, update);
++ }
++
++ protected final void setSectionVolatile(final Section update) {
++ SECTION_HANDLE.setVolatile(this, update);
++ }
++
++ protected final Section getAndSetSectionVolatile(final Section update) {
++ return (Section)SECTION_HANDLE.getAndSet(this, update);
++ }
++
++ protected final Section compareExchangeSectionVolatile(final Section expect, final Section update) {
++ return (Section)SECTION_HANDLE.compareAndExchange(this, expect, update);
++ }
++
++ /* next */
++
++ protected final UpdateQueueNode getNextPlain() {
++ return (UpdateQueueNode)NEXT_HANDLE.get(this);
++ }
++
++ protected final UpdateQueueNode getNextOpaque() {
++ return (UpdateQueueNode)NEXT_HANDLE.getOpaque(this);
++ }
++
++ protected final UpdateQueueNode getNextAcquire() {
++ return (UpdateQueueNode)NEXT_HANDLE.getAcquire(this);
++ }
++
++ protected final UpdateQueueNode getNextVolatile() {
++ return (UpdateQueueNode)NEXT_HANDLE.getVolatile(this);
++ }
++
++ protected final void setNextPlain(final UpdateQueueNode next) {
++ NEXT_HANDLE.set(this, next);
++ }
++
++ protected final void setNextVolatile(final UpdateQueueNode next) {
++ NEXT_HANDLE.setVolatile(this, next);
++ }
++
++ protected final UpdateQueueNode compareExchangeNextVolatile(final UpdateQueueNode expect, final UpdateQueueNode set) {
++ return (UpdateQueueNode)NEXT_HANDLE.compareAndExchange(this, expect, set);
++ }
++ }
++ }
++
++ private static final class Section {
++
++ // upper 8 bits: sources, lower 8 bits: level
++ // if we REALLY wanted to get crazy, we could make the increase propagator use MethodHandles#byteArrayViewVarHandle
++ // to read and write the lower 8 bits of this array directly rather than reading, updating the bits, then writing back.
++ private final short[] levels = new short[SECTION_SIZE * SECTION_SIZE];
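++ // worked example of the encoding (illustrative): level 12 with source 15 is stored as
++ // (short)((15 << 8) | 12) == 0x0F0C; decoding gives level 0x0F0C & 0xFF == 12 and
++ // source (0x0F0C >>> 8) & 0xFF == 15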
++ // set of local positions that represent sources
++ private final ShortOpenHashSet sources = new ShortOpenHashSet();
++ // map of local index to new source level
++ // the source level _cannot_ be updated in the backing storage immediately, since the queued
++ // update is only applied later, under the ticket lock, when this section's update is processed
++ private static final byte NO_QUEUED_UPDATE = (byte)-1;
++ private final Short2ByteLinkedOpenHashMap queuedSources = new Short2ByteLinkedOpenHashMap();
++ {
++ this.queuedSources.defaultReturnValue(NO_QUEUED_UPDATE);
++ }
++ private int oneRadNeighboursWithSources = 0;
++
++ public final int sectionX;
++ public final int sectionZ;
++
++ public Section(final int sectionX, final int sectionZ) {
++ this.sectionX = sectionX;
++ this.sectionZ = sectionZ;
++ }
++
++ public boolean isZero() {
++ for (final short val : this.levels) {
++ if (val != 0) {
++ return false;
++ }
++ }
++ return true;
++ }
++
++ @Override
++ public String toString() {
++ final StringBuilder ret = new StringBuilder();
++
++ for (int x = 0; x < SECTION_SIZE; ++x) {
++ ret.append("levels x=").append(x).append("\n");
++ for (int z = 0; z < SECTION_SIZE; ++z) {
++ final short v = this.levels[x | (z << SECTION_SHIFT)];
++ ret.append(v & 0xFF).append(".");
++ }
++ ret.append("\n");
++ ret.append("sources x=").append(x).append("\n");
++ for (int z = 0; z < SECTION_SIZE; ++z) {
++ final short v = this.levels[x | (z << SECTION_SHIFT)];
++ ret.append((v >>> 8) & 0xFF).append(".");
++ }
++ ret.append("\n\n");
++ }
++
++ return ret.toString();
++ }
++ }
++
++
++ private static final class Propagator {
++
++ private static final ArrayDeque<Propagator> CACHED_PROPAGATORS = new ArrayDeque<>();
++ private static final int MAX_PROPAGATORS = Runtime.getRuntime().availableProcessors() * 2;
++
++ private static Propagator acquirePropagator() {
++ synchronized (CACHED_PROPAGATORS) {
++ final Propagator ret = CACHED_PROPAGATORS.pollFirst();
++ if (ret != null) {
++ return ret;
++ }
++ }
++ return new Propagator();
++ }
++
++ private static void returnPropagator(final Propagator propagator) {
++ synchronized (CACHED_PROPAGATORS) {
++ if (CACHED_PROPAGATORS.size() < MAX_PROPAGATORS) {
++ CACHED_PROPAGATORS.add(propagator);
++ }
++ }
++ }
++
++ private static final int SECTION_RADIUS = 2;
++ private static final int SECTION_CACHE_WIDTH = 2 * SECTION_RADIUS + 1;
++ // minimum number of bits to represent [0, SECTION_SIZE * SECTION_CACHE_WIDTH)
++ private static final int COORDINATE_BITS = 9;
++ private static final int COORDINATE_SIZE = 1 << COORDINATE_BITS;
++ static {
++ if ((SECTION_SIZE * SECTION_CACHE_WIDTH) > (1 << COORDINATE_BITS)) {
++ throw new IllegalStateException("Adjust COORDINATE_BITS");
++ }
++ }
++ // index = x + (z * SECTION_CACHE_WIDTH)
++ // (this requires x >= 0 and z >= 0)
++ private final Section[] sections = new Section[SECTION_CACHE_WIDTH * SECTION_CACHE_WIDTH];
++
++ private int encodeOffsetX;
++ private int encodeOffsetZ;
++
++ private int coordinateOffset;
++
++ private int encodeSectionOffsetX;
++ private int encodeSectionOffsetZ;
++
++ private int sectionIndexOffset;
++
++ public final boolean hasUpdates() {
++ return this.decreaseQueueInitialLength != 0 || this.increaseQueueInitialLength != 0;
++ }
++
++ protected final void setupEncodeOffset(final int centerSectionX, final int centerSectionZ) {
++ final int maxCoordinate = (SECTION_RADIUS * SECTION_SIZE - 1);
++ // must have that encoded >= 0
++ // coordinates can range from [-maxCoordinate + centerSection*SECTION_SIZE, maxCoordinate + centerSection*SECTION_SIZE]
++ // we want a range of [0, maxCoordinate*2]
++ // so, 0 = -maxCoordinate + centerSection*SECTION_SIZE + offset
++ this.encodeOffsetX = maxCoordinate - (centerSectionX << SECTION_SHIFT);
++ this.encodeOffsetZ = maxCoordinate - (centerSectionZ << SECTION_SHIFT);
++
++ // encoded coordinates range from [0, SECTION_SIZE * SECTION_CACHE_WIDTH)
++ // coordinate index = (x + encodeOffsetX) + ((z + encodeOffsetZ) << COORDINATE_BITS)
++ this.coordinateOffset = this.encodeOffsetX + (this.encodeOffsetZ << COORDINATE_BITS);
++
++ // need encoded values to be >= 0
++ // so, 0 = (-SECTION_RADIUS + centerSectionX) + encodeOffset
++ this.encodeSectionOffsetX = SECTION_RADIUS - centerSectionX;
++ this.encodeSectionOffsetZ = SECTION_RADIUS - centerSectionZ;
++
++ // section index = (secX + encodeSectionOffsetX) + ((secZ + encodeSectionOffsetZ) * SECTION_CACHE_WIDTH)
++ this.sectionIndexOffset = this.encodeSectionOffsetX + (this.encodeSectionOffsetZ * SECTION_CACHE_WIDTH);
++ }
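++
++ // worked example (illustrative, assuming SECTION_SHIFT == 6, i.e. SECTION_SIZE == 64):
++ // for centerSectionX == 3, maxCoordinate == 2*64 - 1 == 127, so
++ // encodeOffsetX == 127 - (3 << 6) == -65; reachable block x spans [65, 319], and the
++ // encoded value x + encodeOffsetX spans [0, 254], comfortably inside COORDINATE_SIZE == 512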
++
++ // must hold ticket lock for (centerSectionX,centerSectionZ) in radius rad
++ // must call setupEncodeOffset
++ protected final void setupCaches(final ThreadedTicketLevelPropagator propagator,
++ final int centerSectionX, final int centerSectionZ,
++ final int rad) {
++ for (int dz = -rad; dz <= rad; ++dz) {
++ for (int dx = -rad; dx <= rad; ++dx) {
++ final int sectionX = centerSectionX + dx;
++ final int sectionZ = centerSectionZ + dz;
++ final Coordinate coordinate = new Coordinate(sectionX, sectionZ);
++ final Section section = propagator.sections.get(coordinate);
++
++ if (section == null) {
++ throw new IllegalStateException("Section at " + coordinate + " should not be null");
++ }
++
++ this.setSectionInCache(sectionX, sectionZ, section);
++ }
++ }
++ }
++
++ protected final void setSectionInCache(final int sectionX, final int sectionZ, final Section section) {
++ this.sections[sectionX + SECTION_CACHE_WIDTH*sectionZ + this.sectionIndexOffset] = section;
++ }
++
++ protected final Section getSection(final int sectionX, final int sectionZ) {
++ return this.sections[sectionX + SECTION_CACHE_WIDTH*sectionZ + this.sectionIndexOffset];
++ }
++
++ protected final int getLevel(final int posX, final int posZ) {
++ final Section section = this.sections[(posX >> SECTION_SHIFT) + SECTION_CACHE_WIDTH*(posZ >> SECTION_SHIFT) + this.sectionIndexOffset];
++ if (section != null) {
++ return (int)section.levels[(posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT)] & 0xFF;
++ }
++
++ return 0;
++ }
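++ // (added commentary: a position whose section is not cached reads as level 0, i.e. no level)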
++
++ protected final void setLevel(final int posX, final int posZ, final int to) {
++ final Section section = this.sections[(posX >> SECTION_SHIFT) + SECTION_CACHE_WIDTH*(posZ >> SECTION_SHIFT) + this.sectionIndexOffset];
++ if (section != null) {
++ final int index = (posX & (SECTION_SIZE - 1)) | ((posZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++ final short level = section.levels[index];
++ section.levels[index] = (short)((level & ~0xFF) | (to & 0xFF));
++ this.updatedPositions.put(Coordinate.key(posX, posZ), (byte)to);
++ }
++ }
++
++ protected final void destroyCaches() {
++ Arrays.fill(this.sections, null);
++ }
++
++ // contains:
++ // lower (COORDINATE_BITS(9) + COORDINATE_BITS(9) = 18) bits encoded position: (x | (z << COORDINATE_BITS))
++ // next LEVEL_BITS (6) bits: propagated level [0, 63]
++ // propagation directions bitset (16 bits):
++ protected static final long ALL_DIRECTIONS_BITSET = (
++ // z = -1
++ (1L << ((1 - 1) | ((1 - 1) << 2))) |
++ (1L << ((1 + 0) | ((1 - 1) << 2))) |
++ (1L << ((1 + 1) | ((1 - 1) << 2))) |
++
++ // z = 0
++ (1L << ((1 - 1) | ((1 + 0) << 2))) |
++ //(1L << ((1 + 0) | ((1 + 0) << 2))) | // exclude (0,0)
++ (1L << ((1 + 1) | ((1 + 0) << 2))) |
++
++ // z = 1
++ (1L << ((1 - 1) | ((1 + 1) << 2))) |
++ (1L << ((1 + 0) | ((1 + 1) << 2))) |
++ (1L << ((1 + 1) | ((1 + 1) << 2)))
++ );
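++ // (illustrative check, added commentary: with the (dx + 1) | ((dz + 1) << 2) encoding the set
++ // bits are {0, 1, 2, 4, 6, 8, 9, 10}, i.e. ALL_DIRECTIONS_BITSET == 0x757 - all eight
++ // neighbour directions with bit 5, the (0,0) self-direction, excluded)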
++
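++ // debug helper (appears unused in production paths): prints the decoded (x,z) direction offsets in a bitset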
++ private void ex(int bitset) {
++ for (int i = 0, len = Integer.bitCount(bitset); i < len; ++i) {
++ final int set = Integer.numberOfTrailingZeros(bitset);
++ final int tailingBit = (-bitset) & bitset;
++ // XOR to remove the trailing bit
++ bitset ^= tailingBit;
++
++ // the encoded value set is (x_val) | (z_val << 2), totaling 4 bits
++ // thus, the bitset is 16 bits wide where each one represents a direction to propagate and the
++ // index of the set bit is the encoded value
++ // the encoded coordinate has 3 valid states:
++ // 0b00 (0) -> -1
++ // 0b01 (1) -> 0
++ // 0b10 (2) -> 1
++ // the decode operation then is val - 1, and the encode operation is val + 1
++ final int xOff = (set & 3) - 1;
++ final int zOff = ((set >>> 2) & 3) - 1;
++ System.out.println("Encoded: (" + xOff + "," + zOff + ")");
++ }
++ }
++
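++ // debug helper (appears unused in production paths): validates that a shifted direction bitset only encodes valid neighbour offsets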
++ private void ch(long bs, int shift) {
++ int bitset = (int)(bs >>> shift);
++ for (int i = 0, len = Integer.bitCount(bitset); i < len; ++i) {
++ final int set = Integer.numberOfTrailingZeros(bitset);
++ final int tailingBit = (-bitset) & bitset;
++ // XOR to remove the trailing bit
++ bitset ^= tailingBit;
++
++ // the encoded value set is (x_val) | (z_val << 2), totaling 4 bits
++ // thus, the bitset is 16 bits wide where each one represents a direction to propagate and the
++ // index of the set bit is the encoded value
++ // the encoded coordinate has 3 valid states:
++ // 0b00 (0) -> -1
++ // 0b01 (1) -> 0
++ // 0b10 (2) -> 1
++ // the decode operation then is val - 1, and the encode operation is val + 1
++ final int xOff = (set & 3) - 1;
++ final int zOff = ((set >>> 2) & 3) - 1;
++ if (Math.abs(xOff) > 1 || Math.abs(zOff) > 1 || (xOff | zOff) == 0) {
++ throw new IllegalStateException();
++ }
++ }
++ }
++
++ // whether the increase propagator needs to write the propagated level to the position, used to avoid cascading
++ // updates for sources
++ protected static final long FLAG_WRITE_LEVEL = Long.MIN_VALUE >>> 1;
++ // whether the propagation needs to check if its current level is equal to the expected level
++ // used only in increase propagation
++ protected static final long FLAG_RECHECK_LEVEL = Long.MIN_VALUE >>> 0;
++
++ protected long[] increaseQueue = new long[SECTION_SIZE * SECTION_SIZE * 2];
++ protected int increaseQueueInitialLength;
++ protected long[] decreaseQueue = new long[SECTION_SIZE * SECTION_SIZE * 2];
++ protected int decreaseQueueInitialLength;
++
++ protected final Long2ByteLinkedOpenHashMap updatedPositions = new Long2ByteLinkedOpenHashMap();
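++
++ // worked example of a queue entry (illustrative): x == 100, z == 200 (already encoded),
++ // level 5 and all directions pending encodes as (100 | (200 << 9)) | (5L << 18) | (0x757L << 24) -
++ // coordinates in bits [0, 17], level in bits [18, 23], direction bitset in bits [24, 39],
++ // and the two flags above occupy bits 62 (FLAG_WRITE_LEVEL) and 63 (FLAG_RECHECK_LEVEL)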
++
++ protected final long[] resizeIncreaseQueue() {
++ return this.increaseQueue = Arrays.copyOf(this.increaseQueue, this.increaseQueue.length * 2);
++ }
++
++ protected final long[] resizeDecreaseQueue() {
++ return this.decreaseQueue = Arrays.copyOf(this.decreaseQueue, this.decreaseQueue.length * 2);
++ }
++
++ protected final void appendToIncreaseQueue(final long value) {
++ final int idx = this.increaseQueueInitialLength++;
++ long[] queue = this.increaseQueue;
++ if (idx >= queue.length) {
++ queue = this.resizeIncreaseQueue();
++ queue[idx] = value;
++ return;
++ } else {
++ queue[idx] = value;
++ return;
++ }
++ }
++
++ protected final void appendToDecreaseQueue(final long value) {
++ final int idx = this.decreaseQueueInitialLength++;
++ long[] queue = this.decreaseQueue;
++ if (idx >= queue.length) {
++ queue = this.resizeDecreaseQueue();
++ queue[idx] = value;
++ return;
++ } else {
++ queue[idx] = value;
++ return;
++ }
++ }
++
++ protected final void performIncrease() {
++ long[] queue = this.increaseQueue;
++ int queueReadIndex = 0;
++ int queueLength = this.increaseQueueInitialLength;
++ this.increaseQueueInitialLength = 0;
++ final int decodeOffsetX = -this.encodeOffsetX;
++ final int decodeOffsetZ = -this.encodeOffsetZ;
++ final int encodeOffset = this.coordinateOffset;
++ final int sectionOffset = this.sectionIndexOffset;
++
++ final Long2ByteLinkedOpenHashMap updatedPositions = this.updatedPositions;
++
++ while (queueReadIndex < queueLength) {
++ final long queueValue = queue[queueReadIndex++];
++
++ final int posX = ((int)queueValue & (COORDINATE_SIZE - 1)) + decodeOffsetX;
++ final int posZ = (((int)queueValue >>> COORDINATE_BITS) & (COORDINATE_SIZE - 1)) + decodeOffsetZ;
++ final int propagatedLevel = ((int)queueValue >>> (COORDINATE_BITS + COORDINATE_BITS)) & (LEVEL_COUNT - 1);
++ // note: the above code requires coordinate bits * 2 < 32
++ // bitset is 16 bits
++ int propagateDirectionBitset = (int)(queueValue >>> (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) & ((1 << 16) - 1);
++
++ if ((queueValue & FLAG_RECHECK_LEVEL) != 0L) {
++ if (this.getLevel(posX, posZ) != propagatedLevel) {
++ // not at the level we expect, so something changed.
++ continue;
++ }
++ } else if ((queueValue & FLAG_WRITE_LEVEL) != 0L) {
++ // these are used to restore sources after a propagation decrease
++ this.setLevel(posX, posZ, propagatedLevel);
++ }
++
++ // this bitset represents the values that we have not propagated to
++ // this bitset lets us determine what directions the neighbours we set should propagate to, in most cases
++ // significantly reducing the total number of ops
++ // since we propagate in a 1 radius, we need a 2 radius bitset to hold all possible values we would possibly need
++ // but if we use only 5x5 bits, then we need to use div/mod to retrieve coordinates from the bitset, so instead
++ // we use an 8x8 bitset and luckily that can be fit into only one long value (64 bits)
++ // to make things easy, we use positions [0, 4] in the bitset, with current position being 2
++ // index = x | (z << 3)
++
++ // to start, we eliminate everything 1 radius from the current position as the previous propagator
++ // must guarantee that either we propagate everything in 1 radius or we partially propagate for 1 radius
++ // but the rest not propagated are already handled
++ long currentPropagation = ~(
++ // z = -1
++ (1L << ((2 - 1) | ((2 - 1) << 3))) |
++ (1L << ((2 + 0) | ((2 - 1) << 3))) |
++ (1L << ((2 + 1) | ((2 - 1) << 3))) |
++
++ // z = 0
++ (1L << ((2 - 1) | ((2 + 0) << 3))) |
++ (1L << ((2 + 0) | ((2 + 0) << 3))) |
++ (1L << ((2 + 1) | ((2 + 0) << 3))) |
++
++ // z = 1
++ (1L << ((2 - 1) | ((2 + 1) << 3))) |
++ (1L << ((2 + 0) | ((2 + 1) << 3))) |
++ (1L << ((2 + 1) | ((2 + 1) << 3)))
++ );
++
++ final int toPropagate = propagatedLevel - 1;
++
++ // we could use while (propagateDirectionBitset != 0), but it's not a predictable branch. By counting
++ // the bits, the cpu loop predictor should perfectly predict the loop.
++ for (int l = 0, len = Integer.bitCount(propagateDirectionBitset); l < len; ++l) {
++ final int set = Integer.numberOfTrailingZeros(propagateDirectionBitset);
++ final int tailingBit = (-propagateDirectionBitset) & propagateDirectionBitset;
++ propagateDirectionBitset ^= tailingBit;
++
++ // pDecode is from [0, 2], and 1 must be subtracted to fully decode the offset
++ // it has been split to save some cycles via parallelism
++ final int pDecodeX = (set & 3);
++ final int pDecodeZ = ((set >>> 2) & 3);
++
++ // re-ordered -1 on the position decode into pos - 1 to occur in parallel with determining pDecodeX
++ final int offX = (posX - 1) + pDecodeX;
++ final int offZ = (posZ - 1) + pDecodeZ;
++
++ final int sectionIndex = (offX >> SECTION_SHIFT) + ((offZ >> SECTION_SHIFT) * SECTION_CACHE_WIDTH) + sectionOffset;
++ final int localIndex = (offX & (SECTION_SIZE - 1)) | ((offZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++
++ // to retrieve a set of bits from a long value: (n_bitmask << (nstartidx)) & bitset
++ // bitset idx = x | (z << 3)
++
++ // read three bits, so we need 7L
++ // note that generally: off - pos = (pos - 1) + pDecode - pos = pDecode - 1
++ // nstartidx1 = x rel -1 for z rel -1
++ // = (offX - posX - 1 + 2) | ((offZ - posZ - 1 + 2) << 3)
++ // = (pDecodeX - 1 - 1 + 2) | ((pDecodeZ - 1 - 1 + 2) << 3)
++ // = pDecodeX | (pDecodeZ << 3) = start
++ final int start = pDecodeX | (pDecodeZ << 3);
++ final long bitsetLine1 = currentPropagation & (7L << (start));
++
++ // nstartidx2 = x rel -1 for z rel 0 = line after line1, so we can just add 8 (row length of bitset)
++ final long bitsetLine2 = currentPropagation & (7L << (start + 8));
++
++ // nstartidx3 = x rel -1 for z rel 1 = line after line2, so we can just add 8 (row length of bitset)
++ final long bitsetLine3 = currentPropagation & (7L << (start + (8 + 8)));
++
++ // remove ("take") lines from bitset
++ currentPropagation ^= (bitsetLine1 | bitsetLine2 | bitsetLine3);
++
++ // now try to propagate
++ final Section section = this.sections[sectionIndex];
++
++ // lower 8 bits are current level, next upper 7 bits are source level, next 1 bit is updated source flag
++ final short currentStoredLevel = section.levels[localIndex];
++ final int currentLevel = currentStoredLevel & 0xFF;
++
++ if (currentLevel >= toPropagate) {
++ continue; // already at the level we want
++ }
++
++ // update level
++ section.levels[localIndex] = (short)((currentStoredLevel & ~0xFF) | (toPropagate & 0xFF));
++ updatedPositions.putAndMoveToLast(Coordinate.key(offX, offZ), (byte)toPropagate);
++
++ // queue next
++ if (toPropagate > 1) {
++ // now combine into one bitset to pass to child
++ // the child bitset is 4x4, so we just shift each line by 4
++ // add the propagation bitset offset to each line to make it easy to OR it into the propagation queue value
++ final long childPropagation =
++ ((bitsetLine1 >>> (start)) << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = -1
++ ((bitsetLine2 >>> (start + 8)) << (4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = 0
++ ((bitsetLine3 >>> (start + (8 + 8))) << (4 + 4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); // z = 1
++
++ // don't queue update if toPropagate cannot propagate anything to neighbours
++ // (for increase, propagating 0 to neighbours is useless)
++ if (queueLength >= queue.length) {
++ queue = this.resizeIncreaseQueue();
++ }
++ queue[queueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((toPropagate & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ childPropagation; //(ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS));
++ continue;
++ }
++ continue;
++ }
++ }
++ }
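++
++ // (illustrative walk-through, added commentary: a source at level 3 writes 2 into its ring-1
++ // neighbours and queues them; those write 1 into ring 2 but queue nothing further, since
++ // toPropagate == 1 cannot raise any neighbour above 0)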
++
++ protected final void performDecrease() {
++ long[] queue = this.decreaseQueue;
++ long[] increaseQueue = this.increaseQueue;
++ int queueReadIndex = 0;
++ int queueLength = this.decreaseQueueInitialLength;
++ this.decreaseQueueInitialLength = 0;
++ int increaseQueueLength = this.increaseQueueInitialLength;
++ final int decodeOffsetX = -this.encodeOffsetX;
++ final int decodeOffsetZ = -this.encodeOffsetZ;
++ final int encodeOffset = this.coordinateOffset;
++ final int sectionOffset = this.sectionIndexOffset;
++
++ final Long2ByteLinkedOpenHashMap updatedPositions = this.updatedPositions;
++
++ while (queueReadIndex < queueLength) {
++ final long queueValue = queue[queueReadIndex++];
++
++ final int posX = ((int)queueValue & (COORDINATE_SIZE - 1)) + decodeOffsetX;
++ final int posZ = (((int)queueValue >>> COORDINATE_BITS) & (COORDINATE_SIZE - 1)) + decodeOffsetZ;
++ final int propagatedLevel = ((int)queueValue >>> (COORDINATE_BITS + COORDINATE_BITS)) & (LEVEL_COUNT - 1);
++ // note: the above code requires coordinate bits * 2 < 32
++ // bitset is 16 bits
++ int propagateDirectionBitset = (int)(queueValue >>> (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) & ((1 << 16) - 1);
++
++ // this bitset represents the values that we have not propagated to
++ // this bitset lets us determine what directions the neighbours we set should propagate to, in most cases
++ // significantly reducing the total number of ops
++ // since we propagate in a 1 radius, we need a 2 radius bitset to hold all possible values we would possibly need
++ // but if we use only 5x5 bits, then we need to use div/mod to retrieve coordinates from the bitset, so instead
++ // we use an 8x8 bitset and luckily that can be fit into only one long value (64 bits)
++ // to make things easy, we use positions [0, 4] in the bitset, with current position being 2
++ // index = x | (z << 3)
++
++ // to start, we eliminate everything 1 radius from the current position as the previous propagator
++ // must guarantee that either we propagate everything in 1 radius or we partially propagate for 1 radius
++ // but the rest not propagated are already handled
++ long currentPropagation = ~(
++ // z = -1
++ (1L << ((2 - 1) | ((2 - 1) << 3))) |
++ (1L << ((2 + 0) | ((2 - 1) << 3))) |
++ (1L << ((2 + 1) | ((2 - 1) << 3))) |
++
++ // z = 0
++ (1L << ((2 - 1) | ((2 + 0) << 3))) |
++ (1L << ((2 + 0) | ((2 + 0) << 3))) |
++ (1L << ((2 + 1) | ((2 + 0) << 3))) |
++
++ // z = 1
++ (1L << ((2 - 1) | ((2 + 1) << 3))) |
++ (1L << ((2 + 0) | ((2 + 1) << 3))) |
++ (1L << ((2 + 1) | ((2 + 1) << 3)))
++ );
++
++ final int toPropagate = propagatedLevel - 1;
++
++ // we could use while (propagateDirectionBitset != 0), but it's not a predictable branch. By counting
++ // the bits, the cpu loop predictor should perfectly predict the loop.
++ for (int l = 0, len = Integer.bitCount(propagateDirectionBitset); l < len; ++l) {
++ final int set = Integer.numberOfTrailingZeros(propagateDirectionBitset);
++ final int tailingBit = (-propagateDirectionBitset) & propagateDirectionBitset;
++ propagateDirectionBitset ^= tailingBit;
++
++
++ // pDecode is from [0, 2], and 1 must be subtracted to fully decode the offset
++ // it has been split to save some cycles via parallelism
++ final int pDecodeX = (set & 3);
++ final int pDecodeZ = ((set >>> 2) & 3);
++
++ // re-ordered -1 on the position decode into pos - 1 to occur in parallel with determining pDecodeX
++ final int offX = (posX - 1) + pDecodeX;
++ final int offZ = (posZ - 1) + pDecodeZ;
++
++ final int sectionIndex = (offX >> SECTION_SHIFT) + ((offZ >> SECTION_SHIFT) * SECTION_CACHE_WIDTH) + sectionOffset;
++ final int localIndex = (offX & (SECTION_SIZE - 1)) | ((offZ & (SECTION_SIZE - 1)) << SECTION_SHIFT);
++
++ // to retrieve a set of bits from a long value: (n_bitmask << (nstartidx)) & bitset
++ // bitset idx = x | (z << 3)
++
++ // read three bits, so we need 7L
++ // note that generally: off - pos = (pos - 1) + pDecode - pos = pDecode - 1
++ // nstartidx1 = x rel -1 for z rel -1
++ // = (offX - posX - 1 + 2) | ((offZ - posZ - 1 + 2) << 3)
++ // = (pDecodeX - 1 - 1 + 2) | ((pDecodeZ - 1 - 1 + 2) << 3)
++ // = pDecodeX | (pDecodeZ << 3) = start
++ final int start = pDecodeX | (pDecodeZ << 3);
++ final long bitsetLine1 = currentPropagation & (7L << (start));
++
++ // nstartidx2 = x rel -1 for z rel 0 = line after line1, so we can just add 8 (row length of bitset)
++ final long bitsetLine2 = currentPropagation & (7L << (start + 8));
++
++ // nstartidx3 = x rel -1 for z rel 1 = line after line2, so we can just add 8 (row length of bitset)
++ final long bitsetLine3 = currentPropagation & (7L << (start + (8 + 8)));
++
++ // now try to propagate
++ final Section section = this.sections[sectionIndex];
++
++ // lower 8 bits are current level, next upper 7 bits are source level, next 1 bit is updated source flag
++ final short currentStoredLevel = section.levels[localIndex];
++ final int currentLevel = currentStoredLevel & 0xFF;
++ final int sourceLevel = (currentStoredLevel >>> 8) & 0xFF;
++
++ if (currentLevel == 0) {
++ continue; // already at the level we want
++ }
++
++ if (currentLevel > toPropagate) {
++ // it looks like another source propagated here, so re-propagate it
++ if (increaseQueueLength >= increaseQueue.length) {
++ increaseQueue = this.resizeIncreaseQueue();
++ }
++ increaseQueue[increaseQueueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((currentLevel & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (FLAG_RECHECK_LEVEL | (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)));
++ continue;
++ }
++
++ // remove ("take") lines from bitset
++ // can't do this during decrease, TODO WHY?
++ //currentPropagation ^= (bitsetLine1 | bitsetLine2 | bitsetLine3);
++
++ // update level
++ section.levels[localIndex] = (short)((currentStoredLevel & ~0xFF));
++ updatedPositions.putAndMoveToLast(Coordinate.key(offX, offZ), (byte)0);
++
++ if (sourceLevel != 0) {
++ // re-propagate source
++ // note: do not set recheck level, or else the propagation will fail
++ if (increaseQueueLength >= increaseQueue.length) {
++ increaseQueue = this.resizeIncreaseQueue();
++ }
++ increaseQueue[increaseQueueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((sourceLevel & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (FLAG_WRITE_LEVEL | (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)));
++ }
++
++ // queue next
++ // note: toPropagate > 0 here, since toPropagate >= currentLevel and currentLevel > 0
++ // now combine into one bitset to pass to child
++ // the child bitset is 4x4, so we just shift each line by 4
++ // add the propagation bitset offset to each line to make it easy to OR it into the propagation queue value
++ final long childPropagation =
++ ((bitsetLine1 >>> (start)) << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = -1
++ ((bitsetLine2 >>> (start + 8)) << (4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)) | // z = 0
++ ((bitsetLine3 >>> (start + (8 + 8))) << (4 + 4 + COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); // z = 1
++
++ // queue the decrease for neighbours
++ // (unlike increase there is no toPropagate == 0 case to skip here: currentLevel <= toPropagate
++ // and currentLevel > 0 at this point, so toPropagate > 0 always holds)
++ if (queueLength >= queue.length) {
++ queue = this.resizeDecreaseQueue();
++ }
++ queue[queueLength++] =
++ ((long)(offX + (offZ << COORDINATE_BITS) + encodeOffset) & ((1L << (COORDINATE_BITS + COORDINATE_BITS)) - 1)) |
++ ((toPropagate & (LEVEL_COUNT - 1L)) << (COORDINATE_BITS + COORDINATE_BITS)) |
++ (ALL_DIRECTIONS_BITSET << (COORDINATE_BITS + COORDINATE_BITS + LEVEL_BITS)); //childPropagation;
++ continue;
++ }
++ }
++
++ // propagate sources we clobbered
++ this.increaseQueueInitialLength = increaseQueueLength;
++ this.performIncrease();
++ }
++ }
++
++ private static final class Coordinate implements Comparable<Coordinate> {
++
++ public final long key;
++
++ public Coordinate(final long key) {
++ this.key = key;
++ }
++
++ public Coordinate(final int x, final int z) {
++ this.key = key(x, z);
++ }
++
++ public static long key(final int x, final int z) {
++ return ((long)z << 32) | (x & 0xFFFFFFFFL);
++ }
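++
++ // worked example (illustrative): key(3, -1) == 0xFFFFFFFF_00000003L; x(key) == 3 and
++ // z(key) == -1 - masking x with 0xFFFFFFFFL stops a negative x from sign-extending
++ // into the z half of the key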
++
++ public static int x(final long key) {
++ return (int)key;
++ }
++
++ public static int z(final long key) {
++ return (int)(key >>> 32);
++ }
++
++ @Override
++ public int hashCode() {
++ return (int)HashCommon.mix(this.key);
++ }
++
++ @Override
++ public boolean equals(final Object obj) {
++ if (this == obj) {
++ return true;
++ }
++
++ if (!(obj instanceof Coordinate other)) {
++ return false;
++ }
++
++ return this.key == other.key;
++ }
++
++ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
++ // is too large. So we should implement compareTo to help.
++ @Override
++ public int compareTo(final Coordinate other) {
++ return Long.compare(this.key, other.key);
++ }
++
++ @Override
++ public String toString() {
++ return "[" + x(this.key) + "," + z(this.key) + "]";
++ }
++ }
++
++ /*
++ private static final java.util.Random random = new java.util.Random(4L);
++ private static final List<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void>> walkers =
++ new java.util.ArrayList<>();
++ static final int PLAYERS = 0;
++ static final int RAD_BLOCKS = 10000;
++ static final int RAD = RAD_BLOCKS >> 4;
++ static final int RAD_BIG_BLOCKS = 100_000;
++ static final int RAD_BIG = RAD_BIG_BLOCKS >> 4;
++ static final int VD = 4;
++ static final int BIG_PLAYERS = 50;
++ static final double WALK_CHANCE = 0.10;
++ static final double TP_CHANCE = 0.01;
++ static final int TP_BACK_PLAYERS = 200;
++ static final double TP_BACK_CHANCE = 0.25;
++ static final double TP_STEAL_CHANCE = 0.25;
++ private static final List<io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void>> tpBack =
++ new java.util.ArrayList<>();
++
++ public static void main(final String[] args) {
++ final ReentrantAreaLock ticketLock = new ReentrantAreaLock(SECTION_SHIFT);
++ final ReentrantAreaLock schedulingLock = new ReentrantAreaLock(SECTION_SHIFT);
++ final Long2ByteLinkedOpenHashMap levelMap = new Long2ByteLinkedOpenHashMap();
++ final Long2ByteLinkedOpenHashMap refMap = new Long2ByteLinkedOpenHashMap();
++ final io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D ref = new io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D((final long coordinate, final byte oldLevel, final byte newLevel) -> {
++ if (newLevel == 0) {
++ refMap.remove(coordinate);
++ } else {
++ refMap.put(coordinate, newLevel);
++ }
++ });
++ final ThreadedTicketLevelPropagator propagator = new ThreadedTicketLevelPropagator() {
++ @Override
++ protected void processLevelUpdates(Long2ByteLinkedOpenHashMap updates) {
++ for (final long key : updates.keySet()) {
++ final byte val = updates.get(key);
++ if (val == 0) {
++ levelMap.remove(key);
++ } else {
++ levelMap.put(key, val);
++ }
++ }
++ }
++
++ @Override
++ protected void processSchedulingUpdates(Long2ByteLinkedOpenHashMap updates, List<ChunkProgressionTask> scheduledTasks, List<NewChunkHolder> changedFullStatus) {}
++ };
++
++ for (;;) {
++ if (walkers.isEmpty() && tpBack.isEmpty()) {
++ for (int i = 0; i < PLAYERS; ++i) {
++ int rad = i < BIG_PLAYERS ? RAD_BIG : RAD;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<>(null) {
++ @Override
++ protected void addCallback(Void parameter, int chunkX, int chunkZ) {
++ int src = 45 - 31 + 1;
++ ref.setSource(chunkX, chunkZ, src);
++ propagator.setSource(chunkX, chunkZ, src);
++ }
++
++ @Override
++ protected void removeCallback(Void parameter, int chunkX, int chunkZ) {
++ ref.removeSource(chunkX, chunkZ);
++ propagator.removeSource(chunkX, chunkZ);
++ }
++ };
++
++ map.add(posX, posZ, VD);
++
++ walkers.add(map);
++ }
++ for (int i = 0; i < TP_BACK_PLAYERS; ++i) {
++ int rad = RAD_BIG;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<>(null) {
++ @Override
++ protected void addCallback(Void parameter, int chunkX, int chunkZ) {
++ int src = 45 - 31 + 1;
++ ref.setSource(chunkX, chunkZ, src);
++ propagator.setSource(chunkX, chunkZ, src);
++ }
++
++ @Override
++ protected void removeCallback(Void parameter, int chunkX, int chunkZ) {
++ ref.removeSource(chunkX, chunkZ);
++ propagator.removeSource(chunkX, chunkZ);
++ }
++ };
++
++ map.add(posX, posZ, random.nextInt(1, 63));
++
++ tpBack.add(map);
++ }
++ } else {
++ for (int i = 0; i < PLAYERS; ++i) {
++ if (random.nextDouble() > WALK_CHANCE) {
++ continue;
++ }
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = walkers.get(i);
++
++ int updateX = random.nextInt(-1, 2);
++ int updateZ = random.nextInt(-1, 2);
++
++ map.update(map.lastChunkX + updateX, map.lastChunkZ + updateZ, VD);
++ }
++
++ for (int i = 0; i < PLAYERS; ++i) {
++ if (random.nextDouble() > TP_CHANCE) {
++ continue;
++ }
++
++ int rad = i < BIG_PLAYERS ? RAD_BIG : RAD;
++ int posX = random.nextInt(-rad, rad + 1);
++ int posZ = random.nextInt(-rad, rad + 1);
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = walkers.get(i);
++
++ map.update(posX, posZ, VD);
++ }
++
++ for (int i = 0; i < TP_BACK_PLAYERS; ++i) {
++ if (random.nextDouble() > TP_BACK_CHANCE) {
++ continue;
++ }
++
++ io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.SingleUserAreaMap<Void> map = tpBack.get(i);
++
++ map.update(-map.lastChunkX, -map.lastChunkZ, random.nextInt(1, 63));
++
++ if (random.nextDouble() > TP_STEAL_CHANCE) {
++ propagator.performUpdate(
++ map.lastChunkX >> SECTION_SHIFT, map.lastChunkZ >> SECTION_SHIFT, schedulingLock, null, null
++ );
++ propagator.performUpdate(
++ (-map.lastChunkX >> SECTION_SHIFT), (-map.lastChunkZ >> SECTION_SHIFT), schedulingLock, null, null
++ );
++ }
++ }
++ }
++
++ ref.propagateUpdates();
++ propagator.performUpdates(ticketLock, schedulingLock, null, null);
++
++ if (!refMap.equals(levelMap)) {
++ throw new IllegalStateException("Error!");
++ }
++ }
++ }
++ */
++}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/queue/RadiusAwarePrioritisedExecutor.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/queue/RadiusAwarePrioritisedExecutor.java
new file mode 100644
index 0000000000000000000000000000000000000000..3272f73013ea7d4efdd0ae2903925cc543be7075
@@ -14195,10 +15658,32 @@ index 0000000000000000000000000000000000000000..962d3cae6340fc11607b59355e291629
+
+}
diff --git a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
-index 52b02cb1f02d1c65b840f38cfc8baee500aa2259..09234062090c210227350cafeed141f8cb73108a 100644
+index 52b02cb1f02d1c65b840f38cfc8baee500aa2259..3294da27227b5a332904398afa56d21ea97d55f0 100644
--- a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
+++ b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
-@@ -274,4 +274,43 @@ public class GlobalConfiguration extends ConfigurationPart {
+@@ -116,21 +116,6 @@ public class GlobalConfiguration extends ConfigurationPart {
+ public int incomingPacketThreshold = 300;
+ }
+
+- public ChunkLoading chunkLoading;
+-
+- public class ChunkLoading extends ConfigurationPart {
+- public int minLoadRadius = 2;
+- public int maxConcurrentSends = 2;
+- public boolean autoconfigSendDistance = true;
+- public double targetPlayerChunkSendRate = 100.0;
+- public double globalMaxChunkSendRate = -1.0;
+- public boolean enableFrustumPriority = false;
+- public double globalMaxChunkLoadRate = -1.0;
+- public double playerMaxConcurrentLoads = 20.0;
+- public double globalMaxConcurrentLoads = 500.0;
+- public double playerMaxChunkLoadRate = -1.0;
+- }
+-
+ public UnsupportedSettings unsupportedSettings;
+
+ public class UnsupportedSettings extends ConfigurationPart {
+@@ -274,4 +259,43 @@ public class GlobalConfiguration extends ConfigurationPart {
public boolean useDimensionTypeForCustomSpawners = false;
public boolean strictAdvancementDimensionCheck = false;
}
@@ -14242,6 +15727,21 @@ index 52b02cb1f02d1c65b840f38cfc8baee500aa2259..09234062090c210227350cafeed141f8
+ public int playerMaxConcurrentChunkGenerates = 0;
+ }
}
+diff --git a/src/main/java/io/papermc/paper/threadedregions/TickRegions.java b/src/main/java/io/papermc/paper/threadedregions/TickRegions.java
+new file mode 100644
+index 0000000000000000000000000000000000000000..d5d39e9c1f326e91010237b0db80d527ac52f4d6
+--- /dev/null
++++ b/src/main/java/io/papermc/paper/threadedregions/TickRegions.java
+@@ -0,0 +1,9 @@
++package io.papermc.paper.threadedregions;
++
++// placeholder class for Folia
++public class TickRegions {
++
++ public static int getRegionChunkShift() {
++ return 4;
++ }
++}
diff --git a/src/main/java/io/papermc/paper/util/IntervalledCounter.java b/src/main/java/io/papermc/paper/util/IntervalledCounter.java
index cea9c098ade00ee87b8efc8164ab72f5279758f0..197224e31175252d8438a8df585bbb65f2288d7f 100644
--- a/src/main/java/io/papermc/paper/util/IntervalledCounter.java
@@ -14465,10 +15965,23 @@ index 902317d2dc198a1cbfc679810bcb2173644354cb..67064aa46043cad3ad14b1293c767e6f
return net.minecraft.server.level.ChunkMap.MAX_VIEW_DISTANCE + net.minecraft.world.level.chunk.ChunkStatus.getDistance(status);
}
diff --git a/src/main/java/io/papermc/paper/util/TickThread.java b/src/main/java/io/papermc/paper/util/TickThread.java
-index d59885ee9c8b29d5bac34dce0597e345e5358c77..fc57850b80303fcade89ca95794f63910404a407 100644
+index d59885ee9c8b29d5bac34dce0597e345e5358c77..f9063e2282f89e97a378f06822cde0a64ab03f9a 100644
--- a/src/main/java/io/papermc/paper/util/TickThread.java
+++ b/src/main/java/io/papermc/paper/util/TickThread.java
-@@ -6,7 +6,7 @@ import net.minecraft.world.entity.Entity;
+@@ -1,12 +1,20 @@
+ package io.papermc.paper.util;
+
++import net.minecraft.core.BlockPos;
+ import net.minecraft.server.MinecraftServer;
+ import net.minecraft.server.level.ServerLevel;
++import net.minecraft.server.level.ServerPlayer;
++import net.minecraft.server.network.ServerGamePacketListenerImpl;
++import net.minecraft.util.Mth;
+ import net.minecraft.world.entity.Entity;
++import net.minecraft.world.level.ChunkPos;
++import net.minecraft.world.level.Level;
++import net.minecraft.world.phys.AABB;
++import net.minecraft.world.phys.Vec3;
import org.bukkit.Bukkit;
import java.util.concurrent.atomic.AtomicInteger;
@@ -14477,7 +15990,7 @@ index d59885ee9c8b29d5bac34dce0597e345e5358c77..fc57850b80303fcade89ca95794f6391
public static final boolean STRICT_THREAD_CHECKS = Boolean.getBoolean("paper.strict-thread-checks");
-@@ -16,6 +16,10 @@ public final class TickThread extends Thread {
+@@ -16,6 +24,10 @@ public final class TickThread extends Thread {
}
}
@@ -14488,7 +16001,7 @@ index d59885ee9c8b29d5bac34dce0597e345e5358c77..fc57850b80303fcade89ca95794f6391
public static void softEnsureTickThread(final String reason) {
if (!STRICT_THREAD_CHECKS) {
return;
-@@ -23,6 +27,10 @@ public final class TickThread extends Thread {
+@@ -23,6 +35,10 @@ public final class TickThread extends Thread {
ensureTickThread(reason);
}
@@ -14499,22 +16012,100 @@ index d59885ee9c8b29d5bac34dce0597e345e5358c77..fc57850b80303fcade89ca95794f6391
public static void ensureTickThread(final String reason) {
if (!isTickThread()) {
MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
-@@ -66,14 +74,14 @@ public final class TickThread extends Thread {
+@@ -30,6 +46,20 @@ public final class TickThread extends Thread {
+ }
+ }
+
++ public static void ensureTickThread(final ServerLevel world, final BlockPos pos, final String reason) {
++ if (!isTickThreadFor(world, pos)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
++ public static void ensureTickThread(final ServerLevel world, final ChunkPos pos, final String reason) {
++ if (!isTickThreadFor(world, pos)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
+ public static void ensureTickThread(final ServerLevel world, final int chunkX, final int chunkZ, final String reason) {
+ if (!isTickThreadFor(world, chunkX, chunkZ)) {
+ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
+@@ -44,6 +74,20 @@ public final class TickThread extends Thread {
+ }
+ }
+
++ public static void ensureTickThread(final ServerLevel world, final AABB aabb, final String reason) {
++ if (!isTickThreadFor(world, aabb)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
++ public static void ensureTickThread(final ServerLevel world, final double blockX, final double blockZ, final String reason) {
++ if (!isTickThreadFor(world, blockX, blockZ)) {
++ MinecraftServer.LOGGER.error("Thread " + Thread.currentThread().getName() + " failed main thread check: " + reason, new Throwable());
++ throw new IllegalStateException(reason);
++ }
++ }
++
+ public final int id; /* We don't override getId as the spec requires that it be unique (with respect to all other threads) */
+
+ private static final AtomicInteger ID_GENERATOR = new AtomicInteger();
+@@ -66,14 +110,50 @@ public final class TickThread extends Thread {
}
public static boolean isTickThread() {
- return Bukkit.isPrimaryThread();
+ return Thread.currentThread() instanceof TickThread;
++ }
++
++ public static boolean isShutdownThread() {
++ return false;
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final BlockPos pos) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final ChunkPos pos) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final Vec3 pos) {
++ return isTickThread();
}
public static boolean isTickThreadFor(final ServerLevel world, final int chunkX, final int chunkZ) {
- return Bukkit.isPrimaryThread();
-+ return Thread.currentThread() instanceof TickThread;
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final AABB aabb) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final double blockX, final double blockZ) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final Vec3 position, final Vec3 deltaMovement, final int buffer) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final int fromChunkX, final int fromChunkZ, final int toChunkX, final int toChunkZ) {
++ return isTickThread();
++ }
++
++ public static boolean isTickThreadFor(final ServerLevel world, final int chunkX, final int chunkZ, final int radius) {
++ return isTickThread();
}
public static boolean isTickThreadFor(final Entity entity) {
- return Bukkit.isPrimaryThread();
-+ return Thread.currentThread() instanceof TickThread;
++ return isTickThread();
}
}
diff --git a/src/main/java/io/papermc/paper/world/ChunkEntitySlices.java b/src/main/java/io/papermc/paper/world/ChunkEntitySlices.java
@@ -15393,17 +16984,14 @@ index 51eac8b7177db66c005e4eaca689cf96d10edeaa..4f55f04812fe0306acfc4be45189f1f6
}
diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a0568337ee5b0a7 100644
+index 4620e64d8eb81520b75fbfbc64603e5887c7b016..c5389e7f3665c06e487dfde3200b7e229694fbd2 100644
--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
+++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-@@ -48,17 +48,15 @@ public class ChunkHolder {
+@@ -48,17 +48,12 @@ public class ChunkHolder {
 private static final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> NOT_DONE_YET = Either.right(ChunkHolder.ChunkLoadingFailure.UNLOADED);
 private static final CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> UNLOADED_LEVEL_CHUNK_FUTURE = CompletableFuture.completedFuture(ChunkHolder.UNLOADED_LEVEL_CHUNK);
 private static final List<ChunkStatus> CHUNK_STATUSES = ChunkStatus.getStatusList();
 - private final AtomicReferenceArray<CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>>> futures;
-+ // Paper - rewrite chunk system
-+ private static final FullChunkStatus[] FULL_CHUNK_STATUSES = FullChunkStatus.values();
-+ private static final int BLOCKS_BEFORE_RESEND_FUDGE = 64;
+ // Paper - rewrite chunk system
private final LevelHeightAccessor levelHeightAccessor;
 - private volatile CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> fullChunkFuture; private int fullChunkCreateCount; private volatile boolean isFullChunkReady; // Paper - cache chunk ticking stage
@@ -15420,12 +17008,12 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
public final ChunkPos pos;
private boolean hasChangedSections;
private final ShortSet[] changedBlocksPerSection;
-@@ -69,8 +67,22 @@ public class ChunkHolder {
+@@ -67,10 +62,20 @@ public class ChunkHolder {
+ private final LevelLightEngine lightEngine;
+ private final ChunkHolder.LevelChangeListener onLevelChange;
public final ChunkHolder.PlayerProvider playerProvider;
- private boolean wasAccessibleSinceLastSave;
- private CompletableFuture<Void> pendingFullStateConfirmation;
-+ // Paper - rewrite chunk system
-+ private boolean resendLight;
+- private boolean wasAccessibleSinceLastSave;
+- private CompletableFuture<Void> pendingFullStateConfirmation;
+ // Paper - rewrite chunk system
private final ChunkMap chunkMap; // Paper
@@ -15443,7 +17031,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
// Paper start
public void onChunkAdd() {
-@@ -82,148 +94,130 @@ public class ChunkHolder {
+@@ -82,148 +87,130 @@ public class ChunkHolder {
}
// Paper end
@@ -15655,7 +17243,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
if (chunk != null) {
int i = this.levelHeightAccessor.getSectionIndex(pos.getY());
-@@ -239,16 +233,17 @@ public class ChunkHolder {
+@@ -239,16 +226,17 @@ public class ChunkHolder {
}
public void sectionLightChanged(LightLayer lightType, int y) {
@@ -15678,7 +17266,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
int j = this.lightEngine.getMinLightSection();
int k = this.lightEngine.getMaxLightSection();
-@@ -273,7 +268,7 @@ public class ChunkHolder {
+@@ -273,7 +261,7 @@ public class ChunkHolder {
List<ServerPlayer> list;
if (!this.skyChangedLightSectionFilter.isEmpty() || !this.blockChangedLightSectionFilter.isEmpty()) {
@@ -15687,7 +17275,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
if (!list.isEmpty()) {
ClientboundLightUpdatePacket packetplayoutlightupdate = new ClientboundLightUpdatePacket(chunk.getPos(), this.lightEngine, this.skyChangedLightSectionFilter, this.blockChangedLightSectionFilter);
-@@ -285,7 +280,7 @@ public class ChunkHolder {
+@@ -285,7 +273,7 @@ public class ChunkHolder {
}
if (this.hasChangedSections) {
@@ -15696,7 +17284,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
for (int i = 0; i < this.changedBlocksPerSection.length; ++i) {
ShortSet shortset = this.changedBlocksPerSection[i];
-@@ -343,67 +338,35 @@ public class ChunkHolder {
+@@ -343,67 +331,35 @@ public class ChunkHolder {
}
@@ -15782,31 +17370,25 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
}
public final ChunkPos getPos() { // Paper - final for inline
-@@ -411,240 +374,27 @@ public class ChunkHolder {
+@@ -411,240 +367,17 @@ public class ChunkHolder {
}
public final int getTicketLevel() { // Paper - final for inline
- return this.ticketLevel;
-+ return this.newChunkHolder.getTicketLevel(); // Paper - rewrite chunk system
- }
-
+- }
+-
- public int getQueueLevel() {
- return this.queueLevel;
- }
-+ // Paper - rewrite chunk system
-
+-
- private void setQueueLevel(int level) {
- this.queueLevel = level;
-+ public static ChunkStatus getStatus(int level) {
-+ return level < 33 ? ChunkStatus.FULL : ChunkStatus.getStatusAroundFullChunk(level - 33);
- }
-
+- }
+-
- public void setTicketLevel(int level) {
- this.ticketLevel = level;
-+ public static FullChunkStatus getFullChunkStatus(int distance) {
-+ return ChunkHolder.FULL_CHUNK_STATUSES[net.minecraft.util.Mth.clamp(33 - distance + 1, 0, ChunkHolder.FULL_CHUNK_STATUSES.length - 1)];
- }
-
+- }
+-
- private void scheduleFullChunkPromotion(ChunkMap playerchunkmap, CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> completablefuture, Executor executor, FullChunkStatus fullchunkstatus) {
- this.pendingFullStateConfirmation.cancel(false);
- CompletableFuture<Void> completablefuture1 = new CompletableFuture();
@@ -15996,8 +17578,9 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
-
- public boolean wasAccessibleSinceLastSave() {
- return this.wasAccessibleSinceLastSave;
-- }
--
++ return this.newChunkHolder.getTicketLevel(); // Paper - rewrite chunk system
+ }
+
- public void refreshAccessibility() {
- this.wasAccessibleSinceLastSave = ChunkLevel.fullStatus(this.ticketLevel).isOrAfter(FullChunkStatus.FULL);
- }
@@ -16032,7 +17615,7 @@ index 4620e64d8eb81520b75fbfbc64603e5887c7b016..84e4aea3d44cd7d5405ffc970a056833
}
@FunctionalInterface
-@@ -682,15 +432,15 @@ public class ChunkHolder {
+@@ -682,15 +415,15 @@ public class ChunkHolder {
// Paper start
public final boolean isEntityTickingReady() {
@@ -17107,7 +18690,7 @@ index 19bd6f9aee3ccb1af1b010ee51a54aa2d0bf9c84..a502d293cedb2f507e6cf1792429b366
double d2 = d0 * d0;
boolean flag = d1 <= d2 && this.entity.broadcastToPlayer(player);
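The getStatus/getFullChunkStatus helpers removed from ChunkHolder above encode the mapping between ticket levels and chunk states: level 33 is the full-chunk boundary, and each level below it unlocks a more active state. Assuming the vanilla FullChunkStatus order (INACCESSIBLE, FULL, BLOCK_TICKING, ENTITY_TICKING), the clamp works out as follows:

    static FullChunkStatus fullStatusFor(final int ticketLevel) {
        final FullChunkStatus[] statuses = FullChunkStatus.values();
        // index = clamp(33 - level + 1, 0, statuses.length - 1)
        return statuses[Math.max(0, Math.min(statuses.length - 1, 33 - ticketLevel + 1))];
    }

    // fullStatusFor(33) == FULL, fullStatusFor(32) == BLOCK_TICKING,
    // fullStatusFor(31) == ENTITY_TICKING, fullStatusFor(44) == INACCESSIBLE

Levels above 33 select a generation status instead, which is what getStatus(level) resolves via getStatusAroundFullChunk(level - 33).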
diff --git a/src/main/java/net/minecraft/server/level/DistanceManager.java b/src/main/java/net/minecraft/server/level/DistanceManager.java
-index f3c9a3dbb6f0e6f825b7477c89ed72ed52845419..20d600d29c2f2e47c798721d1f151e625b12acc3 100644
+index f3c9a3dbb6f0e6f825b7477c89ed72ed52845419..c716047fefb51a77ce18df243c517d80c78b6853 100644
--- a/src/main/java/net/minecraft/server/level/DistanceManager.java
+++ b/src/main/java/net/minecraft/server/level/DistanceManager.java
@@ -39,65 +39,28 @@ import org.slf4j.Logger;
@@ -17515,7 +19098,7 @@ index f3c9a3dbb6f0e6f825b7477c89ed72ed52845419..20d600d29c2f2e47c798721d1f151e62
public boolean hasTickets() {
- return !this.tickets.isEmpty();
-+ return this.getChunkHolderManager().hasTickets(); // Paper - rewrite chunk system
++ throw new UnsupportedOperationException(); // Paper - rewrite chunk system - ticket state lives in the chunk holder manager now
}
// CraftBukkit start
@@ -17566,10 +19149,155 @@ index f3c9a3dbb6f0e6f825b7477c89ed72ed52845419..20d600d29c2f2e47c798721d1f151e62
+ */ // Paper - rewrite chunk system
}
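DistanceManager keeps its vanilla entry points, but all ticket state now lives in the chunk system's holder manager; methods with no remaining callers throw rather than answer from stale state. A hedged sketch of the delegation shape, using manager method names that appear elsewhere in this patch:

    public boolean runAllUpdates(final ChunkMap chunkMap) {
        // ticket level propagation is handled entirely by the rewritten chunk system
        return this.getChunkHolderManager().processTicketUpdates();
    }

    public boolean hasTickets() {
        // no caller should reach this; failing fast beats lying about dead state
        throw new UnsupportedOperationException();
    }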
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a928d8940 100644
+index 6d5a160a9fdaa04bb930afae8a0765910f631d23..b0687dcf8af84af627b67e7fbb68170a2fd28da0 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -367,7 +367,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -141,108 +141,7 @@ public class ServerChunkCache extends ChunkSource {
+ return (LevelChunk)this.getChunk(x, z, ChunkStatus.FULL, true);
+ }
+
+- long chunkFutureAwaitCounter; // Paper - private -> package private
+-
+- public void getEntityTickingChunkAsync(int x, int z, java.util.function.Consumer onLoad) {
+- io.papermc.paper.chunk.system.ChunkSystem.scheduleTickingState(
+- this.level, x, z, FullChunkStatus.ENTITY_TICKING, true,
+- ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL, onLoad
+- );
+- }
+-
+- public void getTickingChunkAsync(int x, int z, java.util.function.Consumer onLoad) {
+- io.papermc.paper.chunk.system.ChunkSystem.scheduleTickingState(
+- this.level, x, z, FullChunkStatus.BLOCK_TICKING, true,
+- ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL, onLoad
+- );
+- }
+-
+- public void getFullChunkAsync(int x, int z, java.util.function.Consumer onLoad) {
+- io.papermc.paper.chunk.system.ChunkSystem.scheduleTickingState(
+- this.level, x, z, FullChunkStatus.FULL, true,
+- ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL, onLoad
+- );
+- }
+-
+- void chunkLoadAccept(int chunkX, int chunkZ, ChunkAccess chunk, java.util.function.Consumer consumer) {
+- try {
+- consumer.accept(chunk);
+- } catch (Throwable throwable) {
+- if (throwable instanceof ThreadDeath) {
+- throw (ThreadDeath)throwable;
+- }
+- LOGGER.error("Load callback for chunk " + chunkX + "," + chunkZ + " in world '" + this.level.getWorld().getName() + "' threw an exception", throwable);
+- }
+- }
+-
+- void getChunkAtAsynchronously(int chunkX, int chunkZ, int ticketLevel,
+- java.util.function.Consumer consumer) {
+- if (ticketLevel <= 33) {
+- this.getFullChunkAsync(chunkX, chunkZ, (java.util.function.Consumer)consumer);
+- return;
+- }
+-
+- io.papermc.paper.chunk.system.ChunkSystem.scheduleChunkLoad(
+- this.level, chunkX, chunkZ, ChunkHolder.getStatus(ticketLevel), true,
+- ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL, consumer
+- );
+- }
+-
+-
+- public final void getChunkAtAsynchronously(int chunkX, int chunkZ, ChunkStatus status, boolean gen, boolean allowSubTicketLevel, java.util.function.Consumer onLoad) {
+- // try to fire sync
+- int chunkStatusTicketLevel = 33 + ChunkStatus.getDistance(status);
+- ChunkHolder playerChunk = this.chunkMap.getUpdatingChunkIfPresent(io.papermc.paper.util.CoordinateUtils.getChunkKey(chunkX, chunkZ));
+- if (playerChunk != null) {
+- ChunkStatus holderStatus = playerChunk.getChunkHolderStatus();
+- ChunkAccess immediate = playerChunk.getAvailableChunkNow();
+- if (immediate != null) {
+- if (allowSubTicketLevel ? immediate.getStatus().isOrAfter(status) : (playerChunk.getTicketLevel() <= chunkStatusTicketLevel && holderStatus != null && holderStatus.isOrAfter(status))) {
+- this.chunkLoadAccept(chunkX, chunkZ, immediate, onLoad);
+- return;
+- } else {
+- if (gen || (!allowSubTicketLevel && immediate.getStatus().isOrAfter(status))) {
+- this.getChunkAtAsynchronously(chunkX, chunkZ, chunkStatusTicketLevel, onLoad);
+- return;
+- } else {
+- this.chunkLoadAccept(chunkX, chunkZ, null, onLoad);
+- return;
+- }
+- }
+- }
+- }
+-
+- // need to fire async
+-
+- if (gen && !allowSubTicketLevel) {
+- this.getChunkAtAsynchronously(chunkX, chunkZ, chunkStatusTicketLevel, onLoad);
+- return;
+- }
+-
+- this.getChunkAtAsynchronously(chunkX, chunkZ, io.papermc.paper.util.MCUtil.getTicketLevelFor(ChunkStatus.EMPTY), (ChunkAccess chunk) -> {
+- if (chunk == null) {
+- throw new IllegalStateException("Chunk cannot be null");
+- }
+-
+- if (!chunk.getStatus().isOrAfter(status)) {
+- if (gen) {
+- this.getChunkAtAsynchronously(chunkX, chunkZ, chunkStatusTicketLevel, onLoad);
+- return;
+- } else {
+- ServerChunkCache.this.chunkLoadAccept(chunkX, chunkZ, null, onLoad);
+- return;
+- }
+- } else {
+- if (allowSubTicketLevel) {
+- ServerChunkCache.this.chunkLoadAccept(chunkX, chunkZ, chunk, onLoad);
+- return;
+- } else {
+- this.getChunkAtAsynchronously(chunkX, chunkZ, chunkStatusTicketLevel, onLoad);
+- return;
+- }
+- }
+- });
+- }
++ final java.util.concurrent.atomic.AtomicLong chunkFutureAwaitCounter = new java.util.concurrent.atomic.AtomicLong(); // Paper - private -> package private
+ // Paper end
+
+ // Paper start
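Note the type change on chunkFutureAwaitCounter above: with more than one thread able to request chunk futures, a bare long counter++ is a lost-update race, while AtomicLong.getAndIncrement() hands out unique identifiers from any thread. The idiom in isolation:

    private final java.util.concurrent.atomic.AtomicLong idGenerator = new java.util.concurrent.atomic.AtomicLong();

    long nextId() {
        // two concurrent callers can never observe the same value
        return this.idGenerator.getAndIncrement();
    }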
+@@ -256,34 +155,6 @@ public class ServerChunkCache extends ChunkSource {
+ return holder.getLastAvailable();
+ }
+
+- // this will try to avoid chunk neighbours for lighting
+- public final ChunkAccess getFullStatusChunkAt(int chunkX, int chunkZ) {
+- LevelChunk ifLoaded = this.getChunkAtIfLoadedImmediately(chunkX, chunkZ);
+- if (ifLoaded != null) {
+- return ifLoaded;
+- }
+-
+- ChunkAccess empty = this.getChunk(chunkX, chunkZ, ChunkStatus.EMPTY, true);
+- if (empty != null && empty.getStatus().isOrAfter(ChunkStatus.FULL)) {
+- return empty;
+- }
+- return this.getChunk(chunkX, chunkZ, ChunkStatus.FULL, true);
+- }
+-
+- public final ChunkAccess getFullStatusChunkAtIfLoaded(int chunkX, int chunkZ) {
+- LevelChunk ifLoaded = this.getChunkAtIfLoadedImmediately(chunkX, chunkZ);
+- if (ifLoaded != null) {
+- return ifLoaded;
+- }
+-
+- ChunkAccess ret = this.getChunkAtImmediately(chunkX, chunkZ);
+- if (ret != null && ret.getStatus().isOrAfter(ChunkStatus.FULL)) {
+- return ret;
+- } else {
+- return null;
+- }
+- }
+-
+ public <T> void addTicketAtLevel(TicketType<T> ticketType, ChunkPos chunkPos, int ticketLevel, T identifier) {
+ this.distanceManager.addTicket(ticketType, chunkPos, ticketLevel, identifier);
+ }
+@@ -367,7 +238,7 @@ public class ServerChunkCache extends ChunkSource {
public LevelChunk getChunkAtIfLoadedImmediately(int x, int z) {
long k = ChunkPos.asLong(x, z);
@@ -17578,33 +19306,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
return this.getChunkAtIfLoadedMainThread(x, z);
}
-@@ -389,11 +389,34 @@ public class ServerChunkCache extends ChunkSource {
- return ret;
- }
- // Paper end
-+ // Paper start - async chunk io
-+ public CompletableFuture> getChunkAtAsynchronously(int x, int z, boolean gen, boolean isUrgent) {
-+ CompletableFuture> ret = new CompletableFuture<>();
-+
-+ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority;
-+ if (isUrgent) {
-+ priority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.HIGHER;
-+ } else {
-+ priority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL;
-+ }
-+
-+ io.papermc.paper.chunk.system.ChunkSystem.scheduleChunkLoad(this.level, x, z, gen, ChunkStatus.FULL, true, priority, (chunk) -> {
-+ if (chunk == null) {
-+ ret.complete(ChunkHolder.UNLOADED_CHUNK);
-+ } else {
-+ ret.complete(Either.left(chunk));
-+ }
-+ });
-+
-+ return ret;
-+ }
-+ // Paper end - async chunk io
-
+@@ -393,7 +264,8 @@ public class ServerChunkCache extends ChunkSource {
@Nullable
@Override
public ChunkAccess getChunk(int x, int z, ChunkStatus leastStatus, boolean create) {
@@ -17614,7 +19316,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
return (ChunkAccess) CompletableFuture.supplyAsync(() -> {
return this.getChunk(x, z, leastStatus, create);
}, this.mainThreadProcessor).join();
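getChunk keeps the vanilla escape hatch for off-thread callers: rather than touching chunk state from a foreign thread, the call re-submits itself to the main-thread executor and joins the resulting future. The shape of the redirect, as a sketch with an illustrative on-thread loader:

    ChunkAccess getChunkBlocking(final int x, final int z) {
        if (!TickThread.isTickThread()) {
            // hop to the owning thread, then block until it answers
            return CompletableFuture.supplyAsync(() -> this.getChunkBlocking(x, z), this.mainThreadProcessor).join();
        }
        return this.loadChunkOnThread(x, z); // stand-in for the real on-thread path
    }

The recursion terminates because the resubmitted task runs on the tick thread and takes the second branch.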
-@@ -405,23 +428,20 @@ public class ServerChunkCache extends ChunkSource {
+@@ -405,23 +277,20 @@ public class ServerChunkCache extends ChunkSource {
ChunkAccess ichunkaccess;
@@ -17644,7 +19346,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
this.level.timings.syncChunkLoad.stopTiming(); // Paper
} // Paper
ichunkaccess = (ChunkAccess) ((Either) completablefuture.join()).map((ichunkaccess1) -> {
-@@ -441,7 +461,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -441,7 +310,7 @@ public class ServerChunkCache extends ChunkSource {
@Nullable
@Override
public LevelChunk getChunkNow(int chunkX, int chunkZ) {
@@ -17653,7 +19355,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
return null;
} else {
this.level.getProfiler().incrementCounter("getChunkNow");
-@@ -487,7 +507,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -487,7 +356,7 @@ public class ServerChunkCache extends ChunkSource {
}
public CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getChunkFuture(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create) {
@@ -17662,7 +19364,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> completablefuture;
if (flag1) {
-@@ -508,47 +528,52 @@ public class ServerChunkCache extends ChunkSource {
+@@ -508,47 +377,52 @@ public class ServerChunkCache extends ChunkSource {
}
private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getChunkFutureMainThread(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create) {
@@ -17685,7 +19387,11 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
- FullChunkStatus oldChunkState = ChunkLevel.fullStatus(playerchunk.oldTicketLevel);
- FullChunkStatus currentChunkState = ChunkLevel.fullStatus(playerchunk.getTicketLevel());
- currentlyUnloading = (oldChunkState.isOrAfter(FullChunkStatus.FULL) && !currentChunkState.isOrAfter(FullChunkStatus.FULL));
-- }
++ boolean needsFullScheduling = leastStatus == ChunkStatus.FULL && (chunkHolder == null || !chunkHolder.getChunkStatus().isOrAfter(FullChunkStatus.FULL));
++
++ if ((chunkHolder == null || chunkHolder.getTicketLevel() > minLevel || needsFullScheduling) && !create) {
++ return ChunkHolder.UNLOADED_CHUNK_FUTURE;
+ }
- if (create && !currentlyUnloading) {
- // CraftBukkit end
- this.distanceManager.addTicket(TicketType.UNKNOWN, chunkcoordintpair, l, chunkcoordintpair);
@@ -17698,16 +19404,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
- gameprofilerfiller.pop();
- if (this.chunkAbsent(playerchunk, l)) {
- throw (IllegalStateException) Util.pauseInIde(new IllegalStateException("No chunk holder after ticket has been added"));
-- }
-- }
-+ boolean needsFullScheduling = leastStatus == ChunkStatus.FULL && (chunkHolder == null || !chunkHolder.getChunkStatus().isOrAfter(FullChunkStatus.FULL));
+
-+ if ((chunkHolder == null || chunkHolder.getTicketLevel() > minLevel || needsFullScheduling) && !create) {
-+ return ChunkHolder.UNLOADED_CHUNK_FUTURE;
- }
-
-- return this.chunkAbsent(playerchunk, l) ? ChunkHolder.UNLOADED_CHUNK_FUTURE : playerchunk.getOrScheduleFuture(leastStatus, this.chunkMap);
-- }
+ io.papermc.paper.chunk.system.scheduling.NewChunkHolder.ChunkCompletion chunkCompletion = chunkHolder == null ? null : chunkHolder.getLastChunkCompletion();
+ if (needsFullScheduling || chunkCompletion == null || !chunkCompletion.genStatus().isOrAfter(leastStatus)) {
+ // schedule
@@ -17717,17 +19414,21 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
+ ret.complete(Either.right(ChunkHolder.ChunkLoadingFailure.UNLOADED));
+ } else {
+ ret.complete(Either.left(chunk));
-+ }
+ }
+- }
+- }
+ };
-- private boolean chunkAbsent(@Nullable ChunkHolder holder, int maxLevel) {
-- return holder == null || holder.oldTicketLevel > maxLevel; // CraftBukkit using oldTicketLevel for isLoaded checks
+- return this.chunkAbsent(playerchunk, l) ? ChunkHolder.UNLOADED_CHUNK_FUTURE : playerchunk.getOrScheduleFuture(leastStatus, this.chunkMap);
+- }
+ this.level.chunkTaskScheduler.scheduleChunkLoad(
+ chunkX, chunkZ, leastStatus, true,
+ isUrgent ? ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING : ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL,
+ complete
+ );
-+
+
+- private boolean chunkAbsent(@Nullable ChunkHolder holder, int maxLevel) {
+- return holder == null || holder.oldTicketLevel > maxLevel; // CraftBukkit using oldTicketLevel for isLoaded checks
+ return ret;
+ } else {
+ // can return now
@@ -17748,7 +19449,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
@Nullable
-@@ -560,22 +585,13 @@ public class ServerChunkCache extends ChunkSource {
+@@ -560,22 +434,13 @@ public class ServerChunkCache extends ChunkSource {
if (playerchunk == null) {
return null;
} else {
@@ -17777,7 +19478,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
}
-@@ -589,15 +605,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -589,15 +454,7 @@ public class ServerChunkCache extends ChunkSource {
}
boolean runDistanceManagerUpdates() {
@@ -17794,7 +19495,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
// Paper start
-@@ -607,17 +615,10 @@ public class ServerChunkCache extends ChunkSource {
+@@ -607,17 +464,10 @@ public class ServerChunkCache extends ChunkSource {
// Paper end
public boolean isPositionTicking(long pos) {
@@ -17816,7 +19517,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
public void save(boolean flush) {
-@@ -633,17 +634,13 @@ public class ServerChunkCache extends ChunkSource {
+@@ -633,17 +483,13 @@ public class ServerChunkCache extends ChunkSource {
this.close(true);
}
@@ -17837,7 +19538,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
this.level.getProfiler().push("purge");
this.distanceManager.purgeStaleTickets();
this.runDistanceManagerUpdates();
-@@ -664,6 +661,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -664,6 +510,7 @@ public class ServerChunkCache extends ChunkSource {
this.level.getProfiler().popPush("chunks");
if (tickChunks) {
this.level.timings.chunks.startTiming(); // Paper - timings
@@ -17845,7 +19546,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
this.tickChunks();
this.level.timings.chunks.stopTiming(); // Paper - timings
}
-@@ -760,7 +758,12 @@ public class ServerChunkCache extends ChunkSource {
+@@ -760,7 +607,12 @@ public class ServerChunkCache extends ChunkSource {
ChunkHolder playerchunk = this.getVisibleChunkIfPresent(pos);
if (playerchunk != null) {
@@ -17859,12 +19560,11 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
}
-@@ -926,17 +929,11 @@ public class ServerChunkCache extends ChunkSource {
+@@ -926,17 +778,10 @@ public class ServerChunkCache extends ChunkSource {
@Override
// CraftBukkit start - process pending Chunk loadCallback() and unloadCallback() after each run task
public boolean pollTask() {
- try {
-+ // Paper - replace player chunk loader
if (ServerChunkCache.this.runDistanceManagerUpdates()) {
return true;
- } else {
@@ -17880,7 +19580,7 @@ index 6d5a160a9fdaa04bb930afae8a0765910f631d23..5c97b0069d20815c2a1d61bf34596a8a
}
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859b6652d6d 100644
+index 7cb5abfa89f842194325d26c6e95b49460c5968f..995be2fd84ce343d7430d9658f91868e653da43d 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -194,7 +194,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
@@ -17892,11 +19592,125 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
private final GameEventDispatcher gameEventDispatcher;
public boolean noSave;
private final SleepStatus sleepStatus;
-@@ -320,7 +320,150 @@ public class ServerLevel extends Level implements WorldGenLevel {
- }
- }
+@@ -260,50 +260,63 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ return true;
}
-- // Paper end
+
+- public final void loadChunksForMoveAsync(AABB axisalignedbb, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
+- java.util.function.Consumer<List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
+- if (Thread.currentThread() != this.thread) {
+- this.getChunkSource().mainThreadProcessor.execute(() -> {
+- this.loadChunksForMoveAsync(axisalignedbb, priority, onLoad);
+- });
+- return;
+- }
++ public final void loadChunksAsync(BlockPos pos, int radiusBlocks,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<java.util.List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ loadChunksAsync(
++ (pos.getX() - radiusBlocks) >> 4,
++ (pos.getX() + radiusBlocks) >> 4,
++ (pos.getZ() - radiusBlocks) >> 4,
++ (pos.getZ() + radiusBlocks) >> 4,
++ priority, onLoad
++ );
++ }
++
++ public final void loadChunksAsync(BlockPos pos, int radiusBlocks,
++ net.minecraft.world.level.chunk.ChunkStatus chunkStatus,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<java.util.List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ loadChunksAsync(
++ (pos.getX() - radiusBlocks) >> 4,
++ (pos.getX() + radiusBlocks) >> 4,
++ (pos.getZ() - radiusBlocks) >> 4,
++ (pos.getZ() + radiusBlocks) >> 4,
++ chunkStatus, priority, onLoad
++ );
++ }
++
++ public final void loadChunksAsync(int minChunkX, int maxChunkX, int minChunkZ, int maxChunkZ,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<java.util.List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++ this.loadChunksAsync(minChunkX, maxChunkX, minChunkZ, maxChunkZ, net.minecraft.world.level.chunk.ChunkStatus.FULL, priority, onLoad);
++ }
++
++ public final void loadChunksAsync(int minChunkX, int maxChunkX, int minChunkZ, int maxChunkZ,
++ net.minecraft.world.level.chunk.ChunkStatus chunkStatus,
++ ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<java.util.List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
+ List<net.minecraft.world.level.chunk.ChunkAccess> ret = new java.util.ArrayList<>();
+- IntArrayList ticketLevels = new IntArrayList();
+-
+- int minBlockX = Mth.floor(axisalignedbb.minX - 1.0E-7D) - 3;
+- int maxBlockX = Mth.floor(axisalignedbb.maxX + 1.0E-7D) + 3;
+-
+- int minBlockZ = Mth.floor(axisalignedbb.minZ - 1.0E-7D) - 3;
+- int maxBlockZ = Mth.floor(axisalignedbb.maxZ + 1.0E-7D) + 3;
+-
+- int minChunkX = minBlockX >> 4;
+- int maxChunkX = maxBlockX >> 4;
+-
+- int minChunkZ = minBlockZ >> 4;
+- int maxChunkZ = maxBlockZ >> 4;
+
+ ServerChunkCache chunkProvider = this.getChunkSource();
+
+ int requiredChunks = (maxChunkX - minChunkX + 1) * (maxChunkZ - minChunkZ + 1);
+- int[] loadedChunks = new int[1];
++ java.util.concurrent.atomic.AtomicInteger loadedChunks = new java.util.concurrent.atomic.AtomicInteger();
+
+- Long holderIdentifier = Long.valueOf(chunkProvider.chunkFutureAwaitCounter++);
++ Long holderIdentifier = Long.valueOf(chunkProvider.chunkFutureAwaitCounter.getAndIncrement());
++
++ int ticketLevel = 33 + net.minecraft.world.level.chunk.ChunkStatus.getDistance(chunkStatus);
+
+ java.util.function.Consumer<net.minecraft.world.level.chunk.ChunkAccess> consumer = (net.minecraft.world.level.chunk.ChunkAccess chunk) -> {
+ if (chunk != null) {
+- int ticketLevel = Math.max(33, chunkProvider.chunkMap.getUpdatingChunkIfPresent(chunk.getPos().toLong()).getTicketLevel());
+ ret.add(chunk);
+- ticketLevels.add(ticketLevel);
+ chunkProvider.addTicketAtLevel(TicketType.FUTURE_AWAIT, chunk.getPos(), ticketLevel, holderIdentifier);
+ }
+- if (++loadedChunks[0] == requiredChunks) {
++ if (loadedChunks.incrementAndGet() == requiredChunks) {
+ try {
+ onLoad.accept(java.util.Collections.unmodifiableList(ret));
+ } finally {
+ for (int i = 0, len = ret.size(); i < len; ++i) {
+ ChunkPos chunkPos = ret.get(i).getPos();
+- int ticketLevel = ticketLevels.getInt(i);
+
+ chunkProvider.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, ticketLevel, chunkPos);
+ chunkProvider.removeTicketAtLevel(TicketType.FUTURE_AWAIT, chunkPos, ticketLevel, holderIdentifier);
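loadChunksAsync fires its callback exactly once, after every chunk in the rectangle has reported in: the old int[1] counter becomes an AtomicInteger so completions on different threads cannot lose increments, and each loaded chunk holds a temporary FUTURE_AWAIT ticket until the consumer has run. The barrier idiom in isolation (illustrative class, not Paper API):

    final class LoadBarrier {
        private final java.util.concurrent.atomic.AtomicInteger completed = new java.util.concurrent.atomic.AtomicInteger();
        private final int required;
        private final Runnable onAllLoaded;

        LoadBarrier(final int required, final Runnable onAllLoaded) {
            this.required = required;
            this.onAllLoaded = onAllLoaded;
        }

        // called by each chunk's completion callback, on any thread
        void complete() {
            if (this.completed.incrementAndGet() == this.required) {
                this.onAllLoaded.run(); // exactly one caller observes the final count
            }
        }
    }

Swapping the UNKNOWN ticket in before removing the FUTURE_AWAIT ticket keeps each chunk pinned across the handoff, so nothing can unload between the two calls.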
+@@ -315,12 +328,223 @@ public class ServerLevel extends Level implements WorldGenLevel {
+ for (int cx = minChunkX; cx <= maxChunkX; ++cx) {
+ for (int cz = minChunkZ; cz <= maxChunkZ; ++cz) {
+ io.papermc.paper.chunk.system.ChunkSystem.scheduleChunkLoad(
+- this, cx, cz, net.minecraft.world.level.chunk.ChunkStatus.FULL, true, priority, consumer
++ this, cx, cz, chunkStatus, true, priority, consumer
++ );
++ }
++ }
++ }
++
++ public final void loadChunksForMoveAsync(AABB axisalignedbb, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority priority,
++ java.util.function.Consumer<java.util.List<net.minecraft.world.level.chunk.ChunkAccess>> onLoad) {
++
++ int minBlockX = Mth.floor(axisalignedbb.minX - 1.0E-7D) - 3;
++ int maxBlockX = Mth.floor(axisalignedbb.maxX + 1.0E-7D) + 3;
++
++ int minBlockZ = Mth.floor(axisalignedbb.minZ - 1.0E-7D) - 3;
++ int maxBlockZ = Mth.floor(axisalignedbb.maxZ + 1.0E-7D) + 3;
++
++ int minChunkX = minBlockX >> 4;
++ int maxChunkX = maxBlockX >> 4;
++
++ int minChunkZ = minBlockZ >> 4;
++ int maxChunkZ = maxBlockZ >> 4;
++
++ this.loadChunksAsync(minChunkX, maxChunkX, minChunkZ, maxChunkZ, priority, onLoad);
++ }
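loadChunksForMoveAsync now reduces to the generic range overload: the box is padded by three blocks per horizontal side (presumably covering the neighbouring blocks movement collision can touch), the 1.0E-7 epsilon guards against floating-point boundary hits, and block coordinates become chunk coordinates by arithmetic shift. The coordinate math, worked through:

    static int blockToChunk(final double blockCoord) {
        // floor then shift: 17.3 -> 17 -> chunk 1; -0.5 -> -1 -> chunk -1
        // (arithmetic shift rounds toward negative infinity, matching the chunk grid)
        return net.minecraft.util.Mth.floor(blockCoord) >> 4;
    }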
+
+ // Paper start - rewrite chunk system
+ public final io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler chunkTaskScheduler;
@@ -17968,11 +19782,12 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
+ throw new IllegalArgumentException(
+ "Entity chunk coordinate and serialized data do not have matching coordinates, trying to serialize coordinate " + pos.toString()
+ + " but compound says coordinate is " + nbtPos + " for world: " + this
-+ );
-+ }
+ );
+ }
+ super.write(pos, nbt);
-+ }
-+ }
+ }
+ }
+- // Paper end
+
+ private void writeEntityChunk(int chunkX, int chunkZ, net.minecraft.nbt.CompoundTag compound) throws IOException {
+ if (!io.papermc.paper.chunk.system.io.RegionFileIOThread.isRegionFileThread()) {
@@ -17998,6 +19813,56 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
+ public final io.papermc.paper.chunk.system.entity.EntityLookup getEntityLookup() {
+ return this.entityLookup;
+ }
++
++ private final java.util.concurrent.atomic.AtomicLong nonFullSyncLoadIdGenerator = new java.util.concurrent.atomic.AtomicLong();
++
++ private ChunkAccess getIfAboveStatus(int chunkX, int chunkZ, net.minecraft.world.level.chunk.ChunkStatus status) {
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder loaded =
++ this.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
++ io.papermc.paper.chunk.system.scheduling.NewChunkHolder.ChunkCompletion loadedCompletion;
++ if (loaded != null && (loadedCompletion = loaded.getLastChunkCompletion()) != null && loadedCompletion.genStatus().isOrAfter(status)) {
++ return loadedCompletion.chunk();
++ }
++
++ return null;
++ }
++
++ @Override
++ public ChunkAccess syncLoadNonFull(int chunkX, int chunkZ, net.minecraft.world.level.chunk.ChunkStatus status) {
++ if (status == null || status.isOrAfter(net.minecraft.world.level.chunk.ChunkStatus.FULL)) {
++ throw new IllegalArgumentException("Status: " + status.toString());
++ }
++ ChunkAccess loaded = this.getIfAboveStatus(chunkX, chunkZ, status);
++ if (loaded != null) {
++ return loaded;
++ }
++
++ Long ticketId = Long.valueOf(this.nonFullSyncLoadIdGenerator.getAndIncrement());
++ int ticketLevel = 33 + net.minecraft.world.level.chunk.ChunkStatus.getDistance(status);
++ this.chunkTaskScheduler.chunkHolderManager.addTicketAtLevel(
++ TicketType.NON_FULL_SYNC_LOAD, chunkX, chunkZ, ticketLevel, ticketId
++ );
++ this.chunkTaskScheduler.chunkHolderManager.processTicketUpdates();
++
++ this.chunkTaskScheduler.beginChunkLoadForNonFullSync(chunkX, chunkZ, status, ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING);
++
++ // we could do a simple spinwait here, since we do not need to process tasks while performing this load
++ // but we process tasks anyway, since that is a better use of the time spent waiting
++ this.chunkSource.mainThreadProcessor.managedBlock(() -> {
++ return ServerLevel.this.getIfAboveStatus(chunkX, chunkZ, status) != null;
++ });
++
++ loaded = ServerLevel.this.getIfAboveStatus(chunkX, chunkZ, status);
++ if (loaded == null) {
++ throw new IllegalStateException("Expected chunk to be loaded for status " + status);
++ }
++
++ this.chunkTaskScheduler.chunkHolderManager.removeTicketAtLevel(
++ TicketType.NON_FULL_SYNC_LOAD, chunkX, chunkZ, ticketLevel, ticketId
++ );
++
++ return loaded;
++ }
+ // Paper end - rewrite chunk system
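syncLoadNonFull is the template for every blocking load in the rewrite: pin the position with a ticket, schedule at BLOCKING priority, wait inside managedBlock so the main thread keeps draining its task queue instead of spinning, then drop the ticket. Calling it is a one-liner (coordinates are illustrative):

    // main thread only: returns once the chunk reaches STRUCTURE_STARTS,
    // running queued main-thread tasks while it waits
    ChunkAccess chunk = serverLevel.syncLoadNonFull(10, -4, ChunkStatus.STRUCTURE_STARTS);

Adding the ticket before scheduling matters: the ticket holds the target at the required level, so the scheduled load cannot be unloaded out from under the wait.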
+
+ public final io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader playerChunkLoader = new io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader(this);
@@ -18044,7 +19909,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
// Add env and gen to constructor, IWorldDataServer -> WorldDataServer
public ServerLevel(MinecraftServer minecraftserver, Executor executor, LevelStorageSource.LevelStorageAccess convertable_conversionsession, PrimaryLevelData iworlddataserver, ResourceKey resourcekey, LevelStem worlddimension, ChunkProgressListener worldloadlistener, boolean flag, long i, List list, boolean flag1, @Nullable RandomSequences randomsequences, org.bukkit.World.Environment env, org.bukkit.generator.ChunkGenerator gen, org.bukkit.generator.BiomeProvider biomeProvider) {
-@@ -364,16 +507,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -364,16 +588,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
// CraftBukkit end
boolean flag2 = minecraftserver.forceSynchronousWrites();
DataFixer datafixer = minecraftserver.getFixerUpper();
@@ -18066,7 +19931,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
return minecraftserver.overworld().getDataStorage();
});
this.chunkSource.getGeneratorState().ensureStructuresGenerated();
-@@ -410,6 +553,9 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -410,6 +634,9 @@ public class ServerLevel extends Level implements WorldGenLevel {
}, "random_sequences");
});
this.getCraftServer().addWorld(this.getWorld()); // CraftBukkit
@@ -18076,7 +19941,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
}
/** @deprecated */
-@@ -520,7 +666,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -520,7 +747,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
gameprofilerfiller.push("checkDespawn");
entity.checkDespawn();
gameprofilerfiller.pop();
@@ -18085,7 +19950,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
Entity entity1 = entity.getVehicle();
if (entity1 != null) {
-@@ -545,13 +691,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -545,13 +772,16 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
gameprofilerfiller.push("entityManagement");
@@ -18104,7 +19969,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
}
protected void tickTime() {
-@@ -1012,6 +1161,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1012,6 +1242,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
public void save(@Nullable ProgressListener progressListener, boolean flush, boolean savingDisabled) {
@@ -18116,7 +19981,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
ServerChunkCache chunkproviderserver = this.getChunkSource();
if (!savingDisabled) {
-@@ -1027,16 +1181,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1027,16 +1262,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
timings.worldSaveChunks.startTiming(); // Paper
@@ -18137,7 +20002,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
// CraftBukkit start - moved from MinecraftServer.saveChunks
ServerLevel worldserver1 = this;
-@@ -1172,7 +1323,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1172,7 +1404,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.removePlayerImmediately((ServerPlayer) entity, Entity.RemovalReason.DISCARDED);
}
@@ -18146,7 +20011,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
}
// CraftBukkit start
-@@ -1188,7 +1339,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1188,7 +1420,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
// CraftBukkit end
@@ -18155,7 +20020,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
}
}
-@@ -1200,10 +1351,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1200,10 +1432,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
public boolean tryAddFreshEntityWithPassengers(Entity entity, org.bukkit.event.entity.CreatureSpawnEvent.SpawnReason reason) {
// CraftBukkit end
Stream stream = entity.getSelfAndPassengers().map(Entity::getUUID); // CraftBukkit - decompile error
@@ -18169,7 +20034,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
return false;
} else {
this.addFreshEntityWithPassengers(entity, reason); // CraftBukkit
-@@ -1723,7 +1874,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1723,7 +1955,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
}
@@ -18178,7 +20043,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
bufferedwriter.write(String.format(Locale.ROOT, "block_entity_tickers: %d\n", this.blockEntityTickers.size()));
bufferedwriter.write(String.format(Locale.ROOT, "block_ticks: %d\n", this.getBlockTicks().count()));
bufferedwriter.write(String.format(Locale.ROOT, "fluid_ticks: %d\n", this.getFluidTicks().count()));
-@@ -1772,7 +1923,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1772,7 +2004,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
BufferedWriter bufferedwriter2 = Files.newBufferedWriter(path1);
try {
@@ -18187,7 +20052,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
} catch (Throwable throwable4) {
if (bufferedwriter2 != null) {
try {
-@@ -1793,7 +1944,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1793,7 +2025,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
BufferedWriter bufferedwriter3 = Files.newBufferedWriter(path2);
try {
@@ -18196,7 +20061,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
} catch (Throwable throwable6) {
if (bufferedwriter3 != null) {
try {
-@@ -1935,7 +2086,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1935,7 +2167,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
@VisibleForTesting
public String getWatchdogStats() {
@@ -18205,7 +20070,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
return BuiltInRegistries.ENTITY_TYPE.getKey(entity.getType()).toString();
}), this.blockEntityTickers.size(), ServerLevel.getTypeCount(this.blockEntityTickers, TickingBlockEntity::getType), this.getBlockTicks().count(), this.getFluidTicks().count(), this.gatherChunkSourceStats());
}
-@@ -1995,15 +2146,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1995,15 +2227,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
@Override
public LevelEntityGetter getEntities() {
org.spigotmc.AsyncCatcher.catchOp("Chunk getEntities call"); // Spigot
@@ -18224,7 +20089,7 @@ index 7cb5abfa89f842194325d26c6e95b49460c5968f..62a95a0fac59683948f34b202e6e3859
}
public void startTickingChunk(LevelChunk chunk) {
-@@ -2019,34 +2170,49 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2019,34 +2251,49 @@ public class ServerLevel extends Level implements WorldGenLevel {
@Override
public void close() throws IOException {
super.close();
@@ -18491,7 +20356,7 @@ index 481272124b7589cff0aa05b6df5b7e6f1d539414..b4be02ec4bb77059f79d3e4d6a6f1ee4
static enum TaskType {
diff --git a/src/main/java/net/minecraft/server/level/Ticket.java b/src/main/java/net/minecraft/server/level/Ticket.java
-index b346fa94b23d81da7da073f71dd12e672e0f079c..768a2667f950a635a562fa8a0c75b31a3ae9190e 100644
+index b346fa94b23d81da7da073f71dd12e672e0f079c..0edb97617f0c0da8dda901a26891b33c324715c7 100644
--- a/src/main/java/net/minecraft/server/level/Ticket.java
+++ b/src/main/java/net/minecraft/server/level/Ticket.java
@@ -6,9 +6,12 @@ public final class Ticket implements Comparable> {
@@ -18500,11 +20365,11 @@ index b346fa94b23d81da7da073f71dd12e672e0f079c..768a2667f950a635a562fa8a0c75b31a
public final T key;
- private long createdTick;
+ // Paper start - rewrite chunk system
-+ public final long removalTick;
++ public long removeDelay;
- protected Ticket(TicketType type, int level, T argument) {
-+ public Ticket(TicketType type, int level, T argument, long removalTick) {
-+ this.removalTick = removalTick;
++ public Ticket(TicketType type, int level, T argument, long removeDelay) {
++ this.removeDelay = removeDelay;
+ // Paper end - rewrite chunk system
this.type = type;
this.ticketLevel = level;
@@ -18514,7 +20379,7 @@ index b346fa94b23d81da7da073f71dd12e672e0f079c..768a2667f950a635a562fa8a0c75b31a
@Override
public String toString() {
- return "Ticket[" + this.type + " " + this.ticketLevel + " (" + this.key + ")] at " + this.createdTick;
-+ return "Ticket[" + this.type + " " + this.ticketLevel + " (" + this.key + ")] to die on " + this.removalTick; // Paper - rewrite chunk system
++ return "Ticket[" + this.type + " " + this.ticketLevel + " (" + this.key + ")] to die in " + this.removeDelay; // Paper - rewrite chunk system
}
public TicketType getType() {
@@ -18533,7 +20398,7 @@ index b346fa94b23d81da7da073f71dd12e672e0f079c..768a2667f950a635a562fa8a0c75b31a
}
}
diff --git a/src/main/java/net/minecraft/server/level/TicketType.java b/src/main/java/net/minecraft/server/level/TicketType.java
-index 6051e5f272838ef23276a90e21c2fc821ca155d1..97d1ff2af23bac14e67bca5896843325aaa5bfc1 100644
+index 6051e5f272838ef23276a90e21c2fc821ca155d1..658e63ebde81dc14c8ab5850fb246dc0aab25dea 100644
--- a/src/main/java/net/minecraft/server/level/TicketType.java
+++ b/src/main/java/net/minecraft/server/level/TicketType.java
@@ -8,6 +8,7 @@ import net.minecraft.world.level.ChunkPos;
@@ -18544,7 +20409,7 @@ index 6051e5f272838ef23276a90e21c2fc821ca155d1..97d1ff2af23bac14e67bca5896843325
private final String name;
private final Comparator comparator;
-@@ -27,6 +28,13 @@ public class TicketType {
+@@ -27,6 +28,15 @@ public class TicketType {
public static final TicketType PLUGIN = TicketType.create("plugin", (a, b) -> 0); // CraftBukkit
public static final TicketType PLUGIN_TICKET = TicketType.create("plugin_ticket", (plugin1, plugin2) -> plugin1.getClass().getName().compareTo(plugin2.getClass().getName())); // CraftBukkit
public static final TicketType CHUNK_RELIGHT = create("light_update", Long::compareTo); // Paper - ensure chunks stay loaded for lighting
@@ -18554,6 +20419,8 @@ index 6051e5f272838ef23276a90e21c2fc821ca155d1..97d1ff2af23bac14e67bca5896843325
+ public static final TicketType ENTITY_LOAD = create("entity_load", Long::compareTo);
+ public static final TicketType POI_LOAD = create("poi_load", Long::compareTo);
+ public static final TicketType UNLOAD_COOLDOWN = create("unload_cooldown", (u1, u2) -> 0, 5 * 20);
++ public static final TicketType NON_FULL_SYNC_LOAD = create("non_full_sync_load", Long::compareTo);
++ public static final TicketType DELAY_UNLOAD = create("delay_unload", Comparator.comparingLong(ChunkPos::toLong), 1);
+ // Paper end - rewrite chunk system
public static TicketType create(String name, Comparator argumentComparator) {
@@ -18645,10 +20512,25 @@ index 90c73b9075489242556a7ba749618e20c0ed0c4d..0338a6b245ee482d470f5a80da712679
while (iterator.hasNext()) {
diff --git a/src/main/java/net/minecraft/util/SortedArraySet.java b/src/main/java/net/minecraft/util/SortedArraySet.java
-index ca788f0dcec4a117b410fe8348969e056b138b1e..4f5f2c25e12ee6d977bc98d9118650cfe91e6c0e 100644
+index ca788f0dcec4a117b410fe8348969e056b138b1e..a6ac76707da39cf86113003b1f326433fdc86c86 100644
--- a/src/main/java/net/minecraft/util/SortedArraySet.java
+++ b/src/main/java/net/minecraft/util/SortedArraySet.java
+@@ -14,6 +14,14 @@ public class SortedArraySet<T> extends AbstractSet<T> {
+@@ -14,6 +14,14 @@ public class SortedArraySet extends AbstractSet {
+ T[] contents;
+ int size;
+
++ // Paper start - rewrite chunk system
++ public SortedArraySet(final SortedArraySet other) {
++ this.comparator = other.comparator;
++ this.size = other.size;
++ this.contents = Arrays.copyOf(other.contents, this.size);
++ }
++ // Paper end - rewrite chunk system
++
+ private SortedArraySet(int initialCapacity, Comparator comparator) {
+ this.comparator = comparator;
+ if (initialCapacity < 0) {
+@@ -22,6 +30,41 @@ public class SortedArraySet extends AbstractSet {
this.contents = (T[])castRawArray(new Object[initialCapacity]);
}
}
@@ -18690,7 +20572,7 @@ index ca788f0dcec4a117b410fe8348969e056b138b1e..4f5f2c25e12ee6d977bc98d9118650cf
public static <T extends Comparable<T>> SortedArraySet<T> create() {
return create(10);
-@@ -110,6 +145,31 @@ public class SortedArraySet extends AbstractSet {
+@@ -110,6 +153,31 @@ public class SortedArraySet extends AbstractSet {
}
}
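The new copy constructor exists so callers such as the rewritten ticket tracking can snapshot a set before mutating it: Arrays.copyOf detaches the backing array, so a reader still holding the old set never observes in-place changes. Copy-on-write usage, sketched:

    SortedArraySet<Integer> current = SortedArraySet.create();
    current.add(42);

    // mutate a private copy, then publish it with a single reference store;
    // readers see the old snapshot or the new one, never a half-edited set
    SortedArraySet<Integer> next = new SortedArraySet<>(current);
    next.add(7);
    current = next;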
@@ -19301,8 +21183,28 @@ index d87f02c748fe2e5b4ea251f6691e8907a152cb6d..5988c0847af4e8f0094328e91f736f25
+ }
+ // Paper end
}
+diff --git a/src/main/java/net/minecraft/world/level/LevelReader.java b/src/main/java/net/minecraft/world/level/LevelReader.java
+index fe76ec5b10242beb6d6057bd680484fc63b7eac3..7f0952fa312e2870f26d94344408b9dcc95f4cc3 100644
+--- a/src/main/java/net/minecraft/world/level/LevelReader.java
++++ b/src/main/java/net/minecraft/world/level/LevelReader.java
+@@ -26,6 +26,15 @@ public interface LevelReader extends BlockAndTintGetter, CollisionGetter, Signal
+ @Nullable
+ ChunkAccess getChunk(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create);
+
++ // Paper start - rewrite chunk system
++ default ChunkAccess syncLoadNonFull(int chunkX, int chunkZ, ChunkStatus status) {
++ if (status == null || status.isOrAfter(ChunkStatus.FULL)) {
++ throw new IllegalArgumentException("Status: " + status.getName());
++ }
++ return this.getChunk(chunkX, chunkZ, status, true);
++ }
++ // Paper end - rewrite chunk system
++
+ @Nullable ChunkAccess getChunkIfLoadedImmediately(int x, int z); // Paper - ifLoaded api (we need this since current impl blocks if the chunk is loading)
+
+ /** @deprecated */
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
-index f739a175e26f250c652c73b8985158fe37c2823a..a70b68cdff1e5a793b8b3a214cb8ea0ed3ff2e4b 100644
+index f739a175e26f250c652c73b8985158fe37c2823a..5f4fa76fe3a1a0a4fc11064fcf57bfab20bd9729 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
@@ -114,7 +114,7 @@ public abstract class ChunkGenerator {
@@ -19314,6 +21216,15 @@ index f739a175e26f250c652c73b8985158fe37c2823a..a70b68cdff1e5a793b8b3a214cb8ea0e
}
public abstract void applyCarvers(WorldGenRegion chunkRegion, long seed, RandomState noiseConfig, BiomeManager biomeAccess, StructureManager structureAccessor, ChunkAccess chunk, GenerationStep.Carving carverStep);
+@@ -287,7 +287,7 @@ public abstract class ChunkGenerator {
+ return Pair.of(placement.getLocatePos(pos), holder);
+ }
+
+- ChunkAccess ichunkaccess = world.getChunk(pos.x, pos.z, ChunkStatus.STRUCTURE_STARTS);
++ ChunkAccess ichunkaccess = world.syncLoadNonFull(pos.x, pos.z, ChunkStatus.STRUCTURE_STARTS); // Paper - rewrite chunk system
+
+ structurestart = structureAccessor.getStartForStructure(SectionPos.bottomOf(ichunkaccess), (Structure) holder.value(), ichunkaccess);
+ } while (structurestart == null);
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java b/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java
index fb5a06a908d2b42bf0530b62ed648548499d9f87..ec55711e912fe6cb8f797c0b21bcef273966a47a 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java
@@ -20403,6 +22314,169 @@ index 54308f1decc3982f30bf8b7a8a9d8865bfdbb9fd..902156477bdfc9917105f1229f760c26
Iterator iterator = set.iterator();
while (iterator.hasNext()) {
+diff --git a/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java b/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
+index 1ca00340aaa201dd34e5c350d23ef53e126a0ca6..16356d7f388561300e794a52f3f263b8e7d9b880 100644
+--- a/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
++++ b/src/main/java/net/minecraft/world/level/levelgen/structure/StructureCheck.java
+@@ -50,8 +50,101 @@ public class StructureCheck {
+ private final BiomeSource biomeSource;
+ private final long seed;
+ private final DataFixer fixerUpper;
+- private final Long2ObjectMap<Object2IntMap<Structure>> loadedChunks = new Long2ObjectOpenHashMap<>();
+- private final Map<Structure, Long2BooleanMap> featureChecks = new HashMap<>();
++ // Paper start - rewrite chunk system - synchronise this class
++ // additionally, make sure to purge entries from the maps so it does not leak memory
++ private static final int CHUNK_TOTAL_LIMIT = 50 * (2 * 100 + 1) * (2 * 100 + 1); // cache 50 structure lookups
++ private static final int PER_FEATURE_CHECK_LIMIT = 50 * (2 * 100 + 1) * (2 * 100 + 1); // cache 50 structure lookups
++
++ private final SynchronisedLong2ObjectMap<Object2IntMap<Structure>> loadedChunksSafe = new SynchronisedLong2ObjectMap<>(CHUNK_TOTAL_LIMIT);
++ private final java.util.concurrent.ConcurrentHashMap<Structure, SynchronisedLong2BooleanMap> featureChecksSafe = new java.util.concurrent.ConcurrentHashMap<>();
++
++ private static final class SynchronisedLong2ObjectMap<V> {
++ private final it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<V> map = new it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<>();
++ private final int limit;
++
++ public SynchronisedLong2ObjectMap(final int limit) {
++ this.limit = limit;
++ }
++
++ // must hold lock on map
++ private void purgeEntries() {
++ while (this.map.size() > this.limit) {
++ this.map.removeLast();
++ }
++ }
++
++ public V get(final long key) {
++ synchronized (this.map) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ }
++
++ public V put(final long key, final V value) {
++ synchronized (this.map) {
++ final V ret = this.map.putAndMoveToFirst(key, value);
++ this.purgeEntries();
++ return ret;
++ }
++ }
++
++ public V compute(final long key, final java.util.function.BiFunction<? super Long, ? super V, ? extends V> remappingFunction) {
++ synchronized (this.map) {
++ // first, compute the value - if one is added, it will be at the last entry
++ this.map.compute(key, remappingFunction);
++ // move the entry to first, just in case it was added at last
++ final V ret = this.map.getAndMoveToFirst(key);
++ // now purge the last entries
++ this.purgeEntries();
++
++ return ret;
++ }
++ }
++ }
++
++ private static final class SynchronisedLong2BooleanMap {
++ private final it.unimi.dsi.fastutil.longs.Long2BooleanLinkedOpenHashMap map = new it.unimi.dsi.fastutil.longs.Long2BooleanLinkedOpenHashMap();
++ private final int limit;
++
++ public SynchronisedLong2BooleanMap(final int limit) {
++ this.limit = limit;
++ }
++
++ // must hold lock on map
++ private void purgeEntries() {
++ while (this.map.size() > this.limit) {
++ this.map.removeLastBoolean();
++ }
++ }
++
++ public boolean remove(final long key) {
++ synchronized (this.map) {
++ return this.map.remove(key);
++ }
++ }
++
++ // note: ifAbsent may run outside the lock, so two racing threads can both compute a value; the first to publish wins and later results are discarded
++ public boolean getOrCompute(final long key, final it.unimi.dsi.fastutil.longs.Long2BooleanFunction ifAbsent) {
++ synchronized (this.map) {
++ if (this.map.containsKey(key)) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ }
++
++ final boolean put = ifAbsent.get(key);
++
++ synchronized (this.map) {
++ if (this.map.containsKey(key)) {
++ return this.map.getAndMoveToFirst(key);
++ }
++ this.map.putAndMoveToFirst(key, put);
++
++ this.purgeEntries();
++
++ return put;
++ }
++ }
++ }
++ // Paper end - rewrite chunk system - synchronise this class
+
+ public StructureCheck(ChunkScanAccess chunkIoWorker, RegistryAccess registryManager, StructureTemplateManager structureTemplateManager, ResourceKey<Level> worldKey, ChunkGenerator chunkGenerator, RandomState noiseConfig, LevelHeightAccessor world, BiomeSource biomeSource, long seed, DataFixer dataFixer) { // Paper - fix missing CB diff
+ this.storageAccess = chunkIoWorker;
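Both caches above are bounded LRU maps guarded by their own monitor: getAndMoveToFirst refreshes recency on read, putAndMoveToFirst inserts at the head, and purgeEntries pops from the tail until the size limit holds. The same policy in JDK-only form, for illustration (fastutil's primitive-keyed linked maps avoid the boxing this version pays):

    final class LruCache<K, V> extends java.util.LinkedHashMap<K, V> {
        private final int limit;

        LruCache(final int limit) {
            super(16, 0.75f, true); // access-order iteration: eldest = least recently used
            this.limit = limit;
        }

        @Override
        protected boolean removeEldestEntry(final java.util.Map.Entry<K, V> eldest) {
            return this.size() > this.limit; // evict the coldest entry once past the limit
        }
    }

As in StructureCheck, every access must be externally synchronized, since LinkedHashMap reorders internally even on reads in access-order mode.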
+@@ -70,7 +163,7 @@ public class StructureCheck {
+
+ public StructureCheckResult checkStart(ChunkPos pos, Structure type, boolean skipReferencedStructures) {
+ long l = pos.toLong();
+- Object2IntMap object2IntMap = this.loadedChunks.get(l);
++ Object2IntMap object2IntMap = this.loadedChunksSafe.get(l); // Paper - rewrite chunk system - synchronise this class
+ if (object2IntMap != null) {
+ return this.checkStructureInfo(object2IntMap, type, skipReferencedStructures);
+ } else {
+@@ -78,9 +171,9 @@ public class StructureCheck {
+ if (structureCheckResult != null) {
+ return structureCheckResult;
+ } else {
+- boolean bl = this.featureChecks.computeIfAbsent(type, (structure2) -> {
+- return new Long2BooleanOpenHashMap();
+- }).computeIfAbsent(l, (chunkPos) -> {
++ boolean bl = this.featureChecksSafe.computeIfAbsent(type, (structure2) -> { // Paper - rewrite chunk system - synchronise this class
++ return new SynchronisedLong2BooleanMap(PER_FEATURE_CHECK_LIMIT); // Paper - rewrite chunk system - synchronise this class
++ }).getOrCompute(l, (chunkPos) -> { // Paper - rewrite chunk system - synchronise this class
+ return this.canCreateStructure(pos, type);
+ });
+ return !bl ? StructureCheckResult.START_NOT_PRESENT : StructureCheckResult.CHUNK_LOAD_NEEDED;
+@@ -193,17 +286,26 @@ public class StructureCheck {
+ }
+
+ private void storeFullResults(long pos, Object2IntMap<Structure> referencesByStructure) {
+- this.loadedChunks.put(pos, deduplicateEmptyMap(referencesByStructure));
+- this.featureChecks.values().forEach((generationPossibilityByChunkPos) -> {
+- generationPossibilityByChunkPos.remove(pos);
+- });
++ // Paper start - rewrite chunk system - synchronise this class
++ this.loadedChunksSafe.put(pos, deduplicateEmptyMap(referencesByStructure));
++ // once we insert into loadedChunks, we don't really need to be very careful about removing everything
++ // from this map, as everything that checks this map uses loadedChunks first
++ // so, one way or another it's a race condition that doesn't matter
++ for (SynchronisedLong2BooleanMap value : this.featureChecksSafe.values()) {
++ value.remove(pos);
++ }
++ // Paper end - rewrite chunk system - synchronise this class
+ }
+
+ public void incrementReference(ChunkPos pos, Structure structure) {
+- this.loadedChunks.compute(pos.toLong(), (posx, referencesByStructure) -> {
+- if (referencesByStructure == null || referencesByStructure.isEmpty()) {
++ this.loadedChunksSafe.compute(pos.toLong(), (posx, referencesByStructure) -> { // Paper start - rewrite chunk system - synchronise this class
++ // make this COW so that we do not mutate state that may be currently in use
++ if (referencesByStructure == null) {
+ referencesByStructure = new Object2IntOpenHashMap<>();
++ } else {
++ referencesByStructure = referencesByStructure instanceof Object2IntOpenHashMap fastClone ? fastClone.clone() : new Object2IntOpenHashMap<>(referencesByStructure);
+ }
++ // Paper end - rewrite chunk system - synchronise this class
+
+ referencesByStructure.computeInt(structure, (feature, references) -> {
+ return references == null ? 1 : references + 1;
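incrementReference follows the same copy-on-write discipline: the per-chunk reference map is cloned before mutation inside compute, whose body runs under the map's monitor, so writers serialize while readers keep whatever snapshot they already fetched. The idiom, reduced (types as in the surrounding code):

    map.compute(pos, (key, old) -> {
        // never mutate 'old' in place: a concurrent reader may still hold it
        Object2IntOpenHashMap<Structure> next =
            old == null ? new Object2IntOpenHashMap<>() : ((Object2IntOpenHashMap<Structure>) old).clone();
        next.addTo(structure, 1);
        return next;
    });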
diff --git a/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java b/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java
index 9f6c2e5b5d9e8d714a47c770e255d06c0ef7c190..ac807277a6b26d140ea9873d17c7aa4fb5fe37b2 100644
--- a/src/main/java/net/minecraft/world/ticks/LevelChunkTicks.java
diff --git a/patches/server/0033-Entity-Origin-API.patch b/patches/server/0033-Entity-Origin-API.patch
index fc3b33f3fd..2543d8a36b 100644
--- a/patches/server/0033-Entity-Origin-API.patch
+++ b/patches/server/0033-Entity-Origin-API.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Entity Origin API
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 62a95a0fac59683948f34b202e6e3859b6652d6d..d47e99ac96e622296d045cfcf93b53dddd314827 100644
+index 995be2fd84ce343d7430d9658f91868e653da43d..4af495424d60632b770cd1cb02157bbcf34366e8 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2282,6 +2282,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2363,6 +2363,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
entity.updateDynamicGameEventListener(DynamicGameEventListener::add);
entity.valid = true; // CraftBukkit
diff --git a/patches/server/0042-Disable-thunder.patch b/patches/server/0042-Disable-thunder.patch
index 66bdf8eafc..dcc92cc9a4 100644
--- a/patches/server/0042-Disable-thunder.patch
+++ b/patches/server/0042-Disable-thunder.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Disable thunder
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index d47e99ac96e622296d045cfcf93b53dddd314827..2df6cc14176465dcdc7cfc8d12382bf7edc49666 100644
+index 4af495424d60632b770cd1cb02157bbcf34366e8..a2a7568499bddb3c515ef8155e6e7e827f2a5b97 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -752,7 +752,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -833,7 +833,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
gameprofilerfiller.push("thunder");
BlockPos blockposition;
diff --git a/patches/server/0043-Disable-ice-and-snow.patch b/patches/server/0043-Disable-ice-and-snow.patch
index e2f2dccd18..2edea71126 100644
--- a/patches/server/0043-Disable-ice-and-snow.patch
+++ b/patches/server/0043-Disable-ice-and-snow.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Disable ice and snow
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 2df6cc14176465dcdc7cfc8d12382bf7edc49666..feee30aa0bccfa40765f1c4a179ba04907b11433 100644
+index a2a7568499bddb3c515ef8155e6e7e827f2a5b97..e253531f5da93d2a5b328e1af6eef2a6d9a72bc1 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -783,7 +783,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -864,7 +864,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
int l;
int i1;
diff --git a/patches/server/0071-Add-World-Util-Methods.patch b/patches/server/0071-Add-World-Util-Methods.patch
index 1610dde072..0b3d112441 100644
--- a/patches/server/0071-Add-World-Util-Methods.patch
+++ b/patches/server/0071-Add-World-Util-Methods.patch
@@ -6,7 +6,7 @@ Subject: [PATCH] Add World Util Methods
Methods that can be used for other patches to help improve logic.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index feee30aa0bccfa40765f1c4a179ba04907b11433..4598ca4cd9da931db33ca26576bfbcdf7df99094 100644
+index e253531f5da93d2a5b328e1af6eef2a6d9a72bc1..d48ad601dc9f8b3ed3bc0e2dd068981eb7613c30 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -221,7 +221,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
diff --git a/patches/server/0075-Configurable-spawn-chances-for-skeleton-horses.patch b/patches/server/0075-Configurable-spawn-chances-for-skeleton-horses.patch
index 6b1ce4eb11..83e3e38b9b 100644
--- a/patches/server/0075-Configurable-spawn-chances-for-skeleton-horses.patch
+++ b/patches/server/0075-Configurable-spawn-chances-for-skeleton-horses.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Configurable spawn chances for skeleton horses
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 4598ca4cd9da931db33ca26576bfbcdf7df99094..f6be0fefb461c1b5fd0feb5553d36b5817b96e3a 100644
+index d48ad601dc9f8b3ed3bc0e2dd068981eb7613c30..5cde98b0225799bccdb84ce6b6cf06b904d5986c 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -756,7 +756,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -837,7 +837,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
blockposition = this.findLightningTargetAround(this.getBlockRandomPos(j, 0, k, 15));
if (this.isRainingAt(blockposition)) {
DifficultyInstance difficultydamagescaler = this.getCurrentDifficultyAt(blockposition);
diff --git a/patches/server/0077-Only-process-BlockPhysicsEvent-if-a-plugin-has-a-lis.patch b/patches/server/0077-Only-process-BlockPhysicsEvent-if-a-plugin-has-a-lis.patch
index 69cab8d0d5..42404327c2 100644
--- a/patches/server/0077-Only-process-BlockPhysicsEvent-if-a-plugin-has-a-lis.patch
+++ b/patches/server/0077-Only-process-BlockPhysicsEvent-if-a-plugin-has-a-lis.patch
@@ -18,7 +18,7 @@ index 821725460b62ebadedb789f4408ef172416c2092..81abb732e2bb3bca683028d505e74850
this.profiler.push(() -> {
return worldserver + " " + worldserver.dimension().location();
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index f6be0fefb461c1b5fd0feb5553d36b5817b96e3a..33c6a673d98323a3790b3e494841dffa9ce5f4f5 100644
+index 5cde98b0225799bccdb84ce6b6cf06b904d5986c..699fc5f11efa8240723854cd2e91877b278f9495 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -220,6 +220,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
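The technique behind this patch is generic: consult BlockPhysicsEvent's handler list before constructing or firing the event at all. A minimal sketch of the pattern against the Bukkit event API (the surrounding method and variable names are illustrative stand-ins, not this patch's diff body):

    import org.bukkit.Bukkit;
    import org.bukkit.block.Block;
    import org.bukkit.event.block.BlockPhysicsEvent;

    final class PhysicsEventSketch {
        // returns true when the physics update should be cancelled
        static boolean firePhysicsEvent(Block bukkitBlock) {
            if (BlockPhysicsEvent.getHandlerList().getRegisteredListeners().length == 0) {
                return false; // nobody listens: skip allocating and firing the event
            }
            BlockPhysicsEvent event = new BlockPhysicsEvent(bukkitBlock, bukkitBlock.getBlockData());
            Bukkit.getPluginManager().callEvent(event);
            return event.isCancelled();
        }
    }

Physics updates fire constantly, so skipping the allocation and dispatch entirely when no plugin registered a listener is where the win comes from.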
diff --git a/patches/server/0078-Entity-AddTo-RemoveFrom-World-Events.patch b/patches/server/0078-Entity-AddTo-RemoveFrom-World-Events.patch
index 7b28d7371f..d241ca3a14 100644
--- a/patches/server/0078-Entity-AddTo-RemoveFrom-World-Events.patch
+++ b/patches/server/0078-Entity-AddTo-RemoveFrom-World-Events.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Entity AddTo/RemoveFrom World Events
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 33c6a673d98323a3790b3e494841dffa9ce5f4f5..507cab9d689b774b320fac00f7760c4143957b67 100644
+index 699fc5f11efa8240723854cd2e91877b278f9495..7da059a2e8560250021654f3d5586027c15c7dd3 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2292,6 +2292,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2373,6 +2373,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
entity.setOrigin(entity.getOriginVector().toLocation(getWorld()));
}
// Paper end
@@ -16,7 +16,7 @@ index 33c6a673d98323a3790b3e494841dffa9ce5f4f5..507cab9d689b774b320fac00f7760c41
}
public void onTrackingEnd(Entity entity) {
-@@ -2367,6 +2368,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2448,6 +2449,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
}
// CraftBukkit end
diff --git a/patches/server/0085-Fix-Cancelling-BlockPlaceEvent-triggering-physics.patch b/patches/server/0085-Fix-Cancelling-BlockPlaceEvent-triggering-physics.patch
index 953b68fa47..e84681435a 100644
--- a/patches/server/0085-Fix-Cancelling-BlockPlaceEvent-triggering-physics.patch
+++ b/patches/server/0085-Fix-Cancelling-BlockPlaceEvent-triggering-physics.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Fix Cancelling BlockPlaceEvent triggering physics
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 507cab9d689b774b320fac00f7760c4143957b67..508c1f2874db3add98aad29bd4eee6c9f5d58006 100644
+index 7da059a2e8560250021654f3d5586027c15c7dd3..35b59c83a35bc3a4d05bc95b0caec1f1865589c0 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1517,6 +1517,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1598,6 +1598,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
@Override
public void updateNeighborsAt(BlockPos pos, Block sourceBlock) {
diff --git a/patches/server/0094-Improve-Maps-in-item-frames-performance-and-bug-fixe.patch b/patches/server/0094-Improve-Maps-in-item-frames-performance-and-bug-fixe.patch
index 824248e1f9..08c6e3775d 100644
--- a/patches/server/0094-Improve-Maps-in-item-frames-performance-and-bug-fixe.patch
+++ b/patches/server/0094-Improve-Maps-in-item-frames-performance-and-bug-fixe.patch
@@ -13,10 +13,10 @@ custom renderers are in use, defaulting to the much simpler Vanilla system.
 Additionally, numerous issues with player position tracking on maps have been fixed.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 508c1f2874db3add98aad29bd4eee6c9f5d58006..90c19a08b51dee98701fc397ab95bf93cb1a2102 100644
+index 35b59c83a35bc3a4d05bc95b0caec1f1865589c0..17d9c5a6c357ac5e3eacd9e4d2cff9a05ec0f262 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2313,6 +2313,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2394,6 +2394,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
{
if ( iter.next().player == entity )
{
diff --git a/patches/server/0117-Bound-Treasure-Maps-to-World-Border.patch b/patches/server/0117-Bound-Treasure-Maps-to-World-Border.patch
index 3b7f0e0fa4..a546d2ee4a 100644
--- a/patches/server/0117-Bound-Treasure-Maps-to-World-Border.patch
+++ b/patches/server/0117-Bound-Treasure-Maps-to-World-Border.patch
@@ -34,7 +34,7 @@ index 52325a99ea38530ad69a39ac0215233139f35268..dd74e8a034022fe72a1652f92712182b
return (double) pos.getMaxBlockX() > this.getMinX() && (double) pos.getMinBlockX() < this.getMaxX() && (double) pos.getMaxBlockZ() > this.getMinZ() && (double) pos.getMinBlockZ() < this.getMaxZ();
}
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
-index a70b68cdff1e5a793b8b3a214cb8ea0ed3ff2e4b..9347d321eaba21e0ef9662ebcacae64c19149e1d 100644
+index 5f4fa76fe3a1a0a4fc11064fcf57bfab20bd9729..4da303d7e15496f04f0e27bfb613176bc2a72b76 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
@@ -217,6 +217,7 @@ public abstract class ChunkGenerator {
diff --git a/patches/server/0170-PlayerNaturallySpawnCreaturesEvent.patch b/patches/server/0170-PlayerNaturallySpawnCreaturesEvent.patch
index 0d17f70c5f..2014fed77e 100644
--- a/patches/server/0170-PlayerNaturallySpawnCreaturesEvent.patch
+++ b/patches/server/0170-PlayerNaturallySpawnCreaturesEvent.patch
@@ -40,10 +40,10 @@ index a502d293cedb2f507e6cf1792429b36685ed1910..e50af28f806593a0171ad7cee5805f74
return true;
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index 5c97b0069d20815c2a1d61bf34596a8a928d8940..bf802a791e857b0018cc760a24c32981eb732f68 100644
+index b0687dcf8af84af627b67e7fbb68170a2fd28da0..5cb151a7d89c7281b03f24c5f79afb7edf7cbfea 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -716,6 +716,15 @@ public class ServerChunkCache extends ChunkSource {
+@@ -565,6 +565,15 @@ public class ServerChunkCache extends ChunkSource {
boolean flag2 = this.level.getGameRules().getBoolean(GameRules.RULE_DOMOBSPAWNING) && !this.level.players().isEmpty(); // CraftBukkit
Collections.shuffle(list);
diff --git a/patches/server/0192-Block-Enderpearl-Travel-Exploit.patch b/patches/server/0192-Block-Enderpearl-Travel-Exploit.patch
index 632ddd9717..be2799b457 100644
--- a/patches/server/0192-Block-Enderpearl-Travel-Exploit.patch
+++ b/patches/server/0192-Block-Enderpearl-Travel-Exploit.patch
@@ -16,10 +16,10 @@ public net.minecraft.world.entity.projectile.Projectile cachedOwner
public net.minecraft.world.entity.projectile.Projectile ownerUUID
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 90c19a08b51dee98701fc397ab95bf93cb1a2102..317ba116e52c193fbd0a9cd853bc03ed640cc70f 100644
+index 17d9c5a6c357ac5e3eacd9e4d2cff9a05ec0f262..f2eefdeab44ad4f7c0abd0e55e688e74a83abfff 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2246,6 +2246,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2327,6 +2327,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
public void onTickingEnd(Entity entity) {
ServerLevel.this.entityTickList.remove(entity);
diff --git a/patches/server/0193-Expand-World.spawnParticle-API-and-add-Builder.patch b/patches/server/0193-Expand-World.spawnParticle-API-and-add-Builder.patch
index 479a9d9307..39416db812 100644
--- a/patches/server/0193-Expand-World.spawnParticle-API-and-add-Builder.patch
+++ b/patches/server/0193-Expand-World.spawnParticle-API-and-add-Builder.patch
@@ -10,10 +10,10 @@ Adds an option to control the force mode of the particle.
This adds a new Builder API which is much friendlier to use.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 317ba116e52c193fbd0a9cd853bc03ed640cc70f..68d32251091d103cb2d5afca8f4461631eff83c1 100644
+index f2eefdeab44ad4f7c0abd0e55e688e74a83abfff..68257f257dd3b167e237482c8d149590103896b2 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1636,12 +1636,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1717,12 +1717,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
 public <T extends ParticleOptions> int sendParticles(ServerPlayer sender, T t0, double d0, double d1, double d2, int i, double d3, double d4, double d5, double d6, boolean force) {
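For reference, the Builder API this patch introduces is used roughly like this (a usage sketch against Paper's com.destroystokyo.paper.ParticleBuilder; the particle, counts and offsets are made-up values):

    import com.destroystokyo.paper.ParticleBuilder;
    import org.bukkit.Location;
    import org.bukkit.Particle;

    final class ParticleSketch {
        static void burst(Location where) {
            new ParticleBuilder(Particle.FLAME)
                .location(where)       // world + x/y/z in one call
                .count(20)             // number of particles
                .offset(0.5, 0.5, 0.5) // random spread per axis
                .extra(0.01)           // usually the particle speed
                .force(true)           // the force mode added by this patch
                .spawn();
        }
    }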
diff --git a/patches/server/0216-InventoryCloseEvent-Reason-API.patch b/patches/server/0216-InventoryCloseEvent-Reason-API.patch
index a025c6baf9..25ebe1a7df 100644
--- a/patches/server/0216-InventoryCloseEvent-Reason-API.patch
+++ b/patches/server/0216-InventoryCloseEvent-Reason-API.patch
@@ -7,10 +7,10 @@ Allows you to determine why an inventory was closed, enabling plugin developers
to "confirm" things based on if it was player triggered close or not.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 68d32251091d103cb2d5afca8f4461631eff83c1..0dd9b622f652cc67e365032a948df4c40c315a80 100644
+index 68257f257dd3b167e237482c8d149590103896b2..33ce550ea68d4862e0966ed827200cf426909d85 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1368,7 +1368,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1449,7 +1449,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
for (net.minecraft.world.level.block.entity.BlockEntity tileentity : chunk.getBlockEntities().values()) {
if (tileentity instanceof net.minecraft.world.Container) {
for (org.bukkit.entity.HumanEntity h : Lists.newArrayList(((net.minecraft.world.Container) tileentity).getViewers())) {
@@ -19,7 +19,7 @@ index 68d32251091d103cb2d5afca8f4461631eff83c1..0dd9b622f652cc67e365032a948df4c4
}
}
}
-@@ -2336,7 +2336,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2417,7 +2417,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
// Spigot Start
if (entity.getBukkitEntity() instanceof org.bukkit.inventory.InventoryHolder && (!(entity instanceof ServerPlayer) || entity.getRemovalReason() != Entity.RemovalReason.KILLED)) { // SPIGOT-6876: closeInventory clears death message
for (org.bukkit.entity.HumanEntity h : Lists.newArrayList(((org.bukkit.inventory.InventoryHolder) entity.getBukkitEntity()).getInventory().getViewers())) {
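A minimal listener using the reason API added here might look like this (assuming Paper's InventoryCloseEvent#getReason and its Reason enum):

    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;
    import org.bukkit.event.inventory.InventoryCloseEvent;

    final class ConfirmCloseListener implements Listener {
        @EventHandler
        public void onClose(InventoryCloseEvent event) {
            // Only treat it as a "confirm" when the player closed the window
            // themselves, not when a teleport, death or plugin closed it.
            if (event.getReason() == InventoryCloseEvent.Reason.PLAYER) {
                // confirm-on-close logic here
            }
        }
    }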
diff --git a/patches/server/0236-Add-Debug-Entities-option-to-debug-dupe-uuid-issues.patch b/patches/server/0236-Add-Debug-Entities-option-to-debug-dupe-uuid-issues.patch
index 24971d0867..dbc0856023 100644
--- a/patches/server/0236-Add-Debug-Entities-option-to-debug-dupe-uuid-issues.patch
+++ b/patches/server/0236-Add-Debug-Entities-option-to-debug-dupe-uuid-issues.patch
@@ -29,7 +29,7 @@ index e50af28f806593a0171ad7cee5805f74b25fec89..7495bd988a48cbb977ebac25854547ae
protected void tick() {
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 0dd9b622f652cc67e365032a948df4c40c315a80..203dcc314b20a427a827eabc1713dc3abdcca467 100644
+index 33ce550ea68d4862e0966ed827200cf426909d85..6b157b362cffedae26133fc0f0af1094655ee11f 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -221,6 +221,9 @@ public class ServerLevel extends Level implements WorldGenLevel {
@@ -42,7 +42,7 @@ index 0dd9b622f652cc67e365032a948df4c40c315a80..203dcc314b20a427a827eabc1713dc3a
@Override public LevelChunk getChunkIfLoaded(int x, int z) { // Paper - this was added in world too but keeping here for NMS ABI
return this.chunkSource.getChunk(x, z, false);
-@@ -1330,7 +1333,28 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1411,7 +1414,28 @@ public class ServerLevel extends Level implements WorldGenLevel {
// CraftBukkit start
private boolean addEntity(Entity entity, CreatureSpawnEvent.SpawnReason spawnReason) {
org.spigotmc.AsyncCatcher.catchOp("entity add"); // Spigot
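The org.spigotmc.AsyncCatcher.catchOp guard visible in this hunk fails fast when a guarded operation runs off the server thread. Approximately (a from-memory sketch, not Spigot's verbatim source):

    // Sketch of Spigot's main-thread guard: any thread other than the server
    // thread calling a guarded operation fails fast instead of corrupting state.
    public final class AsyncCatcherSketch {
        public static boolean enabled = true;
        public static Thread serverThread; // set once at startup

        public static void catchOp(String operation) {
            if (enabled && Thread.currentThread() != serverThread) {
                throw new IllegalStateException("Asynchronous " + operation + "!");
            }
        }
    }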
diff --git a/patches/server/0301-Entity-getEntitySpawnReason.patch b/patches/server/0301-Entity-getEntitySpawnReason.patch
index f27f6c573c..f3a62a43de 100644
--- a/patches/server/0301-Entity-getEntitySpawnReason.patch
+++ b/patches/server/0301-Entity-getEntitySpawnReason.patch
@@ -10,10 +10,10 @@ persisting Living Entity, SPAWNER for spawners,
or DEFAULT since data was not stored.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 203dcc314b20a427a827eabc1713dc3abdcca467..80c3b79db7ee83467de839444aeac4cfad734564 100644
+index 6b157b362cffedae26133fc0f0af1094655ee11f..986a509998d217228eb1dc2b5815787599e02d6b 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1348,6 +1348,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1429,6 +1429,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
return true;
}
// Paper end
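On the API side the stored value is read back through Entity#getEntitySpawnReason; for example (a usage sketch against Paper's API):

    import org.bukkit.entity.Entity;
    import org.bukkit.event.entity.CreatureSpawnEvent;

    final class SpawnReasonSketch {
        static boolean isFromSpawner(Entity entity) {
            // SPAWNER if a mob spawner created it; DEFAULT when the entity
            // predates the reason being stored.
            return entity.getEntitySpawnReason() == CreatureSpawnEvent.SpawnReason.SPAWNER;
        }
    }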
diff --git a/patches/server/0313-Configurable-Keep-Spawn-Loaded-range-per-world.patch b/patches/server/0313-Configurable-Keep-Spawn-Loaded-range-per-world.patch
index 7d248a8ab3..12b4b9b421 100644
--- a/patches/server/0313-Configurable-Keep-Spawn-Loaded-range-per-world.patch
+++ b/patches/server/0313-Configurable-Keep-Spawn-Loaded-range-per-world.patch
@@ -63,10 +63,10 @@ index ac0c25ec9a06163f0f7290f9813fd5177b7ff87d..798a9083d78d49bc7c9e1d3dfb70c30e
// this.updateMobSpawningFlags();
worldserver.setSpawnSettings(this.isSpawningMonsters(), this.isSpawningAnimals());
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 80c3b79db7ee83467de839444aeac4cfad734564..e6fcc3f39e99e817405776fc05efce9605000af2 100644
+index 986a509998d217228eb1dc2b5815787599e02d6b..773fea9c2c4bef931439b5663471c010d9a1297c 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1785,12 +1785,84 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1866,12 +1866,84 @@ public class ServerLevel extends Level implements WorldGenLevel {
return ((MapIndex) this.getServer().overworld().getDataStorage().computeIfAbsent(MapIndex::load, MapIndex::new, "idcounts")).getFreeAuxValueForMap();
}
diff --git a/patches/server/0332-Optimise-EntityGetter-getPlayerByUUID.patch b/patches/server/0332-Optimise-EntityGetter-getPlayerByUUID.patch
index db849ce185..23d9cd2d02 100644
--- a/patches/server/0332-Optimise-EntityGetter-getPlayerByUUID.patch
+++ b/patches/server/0332-Optimise-EntityGetter-getPlayerByUUID.patch
@@ -6,10 +6,10 @@ Subject: [PATCH] Optimise EntityGetter#getPlayerByUUID
Use the PlayerList map instead of iterating over all players
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index e6fcc3f39e99e817405776fc05efce9605000af2..15fdb6d6307bad251be9272d44bea9fbad90e55f 100644
+index 773fea9c2c4bef931439b5663471c010d9a1297c..0efc377743e93a0120843cab192753d037e88a73 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -469,6 +469,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -550,6 +550,15 @@ public class ServerLevel extends Level implements WorldGenLevel {
});
}
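The override itself is small; a sketch of its shape in ServerLevel, assuming PlayerList's UUID-keyed lookup (Mojang-mapped names; the accessor naming for the entity's world varies by version, so treat this as illustrative rather than the diff body):

    // Sketch: O(1) lookup via the server-wide player map instead of
    // iterating this level's player list.
    @Nullable
    @Override
    public Player getPlayerByUUID(UUID uuid) {
        final ServerPlayer player = this.getServer().getPlayerList().getPlayer(uuid);
        // keep the per-level contract: only return players in this world
        return player != null && player.level() == this ? player : null;
    }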
diff --git a/patches/server/0336-Entity-Activation-Range-2.0.patch b/patches/server/0336-Entity-Activation-Range-2.0.patch
index ec17675e05..03914ad08e 100644
--- a/patches/server/0336-Entity-Activation-Range-2.0.patch
+++ b/patches/server/0336-Entity-Activation-Range-2.0.patch
@@ -18,7 +18,7 @@ public net.minecraft.world.entity.Entity isInsidePortal
public net.minecraft.world.entity.LivingEntity jumping
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92744066cd 100644
+index 0efc377743e93a0120843cab192753d037e88a73..b12e9da3eebda396769b30f4b7e37a78f3bcb060 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -2,7 +2,6 @@ package net.minecraft.server.level;
@@ -29,7 +29,7 @@ index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92
import com.google.common.collect.Lists;
import com.mojang.datafixers.DataFixer;
import com.mojang.datafixers.util.Pair;
-@@ -1105,17 +1104,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1186,17 +1185,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
++TimingHistory.entityTicks; // Paper - timings
// Spigot start
co.aikar.timings.Timing timer; // Paper
@@ -51,7 +51,7 @@ index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92
try {
// Paper end - timings
entity.setOldPosAndRot();
-@@ -1126,9 +1125,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1207,9 +1206,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
return BuiltInRegistries.ENTITY_TYPE.getKey(entity.getType()).toString();
});
gameprofilerfiller.incrementCounter("tickNonPassenger");
@@ -65,7 +65,7 @@ index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92
Iterator iterator = entity.getPassengers().iterator();
while (iterator.hasNext()) {
-@@ -1136,13 +1139,18 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1217,13 +1220,18 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.tickPassenger(entity, entity1);
}
@@ -85,7 +85,7 @@ index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92
passenger.setOldPosAndRot();
++passenger.tickCount;
ProfilerFiller gameprofilerfiller = this.getProfiler();
-@@ -1151,8 +1159,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1232,8 +1240,17 @@ public class ServerLevel extends Level implements WorldGenLevel {
return BuiltInRegistries.ENTITY_TYPE.getKey(passenger.getType()).toString();
});
gameprofilerfiller.incrementCounter("tickPassenger");
@@ -103,7 +103,7 @@ index 15fdb6d6307bad251be9272d44bea9fbad90e55f..826634d50d8d537b01c1cfa545e82c92
gameprofilerfiller.pop();
Iterator iterator = passenger.getPassengers().iterator();
-@@ -1162,6 +1179,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1243,6 +1260,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.tickPassenger(passenger, entity2);
}
diff --git a/patches/server/0340-implement-optional-per-player-mob-spawns.patch b/patches/server/0340-implement-optional-per-player-mob-spawns.patch
index 0e857b9024..091939ea4b 100644
--- a/patches/server/0340-implement-optional-per-player-mob-spawns.patch
+++ b/patches/server/0340-implement-optional-per-player-mob-spawns.patch
@@ -338,10 +338,10 @@ index 8c4d2b2f206d7662c0aceb30f49fa58f9426ec5c..1711170ef98831dacfbf30ac22e19f47
double d0 = (double) SectionPos.sectionToBlockCoord(pos.x, 8);
double d1 = (double) SectionPos.sectionToBlockCoord(pos.z, 8);
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index bf802a791e857b0018cc760a24c32981eb732f68..8e183df0603d3abbf09301d71758c9cbb4cf413f 100644
+index 5cb151a7d89c7281b03f24c5f79afb7edf7cbfea..d9743139d1cb932c6aac56da85f073e4dfe2933c 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -694,7 +694,18 @@ public class ServerChunkCache extends ChunkSource {
+@@ -543,7 +543,18 @@ public class ServerChunkCache extends ChunkSource {
gameprofilerfiller.push("naturalSpawnCount");
this.level.timings.countNaturalMobs.startTiming(); // Paper - timings
int l = this.distanceManager.getNaturalSpawnChunkCount();
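The idea behind per-player spawns: attribute each naturally spawned mob to the nearby players and test each player against their own cap, so one player's mob farm cannot consume the global budget. A self-contained conceptual sketch (not Paper's types; 8 stands in for the number of vanilla MobCategory values):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;

    final class PerPlayerMobCounts {
        private final Map<UUID, int[]> counts = new HashMap<>();

        // attribute a spawned mob to every player in range of the spawn
        void increment(UUID player, int categoryOrdinal) {
            counts.computeIfAbsent(player, k -> new int[8])[categoryOrdinal]++;
        }

        // a category keeps spawning for a player only while under their own cap
        boolean canSpawn(UUID player, int categoryOrdinal, int perPlayerCap) {
            final int[] c = counts.get(player);
            return c == null || c[categoryOrdinal] < perPlayerCap;
        }
    }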
diff --git a/patches/server/0343-Optimise-getChunkAt-calls-for-loaded-chunks.patch b/patches/server/0343-Optimise-getChunkAt-calls-for-loaded-chunks.patch
index f4554f1c28..a5aab28fc1 100644
--- a/patches/server/0343-Optimise-getChunkAt-calls-for-loaded-chunks.patch
+++ b/patches/server/0343-Optimise-getChunkAt-calls-for-loaded-chunks.patch
@@ -7,10 +7,10 @@ bypass the need to get a player chunk, then get the Either,
then unwrap it...
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index 8e183df0603d3abbf09301d71758c9cbb4cf413f..feb494177f8bfc6eb343aa29f5a7ffd8c47f9cb7 100644
+index d9743139d1cb932c6aac56da85f073e4dfe2933c..603f9d1f501a18214f11a6e401f2c43d9c3cf8eb 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -421,6 +421,12 @@ public class ServerChunkCache extends ChunkSource {
+@@ -270,6 +270,12 @@ public class ServerChunkCache extends ChunkSource {
return this.getChunk(x, z, leastStatus, create);
}, this.mainThreadProcessor).join();
} else {
@@ -23,7 +23,7 @@ index 8e183df0603d3abbf09301d71758c9cbb4cf413f..feb494177f8bfc6eb343aa29f5a7ffd8
ProfilerFiller gameprofilerfiller = this.level.getProfiler();
gameprofilerfiller.incrementCounter("getChunk");
-@@ -464,39 +470,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -313,39 +319,7 @@ public class ServerChunkCache extends ChunkSource {
if (!io.papermc.paper.util.TickThread.isTickThread()) { // Paper - rewrite chunk system
return null;
} else {
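The fast path's shape: main-thread requests for FULL-status chunks consult the loaded-chunk cache first and only fall into the holder/future machinery on a miss. A sketch, where getChunkAtIfLoadedImmediately is the Paper helper assumed here and getChunkSlow stands in for the original method body:

    public ChunkAccess getChunk(int x, int z, ChunkStatus leastStatus, boolean create) {
        if (leastStatus == ChunkStatus.FULL) {
            final LevelChunk loaded = this.getChunkAtIfLoadedImmediately(x, z);
            if (loaded != null) {
                return loaded; // already loaded: no holder lookup, no Either unwrap, no join
            }
        }
        return this.getChunkSlow(x, z, leastStatus, create); // stand-in for the original body
    }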
diff --git a/patches/server/0344-Add-debug-for-sync-chunk-loads.patch b/patches/server/0344-Add-debug-for-sync-chunk-loads.patch
index 08f080361a..8aefa1cb4a 100644
--- a/patches/server/0344-Add-debug-for-sync-chunk-loads.patch
+++ b/patches/server/0344-Add-debug-for-sync-chunk-loads.patch
@@ -300,10 +300,10 @@ index 0000000000000000000000000000000000000000..95d6022c9cfb2e36ec5a71be6e343540
+ }
+}
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index feb494177f8bfc6eb343aa29f5a7ffd8c47f9cb7..f6fd35300324f931c92f546b76dc16883b0f791d 100644
+index 603f9d1f501a18214f11a6e401f2c43d9c3cf8eb..1c327067d488cc916d082a797b161cb7836ffa2e 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -445,6 +445,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -294,6 +294,7 @@ public class ServerChunkCache extends ChunkSource {
// Paper start - async chunk io/loading
io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.pushChunkWait(this.level, x1, z1); // Paper - rewrite chunk system
// Paper end
@@ -312,10 +312,10 @@ index feb494177f8bfc6eb343aa29f5a7ffd8c47f9cb7..f6fd35300324f931c92f546b76dc1688
chunkproviderserver_b.managedBlock(completablefuture::isDone);
io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler.popChunkWait(); // Paper - async chunk debug // Paper - rewrite chunk system
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 826634d50d8d537b01c1cfa545e82c92744066cd..179e95ebc8d9b87339f8daaf232c61e54ac99d88 100644
+index b12e9da3eebda396769b30f4b7e37a78f3bcb060..e61781d7ac9ec8828b4968e6e3824f5212bf6dea 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -570,6 +570,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -651,6 +651,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.entityLookup = new io.papermc.paper.chunk.system.entity.EntityLookup(this, new EntityCallbacks()); // Paper - rewrite chunk system
}
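The push/pop pair above brackets the blocking wait so a debug dump can report exactly which chunk each thread is synchronously loading. Schematically (the patch pairs the calls inline; the try/finally here is only the sketch being defensive):

    ChunkTaskScheduler.pushChunkWait(level, chunkX, chunkZ); // record the pending sync load
    try {
        mainThreadProcessor.managedBlock(completableFuture::isDone); // block until loaded
    } finally {
        ChunkTaskScheduler.popChunkWait(); // clear the record once the wait ends
    }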
diff --git a/patches/server/0362-Prevent-Double-PlayerChunkMap-adds-crashing-server.patch b/patches/server/0362-Prevent-Double-PlayerChunkMap-adds-crashing-server.patch
index 6735d96dc2..40a3a96620 100644
--- a/patches/server/0362-Prevent-Double-PlayerChunkMap-adds-crashing-server.patch
+++ b/patches/server/0362-Prevent-Double-PlayerChunkMap-adds-crashing-server.patch
@@ -25,10 +25,10 @@ index 1711170ef98831dacfbf30ac22e19f47b3c4c413..67317919d86ca4e0aa11d9f0625851fd
 EntityType<?> entitytypes = entity.getType();
int i = entitytypes.clientTrackingRange() * 16;
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 179e95ebc8d9b87339f8daaf232c61e54ac99d88..4a8d4c92ba97d224d8ccd6a9232623ec66ef40a9 100644
+index e61781d7ac9ec8828b4968e6e3824f5212bf6dea..faba22ba5b45fbd9463ed2172f5aa9096ed84ba0 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2392,7 +2392,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2473,7 +2473,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
public void onTrackingStart(Entity entity) {
org.spigotmc.AsyncCatcher.catchOp("entity register"); // Spigot
@@ -37,7 +37,7 @@ index 179e95ebc8d9b87339f8daaf232c61e54ac99d88..4a8d4c92ba97d224d8ccd6a9232623ec
if (entity instanceof ServerPlayer) {
ServerPlayer entityplayer = (ServerPlayer) entity;
-@@ -2426,6 +2426,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2507,6 +2507,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
entity.updateDynamicGameEventListener(DynamicGameEventListener::add);
entity.valid = true; // CraftBukkit
diff --git a/patches/server/0375-Don-t-crash-if-player-is-attempted-to-be-removed-fro.patch b/patches/server/0375-Don-t-crash-if-player-is-attempted-to-be-removed-fro.patch
index 9e2501f454..1ab6988612 100644
--- a/patches/server/0375-Don-t-crash-if-player-is-attempted-to-be-removed-fro.patch
+++ b/patches/server/0375-Don-t-crash-if-player-is-attempted-to-be-removed-fro.patch
@@ -7,7 +7,7 @@ Subject: [PATCH] Don't crash if player is attempted to be removed from
I suspect it deals with teleporting as it uses the player's current x/y/z
diff --git a/src/main/java/net/minecraft/server/level/DistanceManager.java b/src/main/java/net/minecraft/server/level/DistanceManager.java
-index 20d600d29c2f2e47c798721d1f151e625b12acc3..fcbdf311e981e010adc78342f0865d3f803354f9 100644
+index c716047fefb51a77ce18df243c517d80c78b6853..0a926afa06a5e37cf2650afa1b5099a2a9ffa659 100644
--- a/src/main/java/net/minecraft/server/level/DistanceManager.java
+++ b/src/main/java/net/minecraft/server/level/DistanceManager.java
@@ -147,8 +147,8 @@ public abstract class DistanceManager {
diff --git a/patches/server/0389-Deobfuscate-stacktraces-in-log-messages-crash-report.patch b/patches/server/0389-Deobfuscate-stacktraces-in-log-messages-crash-report.patch
index a8c4e3369d..6db1739574 100644
--- a/patches/server/0389-Deobfuscate-stacktraces-in-log-messages-crash-report.patch
+++ b/patches/server/0389-Deobfuscate-stacktraces-in-log-messages-crash-report.patch
@@ -516,7 +516,7 @@ index 71b395db734c257a64ec3297eebbe52883ea4cc7..072888f891c8e25a2b4daaf561e12493
paperConfigurations.initializeWorldDefaultsConfiguration();
org.spigotmc.WatchdogThread.doStart(org.spigotmc.SpigotConfig.timeoutTime, org.spigotmc.SpigotConfig.restartOnCrash);
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 4a8d4c92ba97d224d8ccd6a9232623ec66ef40a9..e065a559a0ccccf76c27bc465137016472607762 100644
+index faba22ba5b45fbd9463ed2172f5aa9096ed84ba0..345db2f58f52a465d04f46e43403ced57a1bae34 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -221,7 +221,9 @@ public class ServerLevel extends Level implements WorldGenLevel {
@@ -530,7 +530,7 @@ index 4a8d4c92ba97d224d8ccd6a9232623ec66ef40a9..e065a559a0ccccf76c27bc4651370164
}
@Override public LevelChunk getChunkIfLoaded(int x, int z) { // Paper - this was added in world too but keeping here for NMS ABI
-@@ -1386,7 +1388,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1467,7 +1469,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (entity.isRemoved()) {
// Paper start
if (DEBUG_ENTITIES) {
diff --git a/patches/server/0422-incremental-chunk-and-player-saving.patch b/patches/server/0422-incremental-chunk-and-player-saving.patch
index 816ec1f46f..7eede8ea15 100644
--- a/patches/server/0422-incremental-chunk-and-player-saving.patch
+++ b/patches/server/0422-incremental-chunk-and-player-saving.patch
@@ -53,10 +53,10 @@ index 2a55f9e0ab6fa07ba913203bb62acd54add450a0..7bd02abf039f7e047b6b6b1de0bc4788
// Paper start - move executeAll() into full server tick timing
try (co.aikar.timings.Timing ignored = MinecraftTimings.processTasksTimer.startTiming()) {
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index f6fd35300324f931c92f546b76dc16883b0f791d..a8efb80b1408e2c4598e3baee3fe3f51618c9e63 100644
+index 1c327067d488cc916d082a797b161cb7836ffa2e..3f5d572994bc8b3f1e106105dc0bb202ad005b8c 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -603,6 +603,15 @@ public class ServerChunkCache extends ChunkSource {
+@@ -452,6 +452,15 @@ public class ServerChunkCache extends ChunkSource {
} // Paper - Timings
}
@@ -73,10 +73,10 @@ index f6fd35300324f931c92f546b76dc16883b0f791d..a8efb80b1408e2c4598e3baee3fe3f51
public void close() throws IOException {
// CraftBukkit start
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index e065a559a0ccccf76c27bc465137016472607762..e1535ad949ead050ebb1813b3f4cd38597e1924c 100644
+index 345db2f58f52a465d04f46e43403ced57a1bae34..d02f9030d47764f8f3f71f27f133b676b42efe59 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1200,6 +1200,37 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1281,6 +1281,37 @@ public class ServerLevel extends Level implements WorldGenLevel {
return !this.server.isUnderSpawnProtection(this, pos, player) && this.getWorldBorder().isWithinBounds(pos);
}
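The saving strategy here is the usual incremental one: bound the work done per tick and drain oldest-first, rather than stalling a single tick with a full save. A conceptual sketch (not Paper's scheduling, which also enforces per-chunk minimum save intervals):

    import java.util.ArrayDeque;
    import java.util.function.Consumer;

    final class IncrementalSaver<T> {
        private final ArrayDeque<T> dirtyQueue = new ArrayDeque<>();

        void markDirty(T item) { dirtyQueue.addLast(item); }

        // called once per tick: save at most maxPerTick items, oldest first
        void saveSome(int maxPerTick, Consumer<T> save) {
            for (int i = 0; i < maxPerTick && !dirtyQueue.isEmpty(); ++i) {
                save.accept(dirtyQueue.pollFirst());
            }
        }
    }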
diff --git a/patches/server/0449-Fix-SpawnChangeEvent-not-firing-for-all-use-cases.patch b/patches/server/0449-Fix-SpawnChangeEvent-not-firing-for-all-use-cases.patch
index 8be44d1ac2..19083aa14b 100644
--- a/patches/server/0449-Fix-SpawnChangeEvent-not-firing-for-all-use-cases.patch
+++ b/patches/server/0449-Fix-SpawnChangeEvent-not-firing-for-all-use-cases.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Fix SpawnChangeEvent not firing for all use-cases
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index e1535ad949ead050ebb1813b3f4cd38597e1924c..e0574ad1e5d07f58649cb580a9b4f4d11cacc2b1 100644
+index d02f9030d47764f8f3f71f27f133b676b42efe59..4fc75299b65b2d298f6f8397e56c44221ed4c3e9 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1922,9 +1922,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2003,9 +2003,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
public void setDefaultSpawnPos(BlockPos pos, float angle) {
// Paper - configurable spawn radius
BlockPos prevSpawn = this.getSharedSpawnPos();
diff --git a/patches/server/0466-Extend-block-drop-capture-to-capture-all-items-added.patch b/patches/server/0466-Extend-block-drop-capture-to-capture-all-items-added.patch
index bdcb4b3151..82cf373acb 100644
--- a/patches/server/0466-Extend-block-drop-capture-to-capture-all-items-added.patch
+++ b/patches/server/0466-Extend-block-drop-capture-to-capture-all-items-added.patch
@@ -6,10 +6,10 @@ Subject: [PATCH] Extend block drop capture to capture all items added to the
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index e0574ad1e5d07f58649cb580a9b4f4d11cacc2b1..a62e33af3a7e4cb038fdbe9e50bf448958b1c170 100644
+index 4fc75299b65b2d298f6f8397e56c44221ed4c3e9..9b7a87b0ba537d1e92d0c02149f97dc41975c53f 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1426,6 +1426,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1507,6 +1507,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
// WorldServer.LOGGER.warn("Tried to add entity {} but it was marked as removed already", EntityTypes.getKey(entity.getType())); // CraftBukkit
return false;
} else {
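The hook sits in ServerLevel's addEntity path: while a capture list is active, item entities are diverted into it instead of entering the world, so the drop event sees every item the break produced. A sketch under that assumption (captureDrops mirrors the CraftBukkit field already used for block drops):

    // Sketch: divert freshly spawned item entities into the active capture list.
    if (this.captureDrops != null && entity instanceof net.minecraft.world.entity.item.ItemEntity item) {
        this.captureDrops.add(item);
        return true; // report success without adding the entity to the world yet
    }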
diff --git a/patches/server/0532-Remove-stale-POIs.patch b/patches/server/0532-Remove-stale-POIs.patch
index 282a8dd114..7c4674cb50 100644
--- a/patches/server/0532-Remove-stale-POIs.patch
+++ b/patches/server/0532-Remove-stale-POIs.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Remove stale POIs
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index a62e33af3a7e4cb038fdbe9e50bf448958b1c170..885498368c8b725259bd63af79b05784a20aa864 100644
+index 9b7a87b0ba537d1e92d0c02149f97dc41975c53f..7cb5dc1d0badba864705386d70b69dc3a6790284 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1991,6 +1991,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2072,6 +2072,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
});
optional1.ifPresent((holder) -> {
this.getServer().execute(() -> {
diff --git a/patches/server/0536-Add-StructuresLocateEvent.patch b/patches/server/0536-Add-StructuresLocateEvent.patch
index ca1f4c0fef..f7bb5440db 100644
--- a/patches/server/0536-Add-StructuresLocateEvent.patch
+++ b/patches/server/0536-Add-StructuresLocateEvent.patch
@@ -47,7 +47,7 @@ index 0000000000000000000000000000000000000000..09837f6e6c6ab8a1df2aacdb86646993
+ }
+}
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
-index 9347d321eaba21e0ef9662ebcacae64c19149e1d..b975cca39e18fd274702543066971fcf0cc24186 100644
+index 4da303d7e15496f04f0e27bfb613176bc2a72b76..3c7920721914588a3e7eaf1faff46f7305823416 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
@@ -121,6 +121,24 @@ public abstract class ChunkGenerator {
diff --git a/patches/server/0549-EntityMoveEvent.patch b/patches/server/0549-EntityMoveEvent.patch
index 78a2b7a9fc..390bdf2537 100644
--- a/patches/server/0549-EntityMoveEvent.patch
+++ b/patches/server/0549-EntityMoveEvent.patch
@@ -17,7 +17,7 @@ index 0ed954f83a7a045c964930247ea393cbaafcbf12..5ac5937c72286d96c394a4da90cbc443
this.profiler.push(() -> {
return worldserver + " " + worldserver.dimension().location();
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 885498368c8b725259bd63af79b05784a20aa864..e6b7bbf3c805b314d2473131770cc4791829a6d0 100644
+index 7cb5dc1d0badba864705386d70b69dc3a6790284..896168b532cf42cae4911a35c0d5d6dd5e37f128 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -220,6 +220,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
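From a plugin's perspective the new event behaves like any other; a minimal listener, assuming Paper's io.papermc.paper.event.entity.EntityMoveEvent and its hasChangedBlock helper:

    import io.papermc.paper.event.entity.EntityMoveEvent;
    import org.bukkit.event.EventHandler;
    import org.bukkit.event.Listener;

    final class MoveListener implements Listener {
        @EventHandler
        public void onEntityMove(EntityMoveEvent event) {
            // fired for non-player living entities; players keep PlayerMoveEvent
            if (event.hasChangedBlock()) {
                // react only when the entity crossed into a new block
            }
        }
    }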
diff --git a/patches/server/0550-added-option-to-disable-pathfinding-updates-on-block.patch b/patches/server/0550-added-option-to-disable-pathfinding-updates-on-block.patch
index 4782606e05..43fe6bb9e4 100644
--- a/patches/server/0550-added-option-to-disable-pathfinding-updates-on-block.patch
+++ b/patches/server/0550-added-option-to-disable-pathfinding-updates-on-block.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] added option to disable pathfinding updates on block changes
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index e6b7bbf3c805b314d2473131770cc4791829a6d0..64f79684824e5f709b2bf66da600d18860c9f3c4 100644
+index 896168b532cf42cae4911a35c0d5d6dd5e37f128..02723e2d34804d911dcf093a3a6fd6af7ce4c14b 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1571,6 +1571,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1652,6 +1652,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
this.getChunkSource().blockChanged(pos);
@@ -16,7 +16,7 @@ index e6b7bbf3c805b314d2473131770cc4791829a6d0..64f79684824e5f709b2bf66da600d188
VoxelShape voxelshape = oldState.getCollisionShape(this, pos);
VoxelShape voxelshape1 = newState.getCollisionShape(this, pos);
-@@ -1612,6 +1613,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1693,6 +1694,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
}
diff --git a/patches/server/0612-Add-cause-to-Weather-ThunderChangeEvents.patch b/patches/server/0612-Add-cause-to-Weather-ThunderChangeEvents.patch
index 0321d262c6..d70a82fb88 100644
--- a/patches/server/0612-Add-cause-to-Weather-ThunderChangeEvents.patch
+++ b/patches/server/0612-Add-cause-to-Weather-ThunderChangeEvents.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Add cause to Weather/ThunderChangeEvents
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 64f79684824e5f709b2bf66da600d18860c9f3c4..3454cc832f2c09425336d9627a6ece74622080a9 100644
+index 02723e2d34804d911dcf093a3a6fd6af7ce4c14b..746713d742cf6c375353570e86e5e1802e9aec60 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -591,8 +591,8 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -672,8 +672,8 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.serverLevelData.setClearWeatherTime(clearDuration);
this.serverLevelData.setRainTime(rainDuration);
this.serverLevelData.setThunderTime(rainDuration);
@@ -19,7 +19,7 @@ index 64f79684824e5f709b2bf66da600d18860c9f3c4..3454cc832f2c09425336d9627a6ece74
}
@Override
-@@ -1004,8 +1004,8 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1085,8 +1085,8 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.serverLevelData.setThunderTime(j);
this.serverLevelData.setRainTime(k);
this.serverLevelData.setClearWeatherTime(i);
@@ -30,7 +30,7 @@ index 64f79684824e5f709b2bf66da600d18860c9f3c4..3454cc832f2c09425336d9627a6ece74
}
this.oThunderLevel = this.thunderLevel;
-@@ -1071,14 +1071,14 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1152,14 +1152,14 @@ public class ServerLevel extends Level implements WorldGenLevel {
private void resetWeatherCycle() {
// CraftBukkit start
diff --git a/patches/server/0634-Use-getChunkIfLoadedImmediately-in-places.patch b/patches/server/0634-Use-getChunkIfLoadedImmediately-in-places.patch
index 7931c755b9..c6622fc571 100644
--- a/patches/server/0634-Use-getChunkIfLoadedImmediately-in-places.patch
+++ b/patches/server/0634-Use-getChunkIfLoadedImmediately-in-places.patch
@@ -8,7 +8,7 @@ ticket level 33 (yes getChunkIfLoaded will actually perform a chunk
load in that case).
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 3454cc832f2c09425336d9627a6ece74622080a9..d35de35f0a72bf9080b48028010379426d50c5bc 100644
+index 746713d742cf6c375353570e86e5e1802e9aec60..712002e1b2effb08daee0e0204000e32b7df5b27 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -228,7 +228,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
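The substitution pattern, sketched (getChunkIfLoadedImmediately is Paper's truly-loaded accessor assumed here; the surrounding method is a stand-in):

    // Sketch: getChunkIfLoaded can force a load at ticket level 33, so hot
    // paths ask for the chunk only if it is genuinely loaded right now.
    final LevelChunk chunk = serverLevel.getChunkIfLoadedImmediately(chunkX, chunkZ);
    if (chunk == null) {
        return; // not loaded: skip the work instead of triggering a sync load
    }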
diff --git a/patches/server/0666-Add-methods-to-find-targets-for-lightning-strikes.patch b/patches/server/0666-Add-methods-to-find-targets-for-lightning-strikes.patch
index cceedc1658..a32e1cf46f 100644
--- a/patches/server/0666-Add-methods-to-find-targets-for-lightning-strikes.patch
+++ b/patches/server/0666-Add-methods-to-find-targets-for-lightning-strikes.patch
@@ -7,10 +7,10 @@ Subject: [PATCH] Add methods to find targets for lightning strikes
public net.minecraft.server.level.ServerLevel findLightningRod(Lnet/minecraft/core/BlockPos;)Ljava/util/Optional;
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index d35de35f0a72bf9080b48028010379426d50c5bc..5795836844691ce4bcfaf3df8ae6dc28b80df47a 100644
+index 712002e1b2effb08daee0e0204000e32b7df5b27..0f0c19fcf7eb273adcee9bf181d0ad1ad06037f0 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -893,6 +893,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -974,6 +974,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
protected BlockPos findLightningTargetAround(BlockPos pos) {
@@ -22,7 +22,7 @@ index d35de35f0a72bf9080b48028010379426d50c5bc..5795836844691ce4bcfaf3df8ae6dc28
BlockPos blockposition1 = this.getHeightmapPos(Heightmap.Types.MOTION_BLOCKING, pos);
Optional optional = this.findLightningRod(blockposition1);
-@@ -907,6 +912,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -988,6 +993,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (!list.isEmpty()) {
return ((LivingEntity) list.get(this.random.nextInt(list.size()))).blockPosition();
} else {
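On the Bukkit side this exposes rod/target lookups before actually striking; for example (assuming the Paper World#findLightningRod and World#findLightningTarget additions, both returning a nullable Location):

    import org.bukkit.Location;
    import org.bukkit.World;

    final class LightningSketch {
        static void strikeLikeVanilla(World world, Location around) {
            // Prefer a lightning rod near the position, then fall back to the
            // vanilla target search (mob clusters, heightmap), then the input.
            Location rod = world.findLightningRod(around);
            Location target = rod != null ? rod : world.findLightningTarget(around);
            world.strikeLightning(target != null ? target : around);
        }
    }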
diff --git a/patches/server/0678-Do-not-run-close-logic-for-inventories-on-chunk-unlo.patch b/patches/server/0678-Do-not-run-close-logic-for-inventories-on-chunk-unlo.patch
index 0e4125e124..e91f310b7f 100644
--- a/patches/server/0678-Do-not-run-close-logic-for-inventories-on-chunk-unlo.patch
+++ b/patches/server/0678-Do-not-run-close-logic-for-inventories-on-chunk-unlo.patch
@@ -9,10 +9,10 @@ chunk through it. This should also be OK from a leak prevention/
state desync POV because the TE is getting unloaded anyways.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 5795836844691ce4bcfaf3df8ae6dc28b80df47a..00508122ab504b1a84ef050145749fa9fea7a7d1 100644
+index 0f0c19fcf7eb273adcee9bf181d0ad1ad06037f0..e8dd1f2c354d2031076b7c96bfcc5bec9249fb67 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1472,9 +1472,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1553,9 +1553,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
// Spigot Start
for (net.minecraft.world.level.block.entity.BlockEntity tileentity : chunk.getBlockEntities().values()) {
if (tileentity instanceof net.minecraft.world.Container) {
diff --git a/patches/server/0685-Optimize-anyPlayerCloseEnoughForSpawning-to-use-dist.patch b/patches/server/0685-Optimize-anyPlayerCloseEnoughForSpawning-to-use-dist.patch
index 2989b3ca98..bef625b150 100644
--- a/patches/server/0685-Optimize-anyPlayerCloseEnoughForSpawning-to-use-dist.patch
+++ b/patches/server/0685-Optimize-anyPlayerCloseEnoughForSpawning-to-use-dist.patch
@@ -6,10 +6,10 @@ Subject: [PATCH] Optimize anyPlayerCloseEnoughForSpawning to use distance maps
Use a distance map to find the players in range quickly
diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-index 84e4aea3d44cd7d5405ffc970a0568337ee5b0a7..e7c81056e9f4da0f89cf411afd446444bb40958c 100644
+index c5389e7f3665c06e487dfde3200b7e229694fbd2..4164204ba80f68a768de0ed1721c6447b972a631 100644
--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
+++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-@@ -86,16 +86,29 @@ public class ChunkHolder {
+@@ -79,16 +79,29 @@ public class ChunkHolder {
// Paper start
public void onChunkAdd() {
@@ -42,14 +42,13 @@ index 84e4aea3d44cd7d5405ffc970a0568337ee5b0a7..e7c81056e9f4da0f89cf411afd446444
 private final com.destroystokyo.paper.util.maplist.ReferenceList<ServerPlayer> playersSentChunkTo = new com.destroystokyo.paper.util.maplist.ReferenceList<>();
diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
-index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce5129584e21d 100644
+index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..f400f95309a22a4bfcef899c39e9589105e6a6f0 100644
--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
+++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
-@@ -157,12 +157,25 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -157,12 +157,24 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
// Paper start - distance maps
private final com.destroystokyo.paper.util.misc.PooledLinkedHashSets pooledLinkedPlayerHashSets = new com.destroystokyo.paper.util.misc.PooledLinkedHashSets<>();
-+ public final io.papermc.paper.chunk.PlayerChunkLoader playerChunkManager = new io.papermc.paper.chunk.PlayerChunkLoader(this, this.pooledLinkedPlayerHashSets); // Paper - replace chunk loader
+ // Paper start - optimise ChunkMap#anyPlayerCloseEnoughForSpawning
+ // A note about the naming used here:
+ // Previously, mojang used a "spawn range" of 8 for controlling both ticking and
@@ -71,7 +70,7 @@ index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce512
// Paper start - per player mob spawning
if (this.playerMobDistanceMap != null) {
this.playerMobDistanceMap.add(player, chunkX, chunkZ, io.papermc.paper.chunk.system.ChunkSystem.getTickViewDistance(player));
-@@ -173,6 +186,10 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -173,6 +185,10 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
void removePlayerFromDistanceMaps(ServerPlayer player) {
this.level.playerChunkLoader.removePlayer(player); // Paper - replace chunk loader
@@ -82,7 +81,7 @@ index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce512
// Paper start - per player mob spawning
if (this.playerMobDistanceMap != null) {
this.playerMobDistanceMap.remove(player);
-@@ -185,6 +202,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -185,6 +201,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
int chunkZ = MCUtil.getChunkCoordinate(player.getZ());
// Note: players need to be explicitly added to distance maps before they can be updated
this.level.playerChunkLoader.updatePlayer(player); // Paper - replace chunk loader
@@ -90,7 +89,7 @@ index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce512
// Paper start - per player mob spawning
if (this.playerMobDistanceMap != null) {
this.playerMobDistanceMap.update(player, chunkX, chunkZ, io.papermc.paper.chunk.system.ChunkSystem.getTickViewDistance(player));
-@@ -276,6 +294,38 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -276,6 +293,38 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
this.regionManagers.add(this.dataRegionManager);
// Paper end
this.playerMobDistanceMap = this.level.paperConfig().entities.spawning.perPlayerMobSpawns ? new com.destroystokyo.paper.util.misc.PlayerAreaMap(this.pooledLinkedPlayerHashSets) : null; // Paper
@@ -129,7 +128,7 @@ index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce512
}
protected ChunkGenerator generator() {
-@@ -850,43 +900,48 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -850,43 +899,48 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
return this.anyPlayerCloseEnoughForSpawning(pos, false);
}
@@ -210,7 +209,7 @@ index 8d8bb430e44d7608a8aa44c7feb41797b8bbfb06..7d80cfd701d910badf1feaecaa4ce512
 public List<ServerPlayer> getPlayersCloseForSpawning(ChunkPos pos)
diff --git a/src/main/java/net/minecraft/server/level/DistanceManager.java b/src/main/java/net/minecraft/server/level/DistanceManager.java
-index fcbdf311e981e010adc78342f0865d3f803354f9..40e17a8f182fea7c99b64cd074ce1757e48758bf 100644
+index 0a926afa06a5e37cf2650afa1b5099a2a9ffa659..ae4a4710ba07614be42cdcbf52cee04cfa08466b 100644
--- a/src/main/java/net/minecraft/server/level/DistanceManager.java
+++ b/src/main/java/net/minecraft/server/level/DistanceManager.java
@@ -50,7 +50,7 @@ public abstract class DistanceManager {
@@ -263,10 +262,10 @@ index fcbdf311e981e010adc78342f0865d3f803354f9..40e17a8f182fea7c99b64cd074ce1757
public String getDebugStatus() {
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index a8efb80b1408e2c4598e3baee3fe3f51618c9e63..cd260728e95197f4b0cde35f3d6111366bd979db 100644
+index 3f5d572994bc8b3f1e106105dc0bb202ad005b8c..5c5fe2087a7617324ab8e18389e3ffa9ac413026 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -668,6 +668,37 @@ public class ServerChunkCache extends ChunkSource {
+@@ -517,6 +517,37 @@ public class ServerChunkCache extends ChunkSource {
if (flag) {
this.chunkMap.tick();
} else {
@@ -304,7 +303,7 @@ index a8efb80b1408e2c4598e3baee3fe3f51618c9e63..cd260728e95197f4b0cde35f3d611136
LevelData worlddata = this.level.getLevelData();
ProfilerFiller gameprofilerfiller = this.level.getProfiler();
-@@ -711,15 +742,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -560,15 +591,7 @@ public class ServerChunkCache extends ChunkSource {
boolean flag2 = this.level.getGameRules().getBoolean(GameRules.RULE_DOMOBSPAWNING) && !this.level.players().isEmpty(); // CraftBukkit
Collections.shuffle(list);
@@ -321,7 +320,7 @@ index a8efb80b1408e2c4598e3baee3fe3f51618c9e63..cd260728e95197f4b0cde35f3d611136
Iterator iterator1 = list.iterator();
while (iterator1.hasNext()) {
-@@ -727,9 +750,9 @@ public class ServerChunkCache extends ChunkSource {
+@@ -576,9 +599,9 @@ public class ServerChunkCache extends ChunkSource {
LevelChunk chunk1 = chunkproviderserver_a.chunk;
ChunkPos chunkcoordintpair = chunk1.getPos();
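The core of the optimisation: each chunk caches the set of players within spawn range, maintained incrementally by the distance-map registrations shown in the hunks above, so the hot check reads a precomputed set instead of scanning every player on the server. A self-contained conceptual sketch (not Paper's pooled-set types; 128 blocks is the vanilla spawn-range rule):

    import java.util.List;

    final class SpawnRangeSketch {
        record PlayerPos(double x, double z) {}

        static boolean anyPlayerCloseEnoughForSpawning(List<PlayerPos> playersInMobSpawnRange,
                                                       int chunkBlockX, int chunkBlockZ) {
            if (playersInMobSpawnRange == null || playersInMobSpawnRange.isEmpty()) {
                return false; // kept current as players move between chunk sections
            }
            for (PlayerPos p : playersInMobSpawnRange) {
                double dx = p.x() - chunkBlockX, dz = p.z() - chunkBlockZ;
                if (dx * dx + dz * dz < 128.0 * 128.0) { // vanilla 128-block rule
                    return true;
                }
            }
            return false;
        }
    }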
diff --git a/patches/server/0686-Optimise-chunk-tick-iteration.patch b/patches/server/0686-Optimise-chunk-tick-iteration.patch
index cd53b49f3f..5f8687c16c 100644
--- a/patches/server/0686-Optimise-chunk-tick-iteration.patch
+++ b/patches/server/0686-Optimise-chunk-tick-iteration.patch
@@ -6,10 +6,10 @@ Subject: [PATCH] Optimise chunk tick iteration
Use a dedicated list of entity ticking chunks to reduce the cost
diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839d9da6a05 100644
+index 4164204ba80f68a768de0ed1721c6447b972a631..4ae1ba645d9fdc1eb6d5a3e4f8ceed9b4841e003 100644
--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
+++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-@@ -91,6 +91,11 @@ public class ChunkHolder {
+@@ -84,6 +84,11 @@ public class ChunkHolder {
this.playersInMobSpawnRange = this.chunkMap.playerMobSpawnMap.getObjectsInRange(key);
this.playersInChunkTickRange = this.chunkMap.playerChunkTickRangeMap.getObjectsInRange(key);
// Paper end - optimise anyPlayerCloseEnoughForSpawning
@@ -21,7 +21,7 @@ index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839
}
public void onChunkRemove() {
-@@ -98,6 +103,11 @@ public class ChunkHolder {
+@@ -91,6 +96,11 @@ public class ChunkHolder {
this.playersInMobSpawnRange = null;
this.playersInChunkTickRange = null;
// Paper end - optimise anyPlayerCloseEnoughForSpawning
@@ -33,7 +33,7 @@ index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839
}
// Paper end
-@@ -237,7 +247,7 @@ public class ChunkHolder {
+@@ -230,7 +240,7 @@ public class ChunkHolder {
if (i < 0 || i >= this.changedBlocksPerSection.length) return; // CraftBukkit - SPIGOT-6086, SPIGOT-6296
if (this.changedBlocksPerSection[i] == null) {
@@ -42,7 +42,7 @@ index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839
this.changedBlocksPerSection[i] = new ShortOpenHashSet();
}
-@@ -261,6 +271,7 @@ public class ChunkHolder {
+@@ -254,6 +264,7 @@ public class ChunkHolder {
int k = this.lightEngine.getMaxLightSection();
if (y >= j && y <= k) {
@@ -50,7 +50,7 @@ index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839
int l = y - j;
if (lightType == LightLayer.SKY) {
-@@ -275,8 +286,19 @@ public class ChunkHolder {
+@@ -268,8 +279,19 @@ public class ChunkHolder {
}
}
@@ -72,7 +72,7 @@ index e7c81056e9f4da0f89cf411afd446444bb40958c..ec0419e2d895d08d4ba069c98f994839
List list;
diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
-index 7d80cfd701d910badf1feaecaa4ce5129584e21d..03b802f9f6e31b1ab23af0ff7b235f64c72ec462 100644
+index f400f95309a22a4bfcef899c39e9589105e6a6f0..da2fa26ddb2b801feff962d96f488239058c4be7 100644
--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
+++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
@@ -115,6 +115,8 @@ import org.bukkit.craftbukkit.generator.CustomChunkGenerator;
@@ -93,7 +93,7 @@ index 7d80cfd701d910badf1feaecaa4ce5129584e21d..03b802f9f6e31b1ab23af0ff7b235f64
// Paper - rewrite chunk system
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index cd260728e95197f4b0cde35f3d6111366bd979db..d9d3def899ee3dfea6df0c97e2be3fad9764776a 100644
+index 5c5fe2087a7617324ab8e18389e3ffa9ac413026..828de28f2777e2477a9c6545c8af96c4ca4e352b 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
@@ -48,6 +48,7 @@ import net.minecraft.world.level.levelgen.structure.templatesystem.StructureTemp
@@ -104,7 +104,7 @@ index cd260728e95197f4b0cde35f3d6111366bd979db..d9d3def899ee3dfea6df0c97e2be3fad
public class ServerChunkCache extends ChunkSource {
-@@ -725,42 +726,59 @@ public class ServerChunkCache extends ChunkSource {
+@@ -574,42 +575,59 @@ public class ServerChunkCache extends ChunkSource {
this.lastSpawnState = spawnercreature_d;
gameprofilerfiller.popPush("filteringLoadedChunks");
@@ -181,7 +181,7 @@ index cd260728e95197f4b0cde35f3d6111366bd979db..d9d3def899ee3dfea6df0c97e2be3fad
this.level.timings.chunkTicks.stopTiming(); // Paper
gameprofilerfiller.popPush("customSpawners");
if (flag2) {
-@@ -768,15 +786,24 @@ public class ServerChunkCache extends ChunkSource {
+@@ -617,15 +635,24 @@ public class ServerChunkCache extends ChunkSource {
this.level.tickCustomSpawners(this.spawnEnemies, this.spawnFriendlies);
} // Paper - timings
}
diff --git a/patches/server/0687-Execute-chunk-tasks-mid-tick.patch b/patches/server/0687-Execute-chunk-tasks-mid-tick.patch
index a424976f7e..d7150b35c6 100644
--- a/patches/server/0687-Execute-chunk-tasks-mid-tick.patch
+++ b/patches/server/0687-Execute-chunk-tasks-mid-tick.patch
@@ -106,10 +106,10 @@ index b800249823e413933a5d469e431a003f977f59e7..d8fa1cb0b340f97debceb7e5b90051d2
+ // Paper end - execute chunk tasks mid tick
}
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index d9d3def899ee3dfea6df0c97e2be3fad9764776a..85d5c712f48c12079c6bf6a23d295746fe5f2550 100644
+index 828de28f2777e2477a9c6545c8af96c4ca4e352b..2a31265ac49b7a6e32105530d00952ee0c0d4331 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -750,6 +750,8 @@ public class ServerChunkCache extends ChunkSource {
+@@ -599,6 +599,8 @@ public class ServerChunkCache extends ChunkSource {
Collections.shuffle(shuffled);
iterator1 = shuffled.iterator();
}
@@ -118,7 +118,7 @@ index d9d3def899ee3dfea6df0c97e2be3fad9764776a..85d5c712f48c12079c6bf6a23d295746
try {
while (iterator1.hasNext()) {
LevelChunk chunk1 = iterator1.next();
-@@ -767,6 +769,7 @@ public class ServerChunkCache extends ChunkSource {
+@@ -616,6 +618,7 @@ public class ServerChunkCache extends ChunkSource {
if (true || this.level.shouldTickBlocksAt(chunkcoordintpair.toLong())) { // Paper - the chunk is known ticking
this.level.tickChunk(chunk1, k);
@@ -127,7 +127,7 @@ index d9d3def899ee3dfea6df0c97e2be3fad9764776a..85d5c712f48c12079c6bf6a23d295746
}
// Paper start - optimise chunk tick iteration
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 00508122ab504b1a84ef050145749fa9fea7a7d1..418bf659d31c5810d786064a76779cfa39943020 100644
+index e8dd1f2c354d2031076b7c96bfcc5bec9249fb67..0b12c1f2fab60f5832ff6a968d3a4545f42042fc 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -215,6 +215,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
@@ -138,7 +138,7 @@ index 00508122ab504b1a84ef050145749fa9fea7a7d1..418bf659d31c5810d786064a76779cfa
// CraftBukkit start
public final LevelStorageSource.LevelStorageAccess convertable;
-@@ -1104,6 +1105,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1185,6 +1186,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (fluid1.is(fluid)) {
fluid1.tick(this, pos);
}
@@ -146,7 +146,7 @@ index 00508122ab504b1a84ef050145749fa9fea7a7d1..418bf659d31c5810d786064a76779cfa
}
-@@ -1113,6 +1115,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1194,6 +1196,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (iblockdata.is(block)) {
iblockdata.tick(this, pos, this.random);
}
diff --git a/patches/server/0690-Detail-more-information-in-watchdog-dumps.patch b/patches/server/0690-Detail-more-information-in-watchdog-dumps.patch
index e8f666aad8..9b5d3c39f6 100644
--- a/patches/server/0690-Detail-more-information-in-watchdog-dumps.patch
+++ b/patches/server/0690-Detail-more-information-in-watchdog-dumps.patch
@@ -76,10 +76,10 @@ index 4a1148a76020089caf01f888f87afdbb35788dc0..52a84eeb3b7df782cbf91aac6df42fb8
});
throw RunningOnDifferentThreadException.RUNNING_ON_DIFFERENT_THREAD;
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 418bf659d31c5810d786064a76779cfa39943020..f8bcf1239c18a6334936cec483f2ae316429a894 100644
+index 0b12c1f2fab60f5832ff6a968d3a4545f42042fc..25ffcafd27b0119301faf81817e3072cd6fd8de6 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1119,7 +1119,26 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1200,7 +1200,26 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
@@ -106,7 +106,7 @@ index 418bf659d31c5810d786064a76779cfa39943020..f8bcf1239c18a6334936cec483f2ae31
++TimingHistory.entityTicks; // Paper - timings
// Spigot start
co.aikar.timings.Timing timer; // Paper
-@@ -1159,7 +1178,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1240,7 +1259,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
this.tickPassenger(entity, entity1);
}
// } finally { timer.stopTiming(); } // Paper - timings - move up
diff --git a/patches/server/0692-Distance-manager-tick-timings.patch b/patches/server/0692-Distance-manager-tick-timings.patch
index 898d2c8806..1ea0186cb3 100644
--- a/patches/server/0692-Distance-manager-tick-timings.patch
+++ b/patches/server/0692-Distance-manager-tick-timings.patch
@@ -19,10 +19,10 @@ index efbf77024d235d8af9f7efc938c17afd76a51b0c..670dcfa32d003870091b75937f1603a5
public static final Timing midTickChunkTasks = Timings.ofSafe("Mid Tick Chunk Tasks");
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
-index 4054bf71486734d722a6a3c7b0b4638188798609..ae0c01539e068e2cc851d2ad52baccf5ebc9545f 100644
+index 8e52ebe8d12f5da3d877b0e4ff3723229fb47db1..abd0217cf0bff183c8e262edc173a53403797c1a 100644
--- a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
-@@ -1082,7 +1082,9 @@ public final class ChunkHolderManager {
+@@ -1315,7 +1315,9 @@ public final class ChunkHolderManager {
}
public boolean processTicketUpdates() {
diff --git a/patches/server/0697-Consolidate-flush-calls-for-entity-tracker-packets.patch b/patches/server/0697-Consolidate-flush-calls-for-entity-tracker-packets.patch
index 6ed5f87b97..6846952891 100644
--- a/patches/server/0697-Consolidate-flush-calls-for-entity-tracker-packets.patch
+++ b/patches/server/0697-Consolidate-flush-calls-for-entity-tracker-packets.patch
@@ -22,10 +22,10 @@ With this change I could get all 200 on at 0ms ping.
So in general this patch should reduce Netty I/O thread load.
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-index 85d5c712f48c12079c6bf6a23d295746fe5f2550..1409db8d73a2ed43efbba7f0932bd6d497d9009e 100644
+index 2a31265ac49b7a6e32105530d00952ee0c0d4331..488a253e218409b5f0b4a872cee0928578fa7582 100644
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
-@@ -807,7 +807,24 @@ public class ServerChunkCache extends ChunkSource {
+@@ -656,7 +656,24 @@ public class ServerChunkCache extends ChunkSource {
this.level.timings.broadcastChunkUpdates.stopTiming(); // Paper - timing
gameprofilerfiller.pop();
// Paper end - use set of chunks requiring updates, rather than iterating every single one loaded
diff --git a/patches/server/0702-Oprimise-map-impl-for-tracked-players.patch b/patches/server/0702-Oprimise-map-impl-for-tracked-players.patch
index 36cb1e67f3..dc2f958295 100644
--- a/patches/server/0702-Oprimise-map-impl-for-tracked-players.patch
+++ b/patches/server/0702-Oprimise-map-impl-for-tracked-players.patch
@@ -7,10 +7,10 @@ Reference2BooleanOpenHashMap is going to have
better lookups than HashMap.
diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
-index 03b802f9f6e31b1ab23af0ff7b235f64c72ec462..84dfa7efa4be86558c38ee9e6f70f87b5638173a 100644
+index da2fa26ddb2b801feff962d96f488239058c4be7..85d5f8b52cbdcf39c0d62c3005712aa5f0bdde93 100644
--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
+++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
-@@ -1343,7 +1343,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -1342,7 +1342,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
final Entity entity;
private final int range;
SectionPos lastSectionPos;
diff --git a/patches/server/0704-Optimise-random-block-ticking.patch b/patches/server/0704-Optimise-random-block-ticking.patch
index e8e21584b1..effe884c6f 100644
--- a/patches/server/0704-Optimise-random-block-ticking.patch
+++ b/patches/server/0704-Optimise-random-block-ticking.patch
@@ -90,10 +90,10 @@ index 0000000000000000000000000000000000000000..7d93652c1abbb6aee6eb7c26cf35d4d0
+ }
+}
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index f8bcf1239c18a6334936cec483f2ae316429a894..b1cc896a3f5d7e59a15969308d78d2ef036b0cb1 100644
+index 25ffcafd27b0119301faf81817e3072cd6fd8de6..f1ae9c3b3db3a7540650365b1c8a9b8274b25644 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -764,6 +764,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -845,6 +845,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
entityplayer.stopSleepInBed(false, false);
});
}
@@ -104,7 +104,7 @@ index f8bcf1239c18a6334936cec483f2ae316429a894..b1cc896a3f5d7e59a15969308d78d2ef
public void tickChunk(LevelChunk chunk, int randomTickSpeed) {
ChunkPos chunkcoordintpair = chunk.getPos();
-@@ -773,10 +777,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -854,10 +858,10 @@ public class ServerLevel extends Level implements WorldGenLevel {
ProfilerFiller gameprofilerfiller = this.getProfiler();
gameprofilerfiller.push("thunder");
@@ -117,7 +117,7 @@ index f8bcf1239c18a6334936cec483f2ae316429a894..b1cc896a3f5d7e59a15969308d78d2ef
if (this.isRainingAt(blockposition)) {
DifficultyInstance difficultydamagescaler = this.getCurrentDifficultyAt(blockposition);
boolean flag1 = this.getGameRules().getBoolean(GameRules.RULE_DOMOBSPAWNING) && this.random.nextDouble() < (double) difficultydamagescaler.getEffectiveDifficulty() * this.paperConfig().entities.spawning.skeletonHorseThunderSpawnChance.or(0.01D) && !this.getBlockState(blockposition.below()).is(Blocks.LIGHTNING_ROD); // Paper
-@@ -807,16 +811,25 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -888,16 +892,25 @@ public class ServerLevel extends Level implements WorldGenLevel {
int i1;
if (!this.paperConfig().environment.disableIceAndSnow && this.random.nextInt(16) == 0) { // Paper - Disable ice and snow
@@ -147,7 +147,7 @@ index f8bcf1239c18a6334936cec483f2ae316429a894..b1cc896a3f5d7e59a15969308d78d2ef
if (l > 0 && biomebase.shouldSnow(this, blockposition)) {
BlockState iblockdata = this.getBlockState(blockposition);
-@@ -832,51 +845,54 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -913,51 +926,54 @@ public class ServerLevel extends Level implements WorldGenLevel {
org.bukkit.craftbukkit.event.CraftEventFactory.handleBlockFormEvent(this, blockposition, Blocks.SNOW.defaultBlockState(), null); // CraftBukkit
}
}
diff --git a/patches/server/0706-Optimise-nearby-player-lookups.patch b/patches/server/0706-Optimise-nearby-player-lookups.patch
index 12a1c7f902..925c3d4420 100644
--- a/patches/server/0706-Optimise-nearby-player-lookups.patch
+++ b/patches/server/0706-Optimise-nearby-player-lookups.patch
@@ -9,10 +9,10 @@ since the penalty of a map lookup could outweigh the benefits of
searching fewer players (as it basically did in the outside range patch).
diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-index ec0419e2d895d08d4ba069c98f994839d9da6a05..ed6a7cd6874e6692c60c5faedd2a86ef9c9425ed 100644
+index 4ae1ba645d9fdc1eb6d5a3e4f8ceed9b4841e003..e2202389a2c4133a183cca59c4e909fc419379ab 100644
--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
+++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
-@@ -96,6 +96,12 @@ public class ChunkHolder {
+@@ -89,6 +89,12 @@ public class ChunkHolder {
this.chunkMap.needsChangeBroadcasting.add(this);
}
// Paper end - optimise chunk tick iteration
@@ -25,7 +25,7 @@ index ec0419e2d895d08d4ba069c98f994839d9da6a05..ed6a7cd6874e6692c60c5faedd2a86ef
}
public void onChunkRemove() {
-@@ -108,6 +114,12 @@ public class ChunkHolder {
+@@ -101,6 +107,12 @@ public class ChunkHolder {
this.chunkMap.needsChangeBroadcasting.remove(this);
}
// Paper end - optimise chunk tick iteration
@@ -39,7 +39,7 @@ index ec0419e2d895d08d4ba069c98f994839d9da6a05..ed6a7cd6874e6692c60c5faedd2a86ef
// Paper end
diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
-index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc65845cd98d2 100644
+index 85d5f8b52cbdcf39c0d62c3005712aa5f0bdde93..4f2d6dcbd2f2e94cb82f723f88cb020bf75a7ab9 100644
--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
+++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
@@ -157,6 +157,12 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
@@ -55,7 +55,7 @@ index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc658
// Paper start - distance maps
private final com.destroystokyo.paper.util.misc.PooledLinkedHashSets pooledLinkedPlayerHashSets = new com.destroystokyo.paper.util.misc.PooledLinkedHashSets<>();
-@@ -184,6 +190,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -183,6 +189,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
this.playerMobDistanceMap.add(player, chunkX, chunkZ, io.papermc.paper.chunk.system.ChunkSystem.getTickViewDistance(player));
}
// Paper end - per player mob spawning
@@ -63,7 +63,7 @@ index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc658
}
void removePlayerFromDistanceMaps(ServerPlayer player) {
-@@ -193,6 +200,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -192,6 +199,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
this.playerMobSpawnMap.remove(player);
this.playerChunkTickRangeMap.remove(player);
// Paper end - optimise ChunkMap#anyPlayerCloseEnoughForSpawning
@@ -71,7 +71,7 @@ index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc658
// Paper start - per player mob spawning
if (this.playerMobDistanceMap != null) {
this.playerMobDistanceMap.remove(player);
-@@ -211,6 +219,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -210,6 +218,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
this.playerMobDistanceMap.update(player, chunkX, chunkZ, io.papermc.paper.chunk.system.ChunkSystem.getTickViewDistance(player));
}
// Paper end - per player mob spawning
@@ -79,7 +79,7 @@ index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc658
}
// Paper end
// Paper start
-@@ -329,6 +338,23 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
+@@ -328,6 +337,23 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}
});
// Paper end - optimise ChunkMap#anyPlayerCloseEnoughForSpawning
@@ -104,10 +104,10 @@ index 84dfa7efa4be86558c38ee9e6f70f87b5638173a..c2dec99102fa4c64c3c874f725cdc658
protected ChunkGenerator generator() {
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index b1cc896a3f5d7e59a15969308d78d2ef036b0cb1..4bec4a6955d6c38c8bd8fb9a10d153209d693a01 100644
+index f1ae9c3b3db3a7540650365b1c8a9b8274b25644..ac3d978b81f0f238ace9724f817c25cbdd866e83 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -481,6 +481,84 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -562,6 +562,84 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
// Paper end
@@ -192,7 +192,7 @@ index b1cc896a3f5d7e59a15969308d78d2ef036b0cb1..4bec4a6955d6c38c8bd8fb9a10d15320
// Add env and gen to constructor, IWorldDataServer -> WorldDataServer
public ServerLevel(MinecraftServer minecraftserver, Executor executor, LevelStorageSource.LevelStorageAccess convertable_conversionsession, PrimaryLevelData iworlddataserver, ResourceKey resourcekey, LevelStem worlddimension, ChunkProgressListener worldloadlistener, boolean flag, long i, List list, boolean flag1, @Nullable RandomSequences randomsequences, org.bukkit.World.Environment env, org.bukkit.generator.ChunkGenerator gen, org.bukkit.generator.BiomeProvider biomeProvider) {
// IRegistryCustom.Dimension iregistrycustom_dimension = minecraftserver.registryAccess(); // CraftBukkit - decompile error
-@@ -606,6 +684,14 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -687,6 +765,14 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
public void tick(BooleanSupplier shouldKeepTicking) {
diff --git a/patches/server/0711-Fix-merchant-inventory-not-closing-on-entity-removal.patch b/patches/server/0711-Fix-merchant-inventory-not-closing-on-entity-removal.patch
index 90528c5135..78c7bc6940 100644
--- a/patches/server/0711-Fix-merchant-inventory-not-closing-on-entity-removal.patch
+++ b/patches/server/0711-Fix-merchant-inventory-not-closing-on-entity-removal.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Fix merchant inventory not closing on entity removal
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 4bec4a6955d6c38c8bd8fb9a10d153209d693a01..b89544fdfceee67fb452c37294131efca007e2fe 100644
+index ac3d978b81f0f238ace9724f817c25cbdd866e83..e03810a2c4dea09f52236e7d25877cd803ed764f 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2656,6 +2656,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2737,6 +2737,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
// Spigot end
// Spigot Start
if (entity.getBukkitEntity() instanceof org.bukkit.inventory.InventoryHolder && (!(entity instanceof ServerPlayer) || entity.getRemovalReason() != Entity.RemovalReason.KILLED)) { // SPIGOT-6876: closeInventory clears death message
diff --git a/patches/server/0726-Configurable-feature-seeds.patch b/patches/server/0726-Configurable-feature-seeds.patch
index d218a7d34a..2e7ba7f297 100644
--- a/patches/server/0726-Configurable-feature-seeds.patch
+++ b/patches/server/0726-Configurable-feature-seeds.patch
@@ -19,7 +19,7 @@ index 1080e1f67afe5574baca0df50cdb1d029a7a586a..a2f71a6d1a9e98133dff6cd0f625da94
}
final Object val = config.get(key);
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
-index b975cca39e18fd274702543066971fcf0cc24186..a2abdcd161bae048f1c7fd40b3a93d909ebbd0b4 100644
+index 3c7920721914588a3e7eaf1faff46f7305823416..eee2239cd715d01c5adbf1cd79282e115f42cd2e 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
@@ -431,7 +431,14 @@ public abstract class ChunkGenerator {
diff --git a/patches/server/0743-Optimise-collision-checking-in-player-move-packet-ha.patch b/patches/server/0743-Optimise-collision-checking-in-player-move-packet-ha.patch
index 88fe02609d..a7240adf81 100644
--- a/patches/server/0743-Optimise-collision-checking-in-player-move-packet-ha.patch
+++ b/patches/server/0743-Optimise-collision-checking-in-player-move-packet-ha.patch
@@ -8,7 +8,7 @@ Move collision logic to just the hasNewCollision call instead of getCubes + hasN
CHECK ME
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 13e73042653909f194cfc909a96370656cbcf1ca..be79308ac6d30afe7b626f325a44b607969477fe 100644
+index 13e73042653909f194cfc909a96370656cbcf1ca..0dc3cfb53a34cb7ecfa0a31c4c0faa247dda66e6 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
@@ -647,7 +647,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
@@ -116,7 +116,7 @@ index 13e73042653909f194cfc909a96370656cbcf1ca..be79308ac6d30afe7b626f325a44b607
// Paper start - prevent position desync
if (this.awaitingPositionFromClient != null) {
return; // ... thanks Mojang for letting move calls teleport across dimensions.
-@@ -1500,11 +1535,22 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -1500,11 +1535,23 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
boolean flag2 = false;
if (!this.player.isChangingDimension() && d10 > org.spigotmc.SpigotConfig.movedWronglyThreshold && !this.player.isSleeping() && !this.player.gameMode.isCreative() && this.player.gameMode.getGameModeForPlayer() != GameType.SPECTATOR) { // Spigot
@@ -127,6 +127,7 @@ index 13e73042653909f194cfc909a96370656cbcf1ca..be79308ac6d30afe7b626f325a44b607
- if (!this.player.noPhysics && !this.player.isSleeping() && (flag2 && worldserver.noCollision(this.player, axisalignedbb) || this.isPlayerCollidingWithAnythingNew(worldserver, axisalignedbb, d0, d1, d2))) {
+ // Paper start - optimise out extra getCubes
++ this.player.absMoveTo(d0, d1, d2, f, f1); // prevent desync by teleporting to the set position, dropped for unknown reasons by mojang
+ // Original for reference:
+ // boolean teleportBack = flag2 && worldserver.getCubes(this.player, axisalignedbb) || (didCollide && this.a((IWorldReader) worldserver, axisalignedbb));
+ boolean teleportBack = flag2; // violating this is always a fail
@@ -141,7 +142,7 @@ index 13e73042653909f194cfc909a96370656cbcf1ca..be79308ac6d30afe7b626f325a44b607
this.internalTeleport(d3, d4, d5, f, f1, Collections.emptySet()); // CraftBukkit - SPIGOT-1807: Don't call teleport event, when the client thinks the player is falling, because the chunks are not loaded on the client yet.
this.player.doCheckFallDamage(this.player.getX() - d3, this.player.getY() - d4, this.player.getZ() - d5, packet.isOnGround());
} else {
-@@ -1587,6 +1633,26 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -1587,6 +1634,26 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
}
}
diff --git a/patches/server/0769-Kick-on-main-for-illegal-chat.patch b/patches/server/0769-Kick-on-main-for-illegal-chat.patch
index 04ae872ee0..2d9e6f1b13 100644
--- a/patches/server/0769-Kick-on-main-for-illegal-chat.patch
+++ b/patches/server/0769-Kick-on-main-for-illegal-chat.patch
@@ -7,10 +7,10 @@ Makes the PlayerKickEvent fire on the main thread for
illegal characters or chat out-of-order errors.
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index be79308ac6d30afe7b626f325a44b607969477fe..37a7d0189ccceb114b1f0f82d2ccb420e0b251cc 100644
+index 0dc3cfb53a34cb7ecfa0a31c4c0faa247dda66e6..52885e9a9b761465a6f65026b19b2757bce38c82 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2161,7 +2161,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2162,7 +2162,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
}
// CraftBukkit end
if (ServerGamePacketListenerImpl.isChatMessageIllegal(packet.message())) {
@@ -20,7 +20,7 @@ index be79308ac6d30afe7b626f325a44b607969477fe..37a7d0189ccceb114b1f0f82d2ccb420
} else {
Optional optional = this.tryHandleChat(packet.message(), packet.timeStamp(), packet.lastSeenMessages());
-@@ -2195,7 +2197,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2196,7 +2198,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
@Override
public void handleChatCommand(ServerboundChatCommandPacket packet) {
if (ServerGamePacketListenerImpl.isChatMessageIllegal(packet.command())) {
@@ -30,7 +30,7 @@ index be79308ac6d30afe7b626f325a44b607969477fe..37a7d0189ccceb114b1f0f82d2ccb420
} else {
Optional optional = this.tryHandleChat(packet.command(), packet.timeStamp(), packet.lastSeenMessages());
-@@ -2281,7 +2285,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2282,7 +2286,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
private Optional tryHandleChat(String message, Instant timestamp, LastSeenMessages.Update acknowledgment) {
if (!this.updateChatOrder(timestamp)) {
ServerGamePacketListenerImpl.LOGGER.warn("{} sent out-of-order chat: '{}'", this.player.getName().getString(), message);
diff --git a/patches/server/0777-Add-missing-structure-set-seed-configs.patch b/patches/server/0777-Add-missing-structure-set-seed-configs.patch
index 1a2817e32b..55251f679a 100644
--- a/patches/server/0777-Add-missing-structure-set-seed-configs.patch
+++ b/patches/server/0777-Add-missing-structure-set-seed-configs.patch
@@ -20,7 +20,7 @@ seeds/salts to the frequency reducer which has a similar effect.
Co-authored-by: William Blake Galbreath
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
-index a2abdcd161bae048f1c7fd40b3a93d909ebbd0b4..287c7a210df1f9d260b2c4bafe85e01943fc792d 100644
+index eee2239cd715d01c5adbf1cd79282e115f42cd2e..8bab3fcfc6aa6c0b37621474a69f15e94bda2113 100644
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkGenerator.java
@@ -568,7 +568,7 @@ public abstract class ChunkGenerator {
diff --git a/patches/server/0796-Don-t-allow-vehicle-movement-from-players-while-tele.patch b/patches/server/0796-Don-t-allow-vehicle-movement-from-players-while-tele.patch
index 7ab4298aa7..ffa7f4ca14 100644
--- a/patches/server/0796-Don-t-allow-vehicle-movement-from-players-while-tele.patch
+++ b/patches/server/0796-Don-t-allow-vehicle-movement-from-players-while-tele.patch
@@ -7,7 +7,7 @@ Bring the vehicle move packet behavior in line with the
regular player move packet.
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 37a7d0189ccceb114b1f0f82d2ccb420e0b251cc..912c831ca4c52810ff16d4c8f4659d71347ddfa5 100644
+index 52885e9a9b761465a6f65026b19b2757bce38c82..8527025729614a2828e5c39e40747b3fc0cc0c61 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
@@ -576,6 +576,11 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
diff --git a/patches/server/0811-Prevent-tile-entity-copies-loading-chunks.patch b/patches/server/0811-Prevent-tile-entity-copies-loading-chunks.patch
index 772180c58b..67d72b6d89 100644
--- a/patches/server/0811-Prevent-tile-entity-copies-loading-chunks.patch
+++ b/patches/server/0811-Prevent-tile-entity-copies-loading-chunks.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Prevent tile entity copies loading chunks
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 912c831ca4c52810ff16d4c8f4659d71347ddfa5..52021f110bb873cfedacd874a9d7a74055a537bb 100644
+index 8527025729614a2828e5c39e40747b3fc0cc0c61..e9fc80f4d934bb485781b3213708e37792b50be1 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -3307,7 +3307,12 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -3308,7 +3308,12 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
BlockPos blockposition = BlockEntity.getPosFromTag(nbttagcompound);
if (this.player.level().isLoaded(blockposition)) {
diff --git a/patches/server/0814-Pass-ServerLevel-for-gamerule-callbacks.patch b/patches/server/0814-Pass-ServerLevel-for-gamerule-callbacks.patch
index 5d2738f33b..a290325458 100644
--- a/patches/server/0814-Pass-ServerLevel-for-gamerule-callbacks.patch
+++ b/patches/server/0814-Pass-ServerLevel-for-gamerule-callbacks.patch
@@ -18,10 +18,10 @@ index 9951e999b1440ef623f14bdd46b5e42a90387f1e..91e6161449dc5625331e467d9e837575
if (dedicatedserverproperties.enableQuery) {
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 52021f110bb873cfedacd874a9d7a74055a537bb..67c39acfc811eaef9ea58f76c9da9dcb10a031a2 100644
+index e9fc80f4d934bb485781b3213708e37792b50be1..079fc18274102dea52b14bf39e2f72f89a014fed 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2897,7 +2897,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2898,7 +2898,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
this.player = this.server.getPlayerList().respawn(this.player, false, RespawnReason.DEATH);
if (this.server.isHardcore()) {
this.player.setGameMode(GameType.SPECTATOR, org.bukkit.event.player.PlayerGameModeChangeEvent.Cause.HARDCORE_DEATH, null); // Paper
diff --git a/patches/server/0823-Don-t-tick-markers.patch b/patches/server/0823-Don-t-tick-markers.patch
index a9efa9917b..aeeeff22c9 100644
--- a/patches/server/0823-Don-t-tick-markers.patch
+++ b/patches/server/0823-Don-t-tick-markers.patch
@@ -23,10 +23,10 @@ index ff99336e0b8131ae161cfa5c4fc83c6905e3dbc8..5f43aedc6596e2b1ac7af97115157147
}
});
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index b89544fdfceee67fb452c37294131efca007e2fe..2737bd5f1915466a73dd4e093e35301c4353ddea 100644
+index e03810a2c4dea09f52236e7d25877cd803ed764f..6ed2833f89fd159daa81cd8b577be31a886469eb 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2566,6 +2566,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2647,6 +2647,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
}
public void onTickingStart(Entity entity) {
diff --git a/patches/server/0824-Do-not-accept-invalid-client-settings.patch b/patches/server/0824-Do-not-accept-invalid-client-settings.patch
index bf0cf0e10e..fe09e6926f 100644
--- a/patches/server/0824-Do-not-accept-invalid-client-settings.patch
+++ b/patches/server/0824-Do-not-accept-invalid-client-settings.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Do not accept invalid client settings
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 67c39acfc811eaef9ea58f76c9da9dcb10a031a2..7675bb14e02322ea58cfde1dd6837c0ddefdd798 100644
+index 079fc18274102dea52b14bf39e2f72f89a014fed..af9e34f5f07626121f7df16d077360940bda7e43 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -3449,6 +3449,13 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -3450,6 +3450,13 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
@Override
public void handleClientInformation(ServerboundClientInformationPacket packet) {
PacketUtils.ensureRunningOnSameThread(packet, this, this.player.serverLevel());
diff --git a/patches/server/0832-Add-Alternate-Current-redstone-implementation.patch b/patches/server/0832-Add-Alternate-Current-redstone-implementation.patch
index eec0d30ba0..a25de7286a 100644
--- a/patches/server/0832-Add-Alternate-Current-redstone-implementation.patch
+++ b/patches/server/0832-Add-Alternate-Current-redstone-implementation.patch
@@ -2008,7 +2008,7 @@ index 0000000000000000000000000000000000000000..33cd90c30c22200a4e1ae64f40a0bf78
+ }
+}
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 2737bd5f1915466a73dd4e093e35301c4353ddea..4e2b2223c90f1e994dcd584dfa570953caf37a55 100644
+index 6ed2833f89fd159daa81cd8b577be31a886469eb..90b69060de8bf17b3414a79b901ee7c652f646a8 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -222,6 +222,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
@@ -2019,7 +2019,7 @@ index 2737bd5f1915466a73dd4e093e35301c4353ddea..4e2b2223c90f1e994dcd584dfa570953
public static Throwable getAddToWorldStackTrace(Entity entity) {
final Throwable thr = new Throwable(entity + " Added to world at " + new java.util.Date());
io.papermc.paper.util.StacktraceDeobfuscator.INSTANCE.deobfuscateThrowable(thr);
-@@ -2555,6 +2556,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2636,6 +2637,13 @@ public class ServerLevel extends Level implements WorldGenLevel {
return this.randomSequences;
}
diff --git a/patches/server/0839-Prevent-empty-items-from-being-added-to-world.patch b/patches/server/0839-Prevent-empty-items-from-being-added-to-world.patch
index 2ded85641e..202f89a88e 100644
--- a/patches/server/0839-Prevent-empty-items-from-being-added-to-world.patch
+++ b/patches/server/0839-Prevent-empty-items-from-being-added-to-world.patch
@@ -7,10 +7,10 @@ The previous solution caused a bunch of band-aid fixes in order to resolve edge ca
Just simply prevent them from being added to the world instead.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 4e2b2223c90f1e994dcd584dfa570953caf37a55..6369fba72d7bf2db00f7df064242561681f35a41 100644
+index 90b69060de8bf17b3414a79b901ee7c652f646a8..1ee55cb0e1a75f45e5d10feb8a611b45653df60a 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1564,6 +1564,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1645,6 +1645,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
// WorldServer.LOGGER.warn("Tried to add entity {} but it was marked as removed already", EntityTypes.getKey(entity.getType())); // CraftBukkit
return false;
} else {
diff --git a/patches/server/0841-Don-t-print-component-in-resource-pack-rejection-mes.patch b/patches/server/0841-Don-t-print-component-in-resource-pack-rejection-mes.patch
index 5a5bd60f2a..1473317b2a 100644
--- a/patches/server/0841-Don-t-print-component-in-resource-pack-rejection-mes.patch
+++ b/patches/server/0841-Don-t-print-component-in-resource-pack-rejection-mes.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Don't print component in resource pack rejection message
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 7675bb14e02322ea58cfde1dd6837c0ddefdd798..cb7146db52b59e7d51146d79a1655292e52ab286 100644
+index af9e34f5f07626121f7df16d077360940bda7e43..c10aba1a414eaa0349ab0b325b6673eb2142bb05 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2031,7 +2031,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2032,7 +2032,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
public void handleResourcePackResponse(ServerboundResourcePackPacket packet) {
PacketUtils.ensureRunningOnSameThread(packet, this, this.player.serverLevel());
if (packet.getAction() == ServerboundResourcePackPacket.Action.DECLINED && this.server.isResourcePackRequired()) {
diff --git a/patches/server/0845-Add-some-minimal-debug-information-to-chat-packet-er.patch b/patches/server/0845-Add-some-minimal-debug-information-to-chat-packet-er.patch
index 0614843ce5..7cd663d9a6 100644
--- a/patches/server/0845-Add-some-minimal-debug-information-to-chat-packet-er.patch
+++ b/patches/server/0845-Add-some-minimal-debug-information-to-chat-packet-er.patch
@@ -6,10 +6,10 @@ Subject: [PATCH] Add some minimal debug information to chat packet errors
TODO: potentially add some kick leeway
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index cb7146db52b59e7d51146d79a1655292e52ab286..3e8d06db9f182ce7a09887f850f4d0a8556e5ae1 100644
+index c10aba1a414eaa0349ab0b325b6673eb2142bb05..e348a3da6f743b211aaef8af738f31bb4d196370 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2289,7 +2289,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2290,7 +2290,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
private Optional tryHandleChat(String message, Instant timestamp, LastSeenMessages.Update acknowledgment) {
if (!this.updateChatOrder(timestamp)) {
diff --git a/patches/server/0847-Fix-Spigot-Config-not-using-commands.spam-exclusions.patch b/patches/server/0847-Fix-Spigot-Config-not-using-commands.spam-exclusions.patch
index 0c14e78441..924f4afdf4 100644
--- a/patches/server/0847-Fix-Spigot-Config-not-using-commands.spam-exclusions.patch
+++ b/patches/server/0847-Fix-Spigot-Config-not-using-commands.spam-exclusions.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Fix Spigot Config not using commands.spam-exclusions
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 3e8d06db9f182ce7a09887f850f4d0a8556e5ae1..e197c14ff64ef38a2c7a911906c62178b83002fd 100644
+index e348a3da6f743b211aaef8af738f31bb4d196370..2c8e77df54167ac6b3ae3b17f86a9e147aa27a05 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2536,7 +2536,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2537,7 +2537,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
}
// Spigot end
// this.chatSpamTickCount += 20;
diff --git a/patches/server/0848-More-Teleport-API.patch b/patches/server/0848-More-Teleport-API.patch
index 091ae1381c..7454d6fa5f 100644
--- a/patches/server/0848-More-Teleport-API.patch
+++ b/patches/server/0848-More-Teleport-API.patch
@@ -7,10 +7,10 @@ Subject: [PATCH] More Teleport API
public net.minecraft.server.network.ServerGamePacketListenerImpl internalTeleport(DDDFFLjava/util/Set;Z)V
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index e197c14ff64ef38a2c7a911906c62178b83002fd..6c132a65916520ad7c4f09c65aed1ce5d0cc6f49 100644
+index 2c8e77df54167ac6b3ae3b17f86a9e147aa27a05..45f28f27304373733128b4e686d0b1b21876d5c1 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -1707,11 +1707,17 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -1708,11 +1708,17 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
return false; // CraftBukkit - Return event status
}
diff --git a/patches/server/0851-Send-block-entities-after-destroy-prediction.patch b/patches/server/0851-Send-block-entities-after-destroy-prediction.patch
index f6527ac1ca..4e35b798ad 100644
--- a/patches/server/0851-Send-block-entities-after-destroy-prediction.patch
+++ b/patches/server/0851-Send-block-entities-after-destroy-prediction.patch
@@ -57,10 +57,10 @@ index 0f0cf4fdfcbf8537696f15f98f3fb7e68baeb27c..c38268b11dd5a76d5b3c2013c241063c
}
}
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 6c132a65916520ad7c4f09c65aed1ce5d0cc6f49..8011128c540f964cebec0b8730a0caffb007c138 100644
+index 45f28f27304373733128b4e686d0b1b21876d5c1..4d89165216883e58f6bfc732f6a47eb607ef569c 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -1853,8 +1853,28 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -1854,8 +1854,28 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
return;
}
// Paper end - Don't allow digging in unloaded chunks
diff --git a/patches/server/0857-Remove-invalid-signature-login-stacktrace.patch b/patches/server/0857-Remove-invalid-signature-login-stacktrace.patch
index b542de176b..eb9c7faec2 100644
--- a/patches/server/0857-Remove-invalid-signature-login-stacktrace.patch
+++ b/patches/server/0857-Remove-invalid-signature-login-stacktrace.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Remove invalid signature login stacktrace
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 8011128c540f964cebec0b8730a0caffb007c138..ee2043891c04b95fccccbd19c79b91d7b8fcb436 100644
+index 4d89165216883e58f6bfc732f6a47eb607ef569c..4e0397b59e3c11dc8362610bbc90a841a624394d 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -3584,7 +3584,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -3585,7 +3585,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
this.resetPlayerChatState(remotechatsession_a.validate(this.player.getGameProfile(), signaturevalidator, Duration.ZERO));
} catch (ProfilePublicKey.ValidationException profilepublickey_b) {
diff --git a/patches/server/0870-Configurable-chat-thread-limit.patch b/patches/server/0870-Configurable-chat-thread-limit.patch
index 9aeac6e02c..1ebc6454e1 100644
--- a/patches/server/0870-Configurable-chat-thread-limit.patch
+++ b/patches/server/0870-Configurable-chat-thread-limit.patch
@@ -22,10 +22,10 @@ is actually processed, this is honestly really just exposed for the misnomers or
who just wanna ensure that this won't grow over a specific size if chat gets stupidly active
diff --git a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
-index 09234062090c210227350cafeed141f8cb73108a..9f5f0d8ddc8f480b48079c70e38c9c08eff403f6 100644
+index 3294da27227b5a332904398afa56d21ea97d55f0..77d05f7efdcdceef681a75692c208075d873d368 100644
--- a/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
+++ b/src/main/java/io/papermc/paper/configuration/GlobalConfiguration.java
-@@ -254,13 +254,26 @@ public class GlobalConfiguration extends ConfigurationPart {
+@@ -239,13 +239,26 @@ public class GlobalConfiguration extends ConfigurationPart {
public Misc misc;
public class Misc extends ConfigurationPart {
diff --git a/patches/server/0875-Fix-a-bunch-of-vanilla-bugs.patch b/patches/server/0875-Fix-a-bunch-of-vanilla-bugs.patch
index 70db915647..978419875c 100644
--- a/patches/server/0875-Fix-a-bunch-of-vanilla-bugs.patch
+++ b/patches/server/0875-Fix-a-bunch-of-vanilla-bugs.patch
@@ -79,10 +79,10 @@ index 6cd6d69a20e95e344fc18ab67dc300824537a59b..2e2a7c2cf3081187da817479a9da3eb1
}
}
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 6369fba72d7bf2db00f7df064242561681f35a41..3a5686086e113eed8b80bb65f5b05d4b81138b00 100644
+index 1ee55cb0e1a75f45e5d10feb8a611b45653df60a..b4eae79189397995cbc81d449fcd82c7a6ea8327 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1010,7 +1010,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1091,7 +1091,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
} else {
AABB axisalignedbb = (new AABB(blockposition1, new BlockPos(blockposition1.getX(), this.getMaxBuildHeight(), blockposition1.getZ()))).inflate(3.0D);
List list = this.getEntitiesOfClass(LivingEntity.class, axisalignedbb, (entityliving) -> {
diff --git a/patches/server/0876-Remove-unnecessary-onTrackingStart-during-navigation.patch b/patches/server/0876-Remove-unnecessary-onTrackingStart-during-navigation.patch
index 53d6383999..b22d5c46dc 100644
--- a/patches/server/0876-Remove-unnecessary-onTrackingStart-during-navigation.patch
+++ b/patches/server/0876-Remove-unnecessary-onTrackingStart-during-navigation.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Remove unnecessary onTrackingStart during navigation warning
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 3a5686086e113eed8b80bb65f5b05d4b81138b00..83267ae18c606116cf0c0b55549dc732f269a5d7 100644
+index b4eae79189397995cbc81d449fcd82c7a6ea8327..d3e2005f4f1abd9ad5bda6acc8860bd74a2d2d25 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2602,7 +2602,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2683,7 +2683,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (entity instanceof Mob) {
Mob entityinsentient = (Mob) entity;
@@ -17,7 +17,7 @@ index 3a5686086e113eed8b80bb65f5b05d4b81138b00..83267ae18c606116cf0c0b55549dc732
String s = "onTrackingStart called during navigation iteration";
Util.logAndPauseIfInIde("onTrackingStart called during navigation iteration", new IllegalStateException("onTrackingStart called during navigation iteration"));
-@@ -2687,7 +2687,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2768,7 +2768,7 @@ public class ServerLevel extends Level implements WorldGenLevel {
if (entity instanceof Mob) {
Mob entityinsentient = (Mob) entity;
diff --git a/patches/server/0901-check-global-player-list-where-appropriate.patch b/patches/server/0901-check-global-player-list-where-appropriate.patch
index 2206846285..d9a259b62c 100644
--- a/patches/server/0901-check-global-player-list-where-appropriate.patch
+++ b/patches/server/0901-check-global-player-list-where-appropriate.patch
@@ -7,10 +7,10 @@ Makes certain entities check all players when searching for a player
instead of just checking players in their world.
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index 83267ae18c606116cf0c0b55549dc732f269a5d7..a97926b5d36c92d5696c6b7a547ada30d652afbd 100644
+index d3e2005f4f1abd9ad5bda6acc8860bd74a2d2d25..5932cfd855ee08442e0f8efb52bd9f2de8d6fcc7 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -2724,4 +2724,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -2805,4 +2805,12 @@ public class ServerLevel extends Level implements WorldGenLevel {
entity.updateDynamicGameEventListener(DynamicGameEventListener::move);
}
}
diff --git a/patches/server/0910-Properly-resend-entities.patch b/patches/server/0910-Properly-resend-entities.patch
index 3855189110..cc4362ad60 100644
--- a/patches/server/0910-Properly-resend-entities.patch
+++ b/patches/server/0910-Properly-resend-entities.patch
@@ -66,10 +66,10 @@ index d088479d160dbd2fc90b48a30553be141db8eef2..bf6a70a69bb695ec1a202cd1e863c468
public static class DataItem {
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index ee2043891c04b95fccccbd19c79b91d7b8fcb436..d4c683b99bdc8fa76749b8edd1136303f7367101 100644
+index 4e0397b59e3c11dc8362610bbc90a841a624394d..61aa5fec06b2c9a0986596790fc735caec1cc564 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2795,7 +2795,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2796,7 +2796,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
// Entity in bucket - SPIGOT-4048 and SPIGOT-6859a
if ((entity instanceof Bucketable && entity instanceof LivingEntity && origItem != null && origItem.asItem() == Items.WATER_BUCKET) && (event.isCancelled() || ServerGamePacketListenerImpl.this.player.getInventory().getSelected() == null || ServerGamePacketListenerImpl.this.player.getInventory().getSelected().getItem() != origItem)) {
diff --git a/patches/server/0920-Add-missing-SpigotConfig-logCommands-check.patch b/patches/server/0920-Add-missing-SpigotConfig-logCommands-check.patch
index eb49640d47..89a3428a62 100644
--- a/patches/server/0920-Add-missing-SpigotConfig-logCommands-check.patch
+++ b/patches/server/0920-Add-missing-SpigotConfig-logCommands-check.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Add missing SpigotConfig logCommands check
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index d4c683b99bdc8fa76749b8edd1136303f7367101..60d1a2ec1f7b800d66f923ae3149db2c6a62691b 100644
+index 61aa5fec06b2c9a0986596790fc735caec1cc564..06ea0185c7eb6bbb2ece220db4c0795ddabc465b 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2253,7 +2253,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2254,7 +2254,9 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
private void performChatCommand(ServerboundChatCommandPacket packet, LastSeenMessages lastSeenMessages) {
// CraftBukkit start
String command = "/" + packet.command();
diff --git a/patches/server/0926-Use-single-player-info-update-packet-on-join.patch b/patches/server/0926-Use-single-player-info-update-packet-on-join.patch
index e18b9aec88..720ef45ae7 100644
--- a/patches/server/0926-Use-single-player-info-update-packet-on-join.patch
+++ b/patches/server/0926-Use-single-player-info-update-packet-on-join.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Use single player info update packet on join
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 60d1a2ec1f7b800d66f923ae3149db2c6a62691b..9ca6a2e610b6a82286bb617177bec657fbb13c72 100644
+index 06ea0185c7eb6bbb2ece220db4c0795ddabc465b..47e61033ccebe5cd2a850207aaed6235cc14b599 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -3599,7 +3599,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -3600,7 +3600,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
this.signedMessageDecoder = session.createMessageDecoder(this.player.getUUID());
this.chatMessageChain.append((executor) -> {
this.player.setChatSession(session);
diff --git a/patches/server/0948-Treat-sequence-violations-like-they-should-be.patch b/patches/server/0948-Treat-sequence-violations-like-they-should-be.patch
index 33bf73e6a7..828ff0dd57 100644
--- a/patches/server/0948-Treat-sequence-violations-like-they-should-be.patch
+++ b/patches/server/0948-Treat-sequence-violations-like-they-should-be.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Treat sequence violations like they should be
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index 9ca6a2e610b6a82286bb617177bec657fbb13c72..ed54d3f9ab92b3391d459bcd92b7b00adc6e6238 100644
+index 47e61033ccebe5cd2a850207aaed6235cc14b599..307d5eed074df91ed4746881ffeae22041ecde0d 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-@@ -2123,6 +2123,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -2124,6 +2124,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
public void ackBlockChangesUpTo(int sequence) {
if (sequence < 0) {
diff --git a/patches/server/0950-Prevent-causing-expired-keys-from-impacting-new-join.patch b/patches/server/0950-Prevent-causing-expired-keys-from-impacting-new-join.patch
index fd51cb23be..a91f26199c 100644
--- a/patches/server/0950-Prevent-causing-expired-keys-from-impacting-new-join.patch
+++ b/patches/server/0950-Prevent-causing-expired-keys-from-impacting-new-join.patch
@@ -24,7 +24,7 @@ index 23e0e6937e28f09271a4ec7c35e0076a576cf3d3..4aa8b483841028fbcc43f9ed47730881
UPDATE_GAME_MODE((serialized, buf) -> {
serialized.gameMode = GameType.byId(buf.readVarInt());
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
-index ed54d3f9ab92b3391d459bcd92b7b00adc6e6238..705181a7cc203b7f60d7038d1f341c2d52cec6b1 100644
+index 307d5eed074df91ed4746881ffeae22041ecde0d..10c387c813d5af1c9522d361f2c843c096fcbe6e 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
@@ -296,6 +296,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
@@ -49,7 +49,7 @@ index ed54d3f9ab92b3391d459bcd92b7b00adc6e6238..705181a7cc203b7f60d7038d1f341c2d
}
public void resetPosition() {
-@@ -3597,6 +3605,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
+@@ -3598,6 +3606,7 @@ public class ServerGamePacketListenerImpl implements ServerPlayerConnection, Tic
private void resetPlayerChatState(RemoteChatSession session) {
this.chatSession = session;
diff --git a/patches/server/0951-Prevent-GameEvents-being-fired-from-unloaded-chunks.patch b/patches/server/0951-Prevent-GameEvents-being-fired-from-unloaded-chunks.patch
index 97b6952f55..5211deda6f 100644
--- a/patches/server/0951-Prevent-GameEvents-being-fired-from-unloaded-chunks.patch
+++ b/patches/server/0951-Prevent-GameEvents-being-fired-from-unloaded-chunks.patch
@@ -5,10 +5,10 @@ Subject: [PATCH] Prevent GameEvents being fired from unloaded chunks
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
-index a97926b5d36c92d5696c6b7a547ada30d652afbd..42d5b4ffc51da90a8f3bbec84e44ac2b0cb7b5ee 100644
+index 5932cfd855ee08442e0f8efb52bd9f2de8d6fcc7..2ac23779222369ace69f1e3f7fb12184865b7a43 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
-@@ -1701,6 +1701,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
+@@ -1782,6 +1782,11 @@ public class ServerLevel extends Level implements WorldGenLevel {
@Override
public void gameEvent(GameEvent event, Vec3 emitterPos, GameEvent.Context emitter) {
diff --git a/patches/unapplied/server/0018-Rewrite-chunk-system.patch b/patches/unapplied/server/0018-Rewrite-chunk-system.patch
deleted file mode 100644
index 4480292070..0000000000
--- a/patches/unapplied/server/0018-Rewrite-chunk-system.patch
+++ /dev/null
@@ -1,18182 +0,0 @@
-From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
-From: Spottedleaf
-Date: Thu, 11 Mar 2021 02:32:30 -0800
-Subject: [PATCH] Rewrite chunk system
-
-== AT ==
-public net.minecraft.server.level.ChunkMap setViewDistance(I)V
-public net.minecraft.server.level.ChunkHolder pos
-public net.minecraft.server.level.ChunkMap overworldDataStorage
-public-f net.minecraft.world.level.chunk.storage.RegionFileStorage
-public net.minecraft.server.level.ChunkMap getPoiManager()Lnet/minecraft/world/entity/ai/village/poi/PoiManager;
-
-diff --git a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
-index 9a5fa60cb8156fe254a123e237d957ccb82f7195..0f7d36933e34e1d1b9dd27d8b0c35ff883818526 100644
---- a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
-+++ b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
-@@ -41,14 +41,14 @@ public final class StarLightInterface {
- protected final ArrayDeque<SkyStarLightEngine> cachedSkyPropagators;
- protected final ArrayDeque<BlockStarLightEngine> cachedBlockPropagators;
-
-- protected final LightQueue lightQueue = new LightQueue(this);
-+ public final io.papermc.paper.chunk.system.light.LightQueue lightQueue; // Paper - replace light queue
-
- protected final LayerLightEventListener skyReader;
- protected final LayerLightEventListener blockReader;
- protected final boolean isClientSide;
-
-- protected final int minSection;
-- protected final int maxSection;
-+ public final int minSection; // Paper - public
-+ public final int maxSection; // Paper - public
- protected final int minLightSection;
- protected final int maxLightSection;
-
-@@ -182,6 +182,7 @@ public final class StarLightInterface {
- StarLightInterface.this.sectionChange(pos, notReady);
- }
- };
-+ this.lightQueue = new io.papermc.paper.chunk.system.light.LightQueue(this); // Paper - replace light queue
- }
-
- protected int getSkyLightValue(final BlockPos blockPos, final ChunkAccess chunk) {
-@@ -325,7 +326,7 @@ public final class StarLightInterface {
- return this.lightAccess;
- }
-
-- protected final SkyStarLightEngine getSkyLightEngine() {
-+ public final SkyStarLightEngine getSkyLightEngine() { // Paper - public
- if (this.cachedSkyPropagators == null) {
- return null;
- }
-@@ -340,7 +341,7 @@ public final class StarLightInterface {
- return ret;
- }
-
-- protected final void releaseSkyLightEngine(final SkyStarLightEngine engine) {
-+ public final void releaseSkyLightEngine(final SkyStarLightEngine engine) { // Paper - public
- if (this.cachedSkyPropagators == null) {
- return;
- }
-@@ -349,7 +350,7 @@ public final class StarLightInterface {
- }
- }
-
-- protected final BlockStarLightEngine getBlockLightEngine() {
-+ public final BlockStarLightEngine getBlockLightEngine() { // Paper - public
- if (this.cachedBlockPropagators == null) {
- return null;
- }
-@@ -364,7 +365,7 @@ public final class StarLightInterface {
- return ret;
- }
-
-- protected final void releaseBlockLightEngine(final BlockStarLightEngine engine) {
-+ public final void releaseBlockLightEngine(final BlockStarLightEngine engine) { // Paper - public
- if (this.cachedBlockPropagators == null) {
- return;
- }
-@@ -511,57 +512,15 @@ public final class StarLightInterface {
- }
-
- public void scheduleChunkLight(final ChunkPos pos, final Runnable run) {
-- this.lightQueue.queueChunkLighting(pos, run);
-+ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
- }
-
- public void removeChunkTasks(final ChunkPos pos) {
-- this.lightQueue.removeChunk(pos);
-+ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
- }
-
- public void propagateChanges() {
-- if (this.lightQueue.isEmpty()) {
-- return;
-- }
--
-- final SkyStarLightEngine skyEngine = this.getSkyLightEngine();
-- final BlockStarLightEngine blockEngine = this.getBlockLightEngine();
--
-- try {
-- LightQueue.ChunkTasks task;
-- while ((task = this.lightQueue.removeFirstTask()) != null) {
-- if (task.lightTasks != null) {
-- for (final Runnable run : task.lightTasks) {
-- run.run();
-- }
-- }
--
-- final long coordinate = task.chunkCoordinate;
-- final int chunkX = CoordinateUtils.getChunkX(coordinate);
-- final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
--
- final Set<BlockPos> positions = task.changedPositions;
-- final Boolean[] sectionChanges = task.changedSectionSet;
--
-- if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
-- skyEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
-- }
-- if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
-- blockEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
-- }
--
-- if (skyEngine != null && task.queuedEdgeChecksSky != null) {
-- skyEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksSky);
-- }
-- if (blockEngine != null && task.queuedEdgeChecksBlock != null) {
-- blockEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksBlock);
-- }
--
-- task.onComplete.complete(null);
-- }
-- } finally {
-- this.releaseSkyLightEngine(skyEngine);
-- this.releaseBlockLightEngine(blockEngine);
-- }
-+ throw new UnsupportedOperationException("No longer implemented, task draining is now performed by the light thread"); // Paper - replace light queue
- }
-
- protected static final class LightQueue {
-diff --git a/src/main/java/co/aikar/timings/TimingsExport.java b/src/main/java/co/aikar/timings/TimingsExport.java
-index 38f01952153348d937e326da0ec102cd9b0f80af..43380d5e3a40b64bebdf3c0e7c48eca8998c8ac0 100644
---- a/src/main/java/co/aikar/timings/TimingsExport.java
-+++ b/src/main/java/co/aikar/timings/TimingsExport.java
-@@ -163,7 +163,11 @@ public class TimingsExport extends Thread {
- pair("gamerules", toObjectMapper(world.getWorld().getGameRules(), rule -> {
- return pair(rule, world.getWorld().getGameRuleValue(rule));
- })),
-- pair("ticking-distance", world.getChunkSource().chunkMap.getEffectiveViewDistance())
-+ // Paper start - replace chunk loader system
-+ pair("ticking-distance", world.getChunkSource().chunkMap.playerChunkManager.getTargetTickViewDistance()),
-+ pair("no-ticking-distance", world.getChunkSource().chunkMap.playerChunkManager.getTargetNoTickViewDistance()),
-+ pair("sending-distance", world.getChunkSource().chunkMap.playerChunkManager.getTargetSendDistance())
-+ // Paper end - replace chunk loader system
- ));
- }));
-
-diff --git a/src/main/java/co/aikar/timings/WorldTimingsHandler.java b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
-index 2f0d9b953802dee821cfde82d22b0567cce8ee91..22687667ec69a954261e55e59261286ac1b8b8cd 100644
---- a/src/main/java/co/aikar/timings/WorldTimingsHandler.java
-+++ b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
-@@ -59,6 +59,16 @@ public class WorldTimingsHandler {
-
- public final Timing miscMobSpawning;
-
-+ public final Timing poiUnload;
-+ public final Timing chunkUnload;
-+ public final Timing poiSaveDataSerialization;
-+ public final Timing chunkSave;
-+ public final Timing chunkSaveDataSerialization;
-+ public final Timing chunkSaveIOWait;
-+ public final Timing chunkUnloadPrepareSave;
-+ public final Timing chunkUnloadPOISerialization;
-+ public final Timing chunkUnloadDataSave;
-+
- public WorldTimingsHandler(Level server) {
- String name = ((PrimaryLevelData) server.getLevelData()).getLevelName() + " - ";
-
-@@ -112,6 +122,16 @@ public class WorldTimingsHandler {
-
-
- miscMobSpawning = Timings.ofSafe(name + "Mob spawning - Misc");
-+
-+ poiUnload = Timings.ofSafe(name + "Chunk unload - POI");
-+ chunkUnload = Timings.ofSafe(name + "Chunk unload - Chunk");
-+ poiSaveDataSerialization = Timings.ofSafe(name + "Chunk save - POI Data serialization");
-+ chunkSave = Timings.ofSafe(name + "Chunk save - Chunk");
-+ chunkSaveDataSerialization = Timings.ofSafe(name + "Chunk save - Chunk Data serialization");
-+ chunkSaveIOWait = Timings.ofSafe(name + "Chunk save - Chunk IO Wait");
-+ chunkUnloadPrepareSave = Timings.ofSafe(name + "Chunk unload - Async Save Prepare");
-+ chunkUnloadPOISerialization = Timings.ofSafe(name + "Chunk unload - POI Data Serialization");
-+ chunkUnloadDataSave = Timings.ofSafe(name + "Chunk unload - Data Serialization");
- }
-
- public static Timing getTickList(ServerLevel worldserver, String timingsType) {
-diff --git a/src/main/java/com/destroystokyo/paper/io/IOUtil.java b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..e064f96c90afd1a4890060baa055cfd0469b6a6f
---- /dev/null
-+++ b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
-@@ -0,0 +1,63 @@
-+package com.destroystokyo.paper.io;
-+
-+import org.bukkit.Bukkit;
-+
-+@Deprecated(forRemoval = true)
-+public final class IOUtil {
-+
-+ /* Copied from concrete or concurrentutil */
-+
-+ public static long getCoordinateKey(final int x, final int z) {
-+ return ((long)z << 32) | (x & 0xFFFFFFFFL);
-+ }
-+
-+ public static int getCoordinateX(final long key) {
-+ return (int)key;
-+ }
-+
-+ public static int getCoordinateZ(final long key) {
-+ return (int)(key >>> 32);
-+ }
-+
-+ public static int getRegionCoordinate(final int chunkCoordinate) {
-+ return chunkCoordinate >> 5;
-+ }
-+
-+ public static int getChunkInRegion(final int chunkCoordinate) {
-+ return chunkCoordinate & 31;
-+ }
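For context, a minimal standalone sketch (not part of this patch; the class name is illustrative) showing how these packing helpers round-trip a coordinate:

    public final class CoordinateKeyExample {
        public static void main(final String[] args) {
            final int x = -3, z = 7;
            // Same packing as IOUtil.getCoordinateKey: z occupies the high 32 bits, x the low 32 bits.
            final long key = ((long) z << 32) | (x & 0xFFFFFFFFL);
            System.out.println((int) key);          // -3: getCoordinateX reads the low 32 bits
            System.out.println((int) (key >>> 32)); // 7: getCoordinateZ reads the high 32 bits
            // getRegionCoordinate uses an arithmetic shift, so negative chunks floor correctly:
            System.out.println(-3 >> 5);            // -1: chunk -3 lies in region -1
        }
    }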
-+
-+ public static String genericToString(final Object object) {
-+ return object == null ? "null" : object.getClass().getName() + ":" + object.toString();
-+ }
-+
-+ public static <T> T notNull(final T obj) {
-+ if (obj == null) {
-+ throw new NullPointerException();
-+ }
-+ return obj;
-+ }
-+
-+ public static <T> T notNull(final T obj, final String msgIfNull) {
-+ if (obj == null) {
-+ throw new NullPointerException(msgIfNull);
-+ }
-+ return obj;
-+ }
-+
-+ public static void arrayBounds(final int off, final int len, final int arrayLength, final String msgPrefix) {
-+ if (off < 0 || len < 0 || (arrayLength - off) < len) {
-+ throw new ArrayIndexOutOfBoundsException(msgPrefix + ": off: " + off + ", len: " + len + ", array length: " + arrayLength);
-+ }
-+ }
-+
-+ public static int getPriorityForCurrentThread() {
-+ return Bukkit.isPrimaryThread() ? PrioritizedTaskQueue.HIGHEST_PRIORITY : PrioritizedTaskQueue.NORMAL_PRIORITY;
-+ }
-+
-+ @SuppressWarnings("unchecked")
-+ public static <T extends Throwable> void rethrow(final Throwable throwable) throws T {
-+ throw (T)throwable;
-+ }
-+
-+}
-diff --git a/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..f2c27e0ac65be4b75c1d86ef6fd45fdb538d96ac
---- /dev/null
-+++ b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
-@@ -0,0 +1,474 @@
-+package com.destroystokyo.paper.io;
-+
-+import com.mojang.logging.LogUtils;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.storage.RegionFile;
-+import org.slf4j.Logger;
-+
-+import java.io.IOException;
-+import java.util.concurrent.CompletableFuture;
-+import java.util.concurrent.ConcurrentHashMap;
-+import java.util.concurrent.atomic.AtomicLong;
-+import java.util.function.Consumer;
-+import java.util.function.Function;
-+
-+/**
-+ * Prioritized singleton thread responsible for all chunk IO that occurs in a minecraft server.
-+ *
-+ * Singleton access: {@link Holder#INSTANCE}
-+ *
-+ * All functions provided are MT-Safe, however certain ordering constraints are expected (but not enforced):
-+ * - Chunk saves may not occur for unloaded chunks.
-+ * - Tasks must be scheduled on the main thread.
-+ *
-+ * @see Holder#INSTANCE
-+ * @see #scheduleSave(ServerLevel, int, int, CompoundTag, CompoundTag, int)
-+ * @see #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)
-+ * @deprecated
-+ */
-+@Deprecated(forRemoval = true)
-+public final class PaperFileIOThread extends QueueExecutorThread<PaperFileIOThread.ChunkDataTask> {
-+
-+ public static final Logger LOGGER = LogUtils.getLogger();
-+ public static final CompoundTag FAILURE_VALUE = new CompoundTag();
-+
-+ public static final class Holder {
-+
-+ public static final PaperFileIOThread INSTANCE = new PaperFileIOThread();
-+
-+ static {
-+ // Paper - fail hard on usage
-+ }
-+ }
-+
-+ private final AtomicLong writeCounter = new AtomicLong();
-+
-+ private PaperFileIOThread() {
-+ super(new PrioritizedTaskQueue<>(), (int)(1.0e6)); // 1.0ms spinwait time
-+ this.setName("Paper RegionFile IO Thread");
-+ this.setPriority(Thread.NORM_PRIORITY - 1); // we keep priority close to normal because threads can wait on us
-+ this.setUncaughtExceptionHandler((final Thread unused, final Throwable thr) -> {
-+ LOGGER.error("Uncaught exception thrown from IO thread, report this!", thr);
-+ });
-+ }
-+
-+ /* run() is implemented by superclass */
-+
-+ /*
-+ *
-+ * IO thread will perform reads before writes
-+ *
-+ * How reads/writes are scheduled:
-+ *
-+ * If read in progress while scheduling write, ignore read and schedule write
-+ * If read in progress while scheduling read (no write in progress), chain the read task
-+ *
-+ *
-+ * If write in progress while scheduling read, use the pending write data and ret immediately
-+ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
-+ *
-+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
-+ * it fails to properly propagate write failures. When writes fail the data is kept so future reads will actually
-+ * read the failed write data. This should hopefully prevent data loss from spurious write failures.
-+ *
-+ */
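To make the coalescing rules above concrete, a minimal standalone sketch follows; PendingChunkIO and its members are hypothetical names for illustration, not the actual implementation:

    import java.util.function.Supplier;

    final class PendingChunkIO {
        private Object pendingWriteData; // the most recent data scheduled for write, if any

        // "If write in progress while scheduling write ... overwrite the write in progress data"
        synchronized void scheduleWrite(final Object data) {
            this.pendingWriteData = data;
        }

        // "If write in progress while scheduling read, use the pending write data and ret immediately",
        // which is what makes reads and writes appear synchronous to the thread scheduling them.
        synchronized Object scheduleRead(final Supplier<Object> diskRead) {
            return this.pendingWriteData != null ? this.pendingWriteData : diskRead.get();
        }
    }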
-+
-+ /**
-+ * Attempts to bump the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param priority Priority level to try to bump to
-+ */
-+ public void bumpPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ public CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final boolean poiData) {
-+ // Paper start - rewrite chunk system
-+ return io.papermc.paper.chunk.system.io.RegionFileIOThread.getPendingWrite(
-+ world, chunkX, chunkZ, poiData ? io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA :
-+ io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA
-+ );
-+ // Paper end - rewrite chunk system
-+ }
-+
-+ /**
-+ * Sets the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param priority Priority level to set to
-+ */
-+ public void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ /**
-+ * Schedules the chunk data to be written asynchronously.
-+ *
-+ * Impl notes:
-+ * - This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
-+ *   saves must be scheduled before a chunk is unloaded.
-+ * - Writes may be called concurrently, although only the "later" write will go through.
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param poiData Chunk point of interest data. If {@code null}, then no poi data is saved.
-+ * @param chunkData Chunk data. If {@code null}, then no chunk data is saved.
-+ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
-+ * @throws IllegalArgumentException If both {@code poiData} and {@code chunkData} are {@code null}.
-+ * @throws IllegalStateException If the file io thread has shutdown.
-+ */
-+ public void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final CompoundTag poiData, final CompoundTag chunkData,
-+ final int priority) throws IllegalArgumentException {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ private void scheduleWrite(final ChunkDataController dataController, final ServerLevel world,
-+ final int chunkX, final int chunkZ, final CompoundTag data, final int priority, final long writeCounter) {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ /**
-+ * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
-+ * a {@link CompletableFuture} which is potentially completed ASYNCHRONOUSLY ON THE FILE IO THREAD when the load task
-+ * has completed.
-+ *
-+ * Note that if the chunk fails to load the returned future is completed with {@code null}.
-+ *
-+ */
-+ public CompletableFuture<ChunkData> loadChunkDataAsyncFuture(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final int priority, final boolean readPoiData, final boolean readChunkData,
-+ final boolean intendingToBlock) {
-+ final CompletableFuture<ChunkData> future = new CompletableFuture<>();
-+ this.loadChunkDataAsync(world, chunkX, chunkZ, priority, future::complete, readPoiData, readChunkData, intendingToBlock);
-+ return future;
-+ }
-+
-+ /**
-+ * Schedules a load to be executed asynchronously.
-+ *
-+ * Impl notes:
-+ * - If a chunk fails to load, the {@code onComplete} parameter is completed with {@code null}.
-+ * - It is possible for the {@code onComplete} parameter to be given {@link ChunkData} containing data
-+ *   this call did not request.
-+ * - The {@code onComplete} parameter may be completed during the execution of this function synchronously, or it may
-+ *   be completed asynchronously on the file IO thread. Interacting with the file IO thread in the completion of
-+ *   data is undefined behaviour, and can cause deadlock.
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param readPoiData Whether to read point of interest data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
-+ * @param readChunkData Whether to read chunk data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
-+ * @return The {@link PrioritizedTaskQueue.PrioritizedTask} associated with this task. Note that this task does not support
-+ * cancellation.
-+ */
-+ public void loadChunkDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final int priority, final Consumer<ChunkData> onComplete,
-+ final boolean readPoiData, final boolean readChunkData,
-+ final boolean intendingToBlock) {
-+ if (!PrioritizedTaskQueue.validPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority: " + priority);
-+ }
-+
-+ if (!(readPoiData | readChunkData)) {
-+ throw new IllegalArgumentException("Must read chunk data or poi data");
-+ }
-+
-+ final ChunkData complete = new ChunkData();
-+ // Paper start - rewrite chunk system
-+ final java.util.List<io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType> types = new java.util.ArrayList<>();
-+ if (readPoiData) {
-+ types.add(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA);
-+ }
-+ if (readChunkData) {
-+ types.add(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA);
-+ }
-+ final ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority newPriority;
-+ switch (priority) {
-+ case PrioritizedTaskQueue.HIGHEST_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING;
-+ case PrioritizedTaskQueue.HIGHER_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.HIGHEST;
-+ case PrioritizedTaskQueue.HIGH_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.HIGH;
-+ case PrioritizedTaskQueue.NORMAL_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL;
-+ case PrioritizedTaskQueue.LOW_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.LOW;
-+ case PrioritizedTaskQueue.LOWEST_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.IDLE;
-+ default -> throw new IllegalStateException("Legacy priority " + priority + " should be valid");
-+ }
-+ final Consumer<io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileData> transformComplete = (io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileData data) -> {
-+ if (readPoiData) {
-+ if (data.getThrowable(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA) != null) {
-+ complete.poiData = FAILURE_VALUE;
-+ } else {
-+ complete.poiData = data.getData(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA);
-+ }
-+ }
-+
-+ if (readChunkData) {
-+ if (data.getThrowable(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA) != null) {
-+ complete.chunkData = FAILURE_VALUE;
-+ } else {
-+ complete.chunkData = data.getData(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA);
-+ }
-+ }
-+
-+ onComplete.accept(complete);
-+ };
-+ io.papermc.paper.chunk.system.io.RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, transformComplete, intendingToBlock, newPriority, types.toArray(new io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType[0]));
-+ // Paper end - rewrite chunk system
-+
-+ }
-+
-+ // Note: the onComplete may be called asynchronously or synchronously here.
-+ private void scheduleRead(final ChunkDataController dataController, final ServerLevel world,
-+ final int chunkX, final int chunkZ, final Consumer<CompoundTag> onComplete, final int priority,
-+ final boolean intendingToBlock) {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ /**
-+ * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
-+ * the {@link ChunkData} associated with the specified chunk when the task is complete.
-+ * @return The chunk data, or {@code null} if the chunk failed to load.
-+ */
-+ public ChunkData loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
-+ final boolean readPoiData, final boolean readChunkData) {
-+ return this.loadChunkDataAsyncFuture(world, chunkX, chunkZ, priority, readPoiData, readChunkData, true).join();
-+ }
-+
-+ /**
-+ * Schedules the given task at the specified priority to be executed on the IO thread.
-+ *
-+ * Internal api. Do not use.
-+ *
-+ */
-+ public void runTask(final int priority, final Runnable runnable) {
-+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ }
-+
-+ static final class GeneralTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
-+
-+ private final Runnable run;
-+
-+ public GeneralTask(final int priority, final Runnable run) {
-+ super(priority);
-+ this.run = IOUtil.notNull(run, "Task may not be null");
-+ }
-+
-+ @Override
-+ public void run() {
-+ try {
-+ this.run.run();
-+ } catch (final Throwable throwable) {
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ LOGGER.error("Failed to execute general task on IO thread " + IOUtil.genericToString(this.run), throwable);
-+ }
-+ }
-+ }
-+
-+ public static final class ChunkData {
-+
-+ public CompoundTag poiData;
-+ public CompoundTag chunkData;
-+
-+ public ChunkData() {}
-+
-+ public ChunkData(final CompoundTag poiData, final CompoundTag chunkData) {
-+ this.poiData = poiData;
-+ this.chunkData = chunkData;
-+ }
-+ }
-+
-+ public static abstract class ChunkDataController {
-+
-+ // ConcurrentHashMap synchronizes per chain, so reduce the chance of task's hashes colliding.
-+ public final ConcurrentHashMap<Long, ChunkDataTask> tasks = new ConcurrentHashMap<>(64, 0.5f);
-+
-+ public abstract void writeData(final int x, final int z, final CompoundTag compound) throws IOException;
-+ public abstract CompoundTag readData(final int x, final int z) throws IOException;
-+
-+ public abstract <T> T computeForRegionFile(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
-+ public abstract <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
-+
-+ public static final class InProgressWrite {
-+ public long writeCounter;
-+ public CompoundTag data;
-+ }
-+
-+ public static final class InProgressRead {
-+ public final CompletableFuture<CompoundTag> readFuture = new CompletableFuture<>();
-+ }
-+ }
-+
-+ public static final class ChunkDataTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
-+
-+ public ChunkDataController.InProgressWrite inProgressWrite;
-+ public ChunkDataController.InProgressRead inProgressRead;
-+
-+ private final ServerLevel world;
-+ private final int x;
-+ private final int z;
-+ private final ChunkDataController taskController;
-+
-+ public ChunkDataTask(final int priority, final ServerLevel world, final int x, final int z, final ChunkDataController taskController) {
-+ super(priority);
-+ this.world = world;
-+ this.x = x;
-+ this.z = z;
-+ this.taskController = taskController;
-+ }
-+
-+ @Override
-+ public String toString() {
-+ return "Task for world: '" + this.world.getWorld().getName() + "' at " + this.x + "," + this.z +
-+ " poi: " + (this.taskController == null) + ", hash: " + this.hashCode(); // Paper - TODO rewrite chunk system
-+ }
-+
-+ /*
-+ *
-+ * IO thread will perform reads before writes
-+ *
-+ * How reads/writes are scheduled:
-+ *
-+ * If read in progress while scheduling write, ignore read and schedule write
-+ * If read in progress while scheduling read (no write in progress), chain the read task
-+ *
-+ *
-+ * If write in progress while scheduling read, use the pending write data and ret immediately
-+ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
-+ *
-+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
-+ * it fails to properly propagate write failures
-+ *
-+ */
-+
-+ void reschedule(final int priority) {
-+ // priority is checked before this stage // TODO what
-+ this.queue.lazySet(null);
-+ this.priority.lazySet(priority);
-+ PaperFileIOThread.Holder.INSTANCE.queueTask(this);
-+ }
-+
-+ @Override
-+ public void run() {
-+ if (true) throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
-+ ChunkDataController.InProgressRead read = this.inProgressRead;
-+ if (read != null) {
-+ CompoundTag compound = PaperFileIOThread.FAILURE_VALUE;
-+ try {
-+ compound = this.taskController.readData(this.x, this.z);
-+ } catch (final Throwable thr) {
-+ if (thr instanceof ThreadDeath) {
-+ throw (ThreadDeath)thr;
-+ }
-+ LOGGER.error("Failed to read chunk data for task: " + this.toString(), thr);
-+ // fall through to complete with null data
-+ }
-+ read.readFuture.complete(compound);
-+ }
-+
-+ final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(this.x, this.z));
-+
-+ ChunkDataController.InProgressWrite write = this.inProgressWrite;
-+
-+ if (write == null) {
-+ // IntelliJ warns this is invalid, however it does not consider that writes to the task map & the inProgress field can occur concurrently.
-+ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
-+ if (valueInMap == null) {
-+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
-+ }
-+ if (valueInMap != ChunkDataTask.this) {
-+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
-+ }
-+ return valueInMap.inProgressWrite == null ? null : valueInMap;
-+ });
-+
-+ if (inMap == null) {
-+ return; // set the task value to null, indicating we're done
-+ }
-+
-+ // not null, which means there was a concurrent write
-+ write = this.inProgressWrite;
-+ }
-+
-+ for (;;) {
-+ final long writeCounter;
-+ final CompoundTag data;
-+
-+ //noinspection SynchronizationOnLocalVariableOrMethodParameter
-+ synchronized (write) {
-+ writeCounter = write.writeCounter;
-+ data = write.data;
-+ }
-+
-+ boolean failedWrite = false;
-+
-+ try {
-+ this.taskController.writeData(this.x, this.z, data);
-+ } catch (final Throwable thr) {
-+ if (thr instanceof ThreadDeath) {
-+ throw (ThreadDeath)thr;
-+ }
-+ LOGGER.error("Failed to write chunk data for task: " + this.toString(), thr);
-+ failedWrite = true;
-+ }
-+
-+ boolean finalFailWrite = failedWrite;
-+
-+ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
-+ if (valueInMap == null) {
-+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
-+ }
-+ if (valueInMap != ChunkDataTask.this) {
-+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
-+ }
-+ if (valueInMap.inProgressWrite.writeCounter == writeCounter) {
-+ if (finalFailWrite) {
-+ valueInMap.inProgressWrite.writeCounter = -1L;
-+ }
-+
-+ return null;
-+ }
-+ return valueInMap;
-+ // Hack end
-+ });
-+
-+ if (inMap == null) {
-+ // write counter matched, so we wrote the most up-to-date pending data, we're done here
-+ // or we failed to write and successfully set the write counter to -1
-+ return; // we're done here
-+ }
-+
-+ // fetch & write new data
-+ continue;
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..7844a3515430472bd829ff246396bceb0797de1b
---- /dev/null
-+++ b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
-@@ -0,0 +1,299 @@
-+package com.destroystokyo.paper.io;
-+
-+import java.util.concurrent.ConcurrentLinkedQueue;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.concurrent.atomic.AtomicInteger;
-+import java.util.concurrent.atomic.AtomicReference;
-+
-+@Deprecated(forRemoval = true)
-+public class PrioritizedTaskQueue<T extends PrioritizedTaskQueue.PrioritizedTask> {
-+
-+ // lower numbers are a higher priority (except < 0)
-+ // higher priorities are always executed before lower priorities
-+
-+ /**
-+ * Priority value indicating the task has completed or is being completed.
-+ */
-+ public static final int COMPLETING_PRIORITY = -1;
-+
-+ /**
-+ * Highest priority, should only be used for main thread tasks or tasks that are blocking the main thread.
-+ */
-+ public static final int HIGHEST_PRIORITY = 0;
-+
-+ /**
-+ * Should be only used in an IO task so that chunk loads do not wait on other IO tasks.
-+ * This only exists because IO tasks are scheduled before chunk load tasks to decrease IO waiting times.
-+ */
-+ public static final int HIGHER_PRIORITY = 1;
-+
-+ /**
-+ * Should be used for scheduling chunk loads/generation that would increase response times to users.
-+ */
-+ public static final int HIGH_PRIORITY = 2;
-+
-+ /**
-+ * Default priority.
-+ */
-+ public static final int NORMAL_PRIORITY = 3;
-+
-+ /**
-+ * Use for tasks not at all critical and can potentially be delayed.
-+ */
-+ public static final int LOW_PRIORITY = 4;
-+
-+ /**
-+ * Use for tasks that should "eventually" execute.
-+ */
-+ public static final int LOWEST_PRIORITY = 5;
-+
-+ private static final int TOTAL_PRIORITIES = 6;
-+
-+ final ConcurrentLinkedQueue<T>[] queues = (ConcurrentLinkedQueue<T>[])new ConcurrentLinkedQueue[TOTAL_PRIORITIES];
-+
-+ private final AtomicBoolean shutdown = new AtomicBoolean();
-+
-+ {
-+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
-+ this.queues[i] = new ConcurrentLinkedQueue<>();
-+ }
-+ }
-+
-+ /**
-+ * Returns whether the specified priority is valid
-+ */
-+ public static boolean validPriority(final int priority) {
-+ return priority >= 0 && priority < TOTAL_PRIORITIES;
-+ }
-+
-+ /**
-+ * Queues a task.
-+ * @throws IllegalStateException If the task has already been queued. Use {@link PrioritizedTask#raisePriority(int)} to
-+ * raise a task's priority.
-+ * This can also be thrown if the queue has shutdown.
-+ */
-+ public void add(final T task) throws IllegalStateException {
-+ int priority = task.getPriority();
-+ if (priority != COMPLETING_PRIORITY) {
-+ task.setQueue(this);
-+ this.queues[priority].add(task);
-+ }
-+ if (this.shutdown.get()) {
-+ // note: we're not actually sure at this point if our task will go through
-+ throw new IllegalStateException("Queue has shutdown, refusing to execute task " + IOUtil.genericToString(task));
-+ }
-+ }
-+
-+ /**
-+ * Polls the highest priority task currently available. {@code null} if none.
-+ */
-+ public T poll() {
-+ T task;
-+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
-+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
-+
-+ while ((task = queue.poll()) != null) {
-+ final int prevPriority = task.tryComplete(i);
-+ if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
-+ // if the prev priority was greater-than or equal to our current priority
-+ return task;
-+ }
-+ }
-+ }
-+
-+ return null;
-+ }
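For illustration, a sketch of how poll order follows priority rather than insertion order; ExampleTask is a hypothetical subclass, and the usage assumes the generic signatures above:

    final class ExampleTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
        private final String name;
        ExampleTask(final int priority, final String name) { super(priority); this.name = name; }
        @Override public void run() { System.out.println(this.name); }
    }

    // PrioritizedTaskQueue<ExampleTask> queue = new PrioritizedTaskQueue<>();
    // queue.add(new ExampleTask(PrioritizedTaskQueue.NORMAL_PRIORITY, "normal"));   // queued first
    // queue.add(new ExampleTask(PrioritizedTaskQueue.HIGHEST_PRIORITY, "highest")); // queued second
    // queue.poll().run(); // prints "highest": the priority-0 queue drains before priority 3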
-+
-+ /**
-+ * Polls the highest priority task currently available. {@code null} if none.
-+ */
-+ public T poll(final int lowestPriority) {
-+ T task;
-+ final int max = Math.min(LOWEST_PRIORITY, lowestPriority);
-+ for (int i = 0; i <= max; ++i) {
-+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
-+
-+ while ((task = queue.poll()) != null) {
-+ final int prevPriority = task.tryComplete(i);
-+ if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
-+ // if the prev priority was greater-than or equal to our current priority
-+ return task;
-+ }
-+ }
-+ }
-+
-+ return null;
-+ }
-+
-+ /**
-+ * Returns whether this queue may have tasks queued.
-+ *
-+ * This operation is not atomic, but is MT-Safe.
-+ *
-+ * @return {@code true} if tasks may be queued, {@code false} otherwise
-+ */
-+ public boolean hasTasks() {
-+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
-+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
-+
-+ if (queue.peek() != null) {
-+ return true;
-+ }
-+ }
-+ return false;
-+ }
-+
-+ /**
-+ * Prevent further additions to this queue. Attempts to add after this call has completed (potentially during) will
-+ * result in {@link IllegalStateException} being thrown.
-+ *
-+ * This operation is atomic with respect to other shutdown calls.
-+ *
-+ * After this call has completed, regardless of return value, this queue will be shutdown.
-+ *
-+ * @return {@code true} if the queue was shutdown, {@code false} if it has shut down already
-+ */
-+ public boolean shutdown() {
-+ return !this.shutdown.getAndSet(true); // returns true only for the call that actually performed the shutdown
-+ }
-+
-+ public abstract static class PrioritizedTask {
-+
-+ protected final AtomicReference<PrioritizedTaskQueue> queue = new AtomicReference<>();
-+
-+ protected final AtomicInteger priority;
-+
-+ protected PrioritizedTask() {
-+ this(PrioritizedTaskQueue.NORMAL_PRIORITY);
-+ }
-+
-+ protected PrioritizedTask(final int priority) {
-+ if (!PrioritizedTaskQueue.validPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.priority = new AtomicInteger(priority);
-+ }
-+
-+ /**
-+ * Returns the current priority. Note that {@link PrioritizedTaskQueue#COMPLETING_PRIORITY} will be returned
-+ * if this task is completing or has completed.
-+ */
-+ public final int getPriority() {
-+ return this.priority.get();
-+ }
-+
-+ /**
-+ * Returns whether this task is scheduled to execute, or has been already executed.
-+ */
-+ public boolean isScheduled() {
-+ return this.queue.get() != null;
-+ }
-+
-+ final int tryComplete(final int minPriority) {
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if (curr == COMPLETING_PRIORITY) {
-+ return COMPLETING_PRIORITY;
-+ }
-+ if (curr > minPriority) {
-+ // curr is lower priority
-+ return curr;
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, COMPLETING_PRIORITY))) {
-+ return curr;
-+ }
-+ continue;
-+ }
-+ }
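The curr == (curr = compareAndExchangePriorityVolatile(...)) pattern above both tests the exchange and refreshes curr for the retry; an equivalent standalone sketch using a plain AtomicInteger (illustrative, Java 9+):

    import java.util.concurrent.atomic.AtomicInteger;

    public final class TryCompleteSketch {
        public static void main(final String[] args) {
            final AtomicInteger priority = new AtomicInteger(3); // e.g. NORMAL_PRIORITY
            int curr = priority.get();
            for (;;) {
                // compareAndExchange returns the witnessed value: equal to 'curr' means our swap won.
                final int witnessed = priority.compareAndExchange(curr, -1); // -1 = COMPLETING_PRIORITY
                if (witnessed == curr) {
                    break;        // we marked the task as completing
                }
                curr = witnessed; // another thread changed the priority; re-check and retry
            }
            System.out.println(priority.get()); // -1
        }
    }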
-+
-+ /**
-+ * Forces this task to be completed.
-+ * @return {@code true} if the task was cancelled, {@code false} if the task has already completed or is being completed.
-+ */
-+ public boolean cancel() {
-+ return this.exchangePriorityVolatile(PrioritizedTaskQueue.COMPLETING_PRIORITY) != PrioritizedTaskQueue.COMPLETING_PRIORITY;
-+ }
-+
-+ /**
-+ * Attempts to raise the priority to the priority level specified.
-+ * @param priority Priority specified
-+ * @return {@code true} if successful, {@code false} otherwise.
-+ */
-+ public boolean raisePriority(final int priority) {
-+ if (!PrioritizedTaskQueue.validPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority");
-+ }
-+
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if (curr == COMPLETING_PRIORITY) {
-+ return false;
-+ }
-+ if (priority >= curr) {
-+ return true;
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
-+ PrioritizedTaskQueue queue = this.queue.get();
-+ if (queue != null) {
-+ //noinspection unchecked
-+ queue.queues[priority].add(this); // silently fail on shutdown
-+ }
-+ return true;
-+ }
-+ continue;
-+ }
-+ }
-+
-+ /**
-+ * Attempts to set this task's priority level to the level specified.
-+ * @param priority Specified priority level.
-+ * @return {@code true} if successful, {@code false} if this task is completing or has completed.
-+ */
-+ public boolean updatePriority(final int priority) {
-+ if (!PrioritizedTaskQueue.validPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority");
-+ }
-+
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if (curr == COMPLETING_PRIORITY) {
-+ return false;
-+ }
-+ if (curr == priority) {
-+ return true;
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
-+ PrioritizedTaskQueue queue = this.queue.get();
-+ if (queue != null) {
-+ //noinspection unchecked
-+ queue.queues[priority].add(this); // silently fail on shutdown
-+ }
-+ return true;
-+ }
-+ continue;
-+ }
-+ }
-+
-+ void setQueue(final PrioritizedTaskQueue queue) {
-+ this.queue.set(queue);
-+ }
-+
-+ /* priority */
-+
-+ protected final int getPriorityVolatile() {
-+ return this.priority.get();
-+ }
-+
-+ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
-+ if (this.priority.compareAndSet(expect, update)) {
-+ return expect;
-+ }
-+ return this.priority.get();
-+ }
-+
-+ protected final int exchangePriorityVolatile(final int value) {
-+ return this.priority.getAndSet(value);
-+ }
-+ }
-+}
-diff --git a/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..99f49b5625cf51d6c97640553cf5c420bb6fdd36
---- /dev/null
-+++ b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
-@@ -0,0 +1,255 @@
-+package com.destroystokyo.paper.io;
-+
-+import com.mojang.logging.LogUtils;
-+import org.slf4j.Logger;
-+
-+import java.util.concurrent.ConcurrentLinkedQueue;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.concurrent.locks.LockSupport;
-+
-+@Deprecated(forRemoval = true)
-+public class QueueExecutorThread<T extends PrioritizedTaskQueue.PrioritizedTask & Runnable> extends Thread {
-+
-+ private static final Logger LOGGER = LogUtils.getLogger();
-+
-+ protected final PrioritizedTaskQueue<T> queue;
-+ protected final long spinWaitTime;
-+
-+ protected volatile boolean closed;
-+
-+ protected final AtomicBoolean parked = new AtomicBoolean();
-+
-+ protected volatile ConcurrentLinkedQueue<Thread> flushQueue = new ConcurrentLinkedQueue<>();
-+ protected volatile long flushCycles;
-+
-+ protected int lowestPriorityToPoll = PrioritizedTaskQueue.LOWEST_PRIORITY;
-+
-+ public int getLowestPriorityToPoll() {
-+ return this.lowestPriorityToPoll;
-+ }
-+
-+ public void setLowestPriorityToPoll(final int lowestPriorityToPoll) {
-+ if (this.isAlive()) {
-+ throw new IllegalStateException("Cannot set after starting");
-+ }
-+ this.lowestPriorityToPoll = lowestPriorityToPoll;
-+ }
-+
-+ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue) {
-+ this(queue, (int)(1.e6)); // 1.0ms
-+ }
-+
-+ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue, final long spinWaitTime) { // in ns (compared against System.nanoTime() deltas)
-+ this.queue = queue;
-+ this.spinWaitTime = spinWaitTime;
-+ }
-+
-+ @Override
-+ public void run() {
-+ final long spinWaitTime = this.spinWaitTime;
-+ main_loop:
-+ for (;;) {
-+ this.pollTasks(true);
-+
-+ // spinwait
-+
-+ final long start = System.nanoTime();
-+
-+ for (;;) {
-+ // If we are interrupted for any reason, park() will always return immediately. Clear the flag so that we don't needlessly use CPU in such an event.
-+ Thread.interrupted();
-+ LockSupport.parkNanos("Spinwaiting on tasks", 1000L); // 1us
-+
-+ if (this.pollTasks(true)) {
-+ // restart loop, found tasks
-+ continue main_loop;
-+ }
-+
-+ if (this.handleClose()) {
-+ return; // we're done
-+ }
-+
-+ if ((System.nanoTime() - start) >= spinWaitTime) {
-+ break;
-+ }
-+ }
-+
-+ if (this.handleClose()) {
-+ return;
-+ }
-+
-+ this.parked.set(true);
-+
-+ // We need to poll here to avoid a race condition where a thread queues a task before we set parked to true
-+ // (i.e. it would not notify us)
-+ if (this.pollTasks(true)) {
-+ this.parked.set(false);
-+ continue;
-+ }
-+
-+ if (this.handleClose()) {
-+ return;
-+ }
-+
-+ // we don't need to check parked before sleeping, but we do need to check parked in a do-while loop
-+ // LockSupport.park() can fail for any reason
-+ do {
-+ Thread.interrupted();
-+ LockSupport.park("Waiting on tasks");
-+ } while (this.parked.get());
-+ }
-+ }
-+
-+ protected boolean handleClose() {
-+ if (this.closed) {
-+ this.pollTasks(true); // this ensures we've emptied the queue
-+ this.handleFlushThreads(true);
-+ return true;
-+ }
-+ return false;
-+ }
-+
-+ protected boolean pollTasks(boolean flushTasks) {
-+ Runnable task;
-+ boolean ret = false;
-+
-+ while ((task = this.queue.poll(this.lowestPriorityToPoll)) != null) {
-+ ret = true;
-+ try {
-+ task.run();
-+ } catch (final Throwable throwable) {
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ LOGGER.error("Exception thrown from prioritized runnable task in thread '" + this.getName() + "': " + IOUtil.genericToString(task), throwable);
-+ }
-+ }
-+
-+ if (flushTasks) {
-+ this.handleFlushThreads(false);
-+ }
-+
-+ return ret;
-+ }
-+
-+ protected void handleFlushThreads(final boolean shutdown) {
-+ Thread parking;
-+ ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
-+ do {
-+ ++flushCycles; // may be plain read opaque write
-+ while ((parking = flushQueue.poll()) != null) {
-+ LockSupport.unpark(parking);
-+ }
-+ } while (this.pollTasks(false));
-+
-+ if (shutdown) {
-+ this.flushQueue = null;
-+
-+ // defend against a race condition where a flush thread double-checks right before we set to null
-+ while ((parking = flushQueue.poll()) != null) {
-+ LockSupport.unpark(parking);
-+ }
-+ }
-+ }
-+
-+ /**
-+ * Notifies this thread that a task has been added to its queue.
-+ * @return {@code true} if this thread was waiting for tasks, {@code false} if it is executing tasks
-+ */
-+ public boolean notifyTasks() {
-+ if (this.parked.get() && this.parked.getAndSet(false)) {
-+ LockSupport.unpark(this);
-+ return true;
-+ }
-+ return false;
-+ }
-+
-+ protected void queueTask(final T task) {
-+ this.queue.add(task);
-+ this.notifyTasks();
-+ }
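The spin-wait loop in run(), the parked flag, and notifyTasks() together form a park/unpark handshake; a condensed standalone sketch of that handshake follows (hypothetical names, not this class's actual code):

    import java.util.concurrent.ConcurrentLinkedQueue;
    import java.util.concurrent.atomic.AtomicBoolean;
    import java.util.concurrent.locks.LockSupport;

    final class HandshakeSketch {
        final ConcurrentLinkedQueue<Runnable> queue = new ConcurrentLinkedQueue<>();
        final AtomicBoolean parked = new AtomicBoolean();
        volatile Thread worker;

        void producerSide(final Runnable task) {
            this.queue.add(task);
            // Cheap read first; only one producer wins the getAndSet and pays for the unpark.
            if (this.parked.get() && this.parked.getAndSet(false)) {
                LockSupport.unpark(this.worker);
            }
        }

        void workerParkOnce() {
            this.parked.set(true);
            // Re-poll AFTER publishing the flag: a task queued just before the flag was set
            // would not have unparked us, so we must not block in that case.
            if (this.queue.peek() != null) {
                this.parked.set(false);
                return;
            }
            do {
                LockSupport.park("Waiting on tasks");
            } while (this.parked.get());
        }
    }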
-+
-+ /**
-+ * Waits until this thread's queue is empty.
-+ *
-+ * @throws IllegalStateException If the current thread is {@code this} thread.
-+ */
-+ public void flush() {
-+ final Thread currentThread = Thread.currentThread();
-+
-+ if (currentThread == this) {
-+ // avoid deadlock
-+ throw new IllegalStateException("Cannot flush the queue executor thread while on the queue executor thread");
-+ }
-+
-+ // order is important
-+
-+ int successes = 0;
-+ long lastCycle = -1L;
-+
-+ do {
-+ final ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
-+ if (flushQueue == null) {
-+ return;
-+ }
-+
-+ flushQueue.add(currentThread);
-+
-+ // double check flush queue
-+ if (this.flushQueue == null) {
-+ return;
-+ }
-+
-+ final long currentCycle = this.flushCycles; // may be opaque read
-+
-+ if (currentCycle == lastCycle) {
-+ Thread.yield();
-+ continue;
-+ }
-+
-+ // force response
-+ this.parked.set(false);
-+ LockSupport.unpark(this);
-+
-+ LockSupport.park("flushing queue executor thread");
-+
-+ // returns whether there are tasks queued, does not return whether there are tasks executing
-+ // this is why we cycle twice through flush (we know a pollTask call is made after a flush cycle)
-+ // we really only need to guarantee that the tasks this thread has queued have gone through, and can leave
-+ // tasks queued concurrently that are unsynchronized with this thread as undefined behavior
-+ if (this.queue.hasTasks()) {
-+ successes = 0;
-+ } else {
-+ ++successes;
-+ }
-+
-+ } while (successes != 2);
-+
-+ }
-+
-+ /**
-+ * Closes this queue executor's queue and optionally waits for it to empty.
-+ *
-+ * If wait is {@code true}, then the queue will be empty by the time this call completes.
-+ *
-+ *
-+ *
-+ * @param wait If this call is to wait until the queue is empty
-+ * @param killQueue Whether to shutdown this thread's queue
-+ * @return whether this thread shut down the queue
-+ */
-+ public boolean close(final boolean wait, final boolean killQueue) {
-+ boolean ret = !killQueue ? false : this.queue.shutdown();
-+ this.closed = true;
-+
-+ // force thread to respond to the shutdown
-+ this.parked.set(false);
-+ LockSupport.unpark(this);
-+
-+ if (wait) {
-+ this.flush();
-+ }
-+ return ret;
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java b/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..e77972c4c264100ffdd824bfa2dac58dbbc6d678
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/PlayerChunkLoader.java
-@@ -0,0 +1,1128 @@
-+package io.papermc.paper.chunk;
-+
-+import com.destroystokyo.paper.util.misc.PlayerAreaMap;
-+import com.destroystokyo.paper.util.misc.PooledLinkedHashSets;
-+import io.papermc.paper.configuration.GlobalConfiguration;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.IntervalledCounter;
-+import io.papermc.paper.util.TickThread;
-+import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
-+import it.unimi.dsi.fastutil.objects.Reference2IntOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.Reference2ObjectLinkedOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
-+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheCenterPacket;
-+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheRadiusPacket;
-+import net.minecraft.network.protocol.game.ClientboundSetSimulationDistancePacket;
-+import io.papermc.paper.util.MCUtil;
-+import net.minecraft.server.MinecraftServer;
-+import net.minecraft.server.level.*;
-+import net.minecraft.util.Mth;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.LevelChunk;
-+import org.apache.commons.lang3.mutable.MutableObject;
-+import org.bukkit.craftbukkit.entity.CraftPlayer;
-+import org.bukkit.entity.Player;
-+import java.util.ArrayDeque;
-+import java.util.ArrayList;
-+import java.util.List;
-+import java.util.TreeSet;
-+import java.util.concurrent.atomic.AtomicInteger;
-+
-+public final class PlayerChunkLoader {
-+
-+ public static final int MIN_VIEW_DISTANCE = 2;
-+ public static final int MAX_VIEW_DISTANCE = 32;
-+
-+ public static final int TICK_TICKET_LEVEL = 31;
-+ public static final int LOADED_TICKET_LEVEL = 33;
-+
-+ public static int getTickViewDistance(final Player player) {
-+ return getTickViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getTickViewDistance(final ServerPlayer player) {
-+ final ServerLevel level = (ServerLevel)player.level;
-+ final PlayerLoaderData data = level.chunkSource.chunkMap.playerChunkManager.getData(player);
-+ if (data == null) {
-+ return level.chunkSource.chunkMap.playerChunkManager.getTargetTickViewDistance();
-+ }
-+ return data.getTargetTickViewDistance();
-+ }
-+
-+ public static int getLoadViewDistance(final Player player) {
-+ return getLoadViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getLoadViewDistance(final ServerPlayer player) {
-+ final ServerLevel level = (ServerLevel)player.level;
-+ final PlayerLoaderData data = level.chunkSource.chunkMap.playerChunkManager.getData(player);
-+ if (data == null) {
-+ return level.chunkSource.chunkMap.playerChunkManager.getLoadDistance();
-+ }
-+ return data.getLoadDistance();
-+ }
-+
-+ public static int getSendViewDistance(final Player player) {
-+ return getSendViewDistance(((CraftPlayer)player).getHandle());
-+ }
-+
-+ public static int getSendViewDistance(final ServerPlayer player) {
-+ final ServerLevel level = (ServerLevel)player.level;
-+ final PlayerLoaderData data = level.chunkSource.chunkMap.playerChunkManager.getData(player);
-+ if (data == null) {
-+ return level.chunkSource.chunkMap.playerChunkManager.getTargetSendDistance();
-+ }
-+ return data.getTargetSendViewDistance();
-+ }
-+
-+ protected final ChunkMap chunkMap;
-+ protected final Reference2ObjectLinkedOpenHashMap<ServerPlayer, PlayerLoaderData> playerMap = new Reference2ObjectLinkedOpenHashMap<>(512, 0.7f);
-+ protected final ReferenceLinkedOpenHashSet<PlayerLoaderData> chunkSendQueue = new ReferenceLinkedOpenHashSet<>(512, 0.7f);
-+
-+ protected final TreeSet<PlayerLoaderData> chunkLoadQueue = new TreeSet<>((final PlayerLoaderData p1, final PlayerLoaderData p2) -> {
-+ if (p1 == p2) {
-+ return 0;
-+ }
-+
-+ final ChunkPriorityHolder holder1 = p1.loadQueue.peekFirst();
-+ final ChunkPriorityHolder holder2 = p2.loadQueue.peekFirst();
-+
-+ final int priorityCompare = Double.compare(holder1 == null ? Double.MAX_VALUE : holder1.priority, holder2 == null ? Double.MAX_VALUE : holder2.priority);
-+
-+ final int lastLoadTimeCompare = Long.compare(p1.lastChunkLoad - p2.lastChunkLoad, 0);
-+
-+ if ((holder1 == null || holder2 == null || lastLoadTimeCompare == 0 || holder1.priority < 0.0 || holder2.priority < 0.0) && priorityCompare != 0) {
-+ return priorityCompare;
-+ }
-+
-+ if (lastLoadTimeCompare != 0) {
-+ return lastLoadTimeCompare;
-+ }
-+
-+ final int idCompare = Integer.compare(p1.player.getId(), p2.player.getId());
-+
-+ if (idCompare != 0) {
-+ return idCompare;
-+ }
-+
-+ // last resort
-+ return Integer.compare(System.identityHashCode(p1), System.identityHashCode(p2));
-+ });
-+
-+ protected final TreeSet<PlayerLoaderData> chunkSendWaitQueue = new TreeSet<>((final PlayerLoaderData p1, final PlayerLoaderData p2) -> {
-+ if (p1 == p2) {
-+ return 0;
-+ }
-+
-+ final int timeCompare = Long.compare(p1.nextChunkSendTarget - p2.nextChunkSendTarget, 0);
-+ if (timeCompare != 0) {
-+ return timeCompare;
-+ }
-+
-+ final int idCompare = Integer.compare(p1.player.getId(), p2.player.getId());
-+
-+ if (idCompare != 0) {
-+ return idCompare;
-+ }
-+
-+ // last resort
-+ return Integer.compare(System.identityHashCode(p1), System.identityHashCode(p2));
-+ });
-+
-+
-+ // no throttling is applied below this VD for loading
-+
-+ /**
-+ * The chunks to be sent to players, provided they're send-ready. Send-ready means the chunk and its 1 radius neighbours are loaded.
-+ */
-+ public final PlayerAreaMap broadcastMap;
-+
-+ /**
-+ * The chunks to be brought up to send-ready status. Send-ready means the chunk and its 1 radius neighbours are loaded.
-+ */
-+ public final PlayerAreaMap loadMap;
-+
-+ /**
-+ * Areamap used only to remove tickets for send-ready chunks. Its view distance is always the load view distance + 1. Thus,
-+ * this map is always representing the chunks we are actually going to load.
-+ */
-+ public final PlayerAreaMap loadTicketCleanup;
-+
-+ /**
-+ * The chunks to be brought to ticking level. Each chunk must have its 2-radius neighbours loaded before this can happen.
-+ */
-+ public final PlayerAreaMap tickMap;
-+
-+ /**
-+ * -1 if defaulting to [load distance], else always in [2, load distance]
-+ */
-+ protected int rawSendDistance = -1;
-+
-+ /**
-+ * -1 if defaulting to [tick view distance + 1], else always in [tick view distance + 1, 32 + 1]
-+ */
-+ protected int rawLoadDistance = -1;
-+
-+ /**
-+ * Never -1, always in [2, 32]
-+ */
-+ protected int rawTickDistance = -1;
-+
-+ // methods to bridge for API
-+
-+ public int getTargetTickViewDistance() {
-+ return this.getTickDistance();
-+ }
-+
-+ public void setTargetTickViewDistance(final int distance) {
-+ this.setTickDistance(distance);
-+ }
-+
-+ public int getTargetNoTickViewDistance() {
-+ return this.getLoadDistance() - 1;
-+ }
-+
-+ public void setTargetNoTickViewDistance(final int distance) {
-+ this.setLoadDistance(distance == -1 ? -1 : distance + 1);
-+ }
-+
-+ public int getTargetSendDistance() {
-+ return this.rawSendDistance == -1 ? this.getLoadDistance() : this.rawSendDistance;
-+ }
-+
-+ public void setTargetSendDistance(final int distance) {
-+ this.setSendDistance(distance);
-+ }
-+
-+ // internal methods
-+
-+ public int getSendDistance() {
-+ final int loadDistance = this.getLoadDistance();
-+ return this.rawSendDistance == -1 ? loadDistance : Math.min(this.rawSendDistance, loadDistance);
-+ }
-+
-+ public void setSendDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Send distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + ", or -1, got: " + distance);
-+ }
-+ this.rawSendDistance = distance;
-+ }
-+
-+ public int getLoadDistance() {
-+ final int tickDistance = this.getTickDistance();
-+ return this.rawLoadDistance == -1 ? tickDistance + 1 : Math.max(tickDistance + 1, this.rawLoadDistance);
-+ }
-+
-+ public void setLoadDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Load distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + ", or -1, got: " + distance);
-+ }
-+ this.rawLoadDistance = distance;
-+ }
-+
-+ public int getTickDistance() {
-+ return this.rawTickDistance;
-+ }
-+
-+ public void setTickDistance(final int distance) {
-+ if (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE) {
-+ throw new IllegalArgumentException("View distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + ", got: " + distance);
-+ }
-+ this.rawTickDistance = distance;
-+ }
-+
-+ /*
-+ Players have 3 different types of view distance:
-+ 1. Sending view distance
-+ 2. Loading view distance
-+ 3. Ticking view distance
-+
-+ But for configuration purposes (and API) there are:
-+ 1. No-tick view distance
-+ 2. Tick view distance
-+ 3. Broadcast view distance
-+
-+ These aren't always the same as the types we represent internally.
-+
-+ Loading view distance is always max(no-tick + 1, tick + 1)
-+ - no-tick has 1 added because clients need an extra radius to render chunks
-+ - tick has 1 added because it needs an extra radius of chunks to load before they can be marked ticking
-+
-+ Loading view distance is defined as the radius of chunks that will be brought to send-ready status, which means
-+ it loads chunks in radius load-view-distance + 1.
-+
-+ The maximum value for send view distance is the load view distance. API can set it lower.
-+ */
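-+ /*
-+ Worked example (illustrative numbers, not defaults): with tick view distance = 10 and no-tick
-+ view distance = 12, load distance resolves to max(12 + 1, 10 + 1) = 13, tickets are added in
-+ radius 13 + 1 = 14, and send distance defaults to 13 unless the API caps it lower.
-+ */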
-+
-+ public PlayerChunkLoader(final ChunkMap chunkMap, final PooledLinkedHashSets<ServerPlayer> pooledHashSets) {
-+ this.chunkMap = chunkMap;
-+ this.broadcastMap = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ PlayerChunkLoader.this.onChunkLeave(player, rangeX, rangeZ);
-+ });
-+ this.loadMap = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ PlayerChunkLoader.this.isTargetedForPlayerLoad.remove(CoordinateUtils.getChunkKey(rangeX, rangeZ));
-+ });
-+ this.loadTicketCleanup = new PlayerAreaMap(pooledHashSets,
-+ null,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().removeTicketAtLevel(TicketType.PLAYER, chunkPos, LOADED_TICKET_LEVEL, chunkPos);
-+ if (PlayerChunkLoader.this.chunkTicketTracker.remove(chunkPos.toLong())) {
-+ --PlayerChunkLoader.this.concurrentChunkLoads;
-+ }
-+ });
-+ this.tickMap = new PlayerAreaMap(pooledHashSets,
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState.size() != 1) {
-+ return;
-+ }
-+ LevelChunk chunk = PlayerChunkLoader.this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(rangeX, rangeZ);
-+ if (chunk == null || !chunk.areNeighboursLoaded(2)) {
-+ return;
-+ }
-+
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ },
-+ (ServerPlayer player, int rangeX, int rangeZ, int currPosX, int currPosZ, int prevPosX, int prevPosZ,
-+ com.destroystokyo.paper.util.misc.PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> newState) -> {
-+ if (newState != null) {
-+ return;
-+ }
-+ ChunkPos chunkPos = new ChunkPos(rangeX, rangeZ);
-+ PlayerChunkLoader.this.chunkMap.level.getChunkSource().removeTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ });
-+ }
-+
-+ protected final LongOpenHashSet isTargetedForPlayerLoad = new LongOpenHashSet();
-+ protected final LongOpenHashSet chunkTicketTracker = new LongOpenHashSet();
-+
-+ public boolean isChunkNearPlayers(final int chunkX, final int chunkZ) {
-+ final PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> playersInSendRange = this.broadcastMap.getObjectsInRange(chunkX, chunkZ);
-+
-+ return playersInSendRange != null;
-+ }
-+
-+ public void onChunkPostProcessing(final int chunkX, final int chunkZ) {
-+ this.onChunkSendReady(chunkX, chunkZ);
-+ }
-+
-+ private boolean chunkNeedsPostProcessing(final int chunkX, final int chunkZ) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ final ChunkHolder chunk = this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (chunk == null) {
-+ return false;
-+ }
-+
-+ final LevelChunk levelChunk = chunk.getSendingChunk();
-+
-+ return levelChunk != null && !levelChunk.isPostProcessingDone;
-+ }
-+
-+ // rets whether the chunk is at a loaded stage that is ready to be sent to players
-+ public boolean isChunkPlayerLoaded(final int chunkX, final int chunkZ) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ final ChunkHolder chunk = this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (chunk == null) {
-+ return false;
-+ }
-+
-+ final LevelChunk levelChunk = chunk.getSendingChunk();
-+
-+ return levelChunk != null && levelChunk.isPostProcessingDone && this.isTargetedForPlayerLoad.contains(key);
-+ }
-+
-+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ, final boolean borderOnly) {
-+ return borderOnly ? this.isChunkSentBorderOnly(player, chunkX, chunkZ) : this.isChunkSent(player, chunkX, chunkZ);
-+ }
-+
-+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+ if (data == null) {
-+ return false;
-+ }
-+
-+ return data.hasSentChunk(chunkX, chunkZ);
-+ }
-+
-+ public boolean isChunkSentBorderOnly(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+ if (data == null) {
-+ return false;
-+ }
-+
-+ final boolean center = data.hasSentChunk(chunkX, chunkZ);
-+ if (!center) {
-+ return false;
-+ }
-+
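-+ // "border only": the chunk was sent, but at least one direct cardinal neighbour was not,
-+ // i.e. the chunk sits on the edge of the area the player has received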
-+ return !(data.hasSentChunk(chunkX - 1, chunkZ) && data.hasSentChunk(chunkX + 1, chunkZ) &&
-+ data.hasSentChunk(chunkX, chunkZ - 1) && data.hasSentChunk(chunkX, chunkZ + 1));
-+ }
-+
-+ protected int getMaxConcurrentChunkSends() {
-+ return GlobalConfiguration.get().chunkLoading.maxConcurrentSends;
-+ }
-+
-+ protected int getMaxChunkLoads() {
-+ double config = GlobalConfiguration.get().chunkLoading.playerMaxConcurrentLoads;
-+ double max = GlobalConfiguration.get().chunkLoading.globalMaxConcurrentLoads;
-+ return (int)Math.ceil(Math.min(config * MinecraftServer.getServer().getPlayerCount(), max <= 1.0 ? Double.MAX_VALUE : max));
-+ }
-+
-+ protected long getTargetSendPerPlayerAddend() {
-+ return GlobalConfiguration.get().chunkLoading.targetPlayerChunkSendRate <= 1.0 ? 0L : (long)Math.round(1.0e9 / GlobalConfiguration.get().chunkLoading.targetPlayerChunkSendRate);
-+ }
-+
-+ protected long getMaxSendAddend() {
-+ return GlobalConfiguration.get().chunkLoading.globalMaxChunkSendRate <= 1.0 ? 0L : (long)Math.round(1.0e9 / GlobalConfiguration.get().chunkLoading.globalMaxChunkSendRate);
-+ }
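-+
-+ // For illustration (hypothetical config values, not defaults): playerMaxConcurrentLoads = 4.0 with
-+ // 100 players and globalMaxConcurrentLoads = 500 gives getMaxChunkLoads() = ceil(min(400, 500)) = 400;
-+ // a targetPlayerChunkSendRate of 50 chunks/s gives an addend of 1.0e9 / 50 = 20,000,000ns (20ms per send).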
-+
-+ public void onChunkPlayerTickReady(final int chunkX, final int chunkZ) {
-+ final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
-+ this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, TICK_TICKET_LEVEL, chunkPos);
-+ }
-+
-+ public void onChunkSendReady(final int chunkX, final int chunkZ) {
-+ final PooledLinkedHashSets.PooledObjectLinkedOpenHashSet<ServerPlayer> playersInSendRange = this.broadcastMap.getObjectsInRange(chunkX, chunkZ);
-+
-+ if (playersInSendRange == null) {
-+ return;
-+ }
-+
-+ final Object[] rawData = playersInSendRange.getBackingSet();
-+ for (int i = 0, len = rawData.length; i < len; ++i) {
-+ final Object raw = rawData[i];
-+
-+ if (!(raw instanceof ServerPlayer)) {
-+ continue;
-+ }
-+ this.onChunkSendReady((ServerPlayer)raw, chunkX, chunkZ);
-+ }
-+ }
-+
-+ public void onChunkSendReady(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+
-+ if (data == null) {
-+ return;
-+ }
-+
-+ if (data.hasSentChunk(chunkX, chunkZ) || !this.isChunkPlayerLoaded(chunkX, chunkZ)) {
-+ // if we don't have player tickets, then the load logic will pick this up and queue to send
-+ return;
-+ }
-+
-+ if (!data.chunksToBeSent.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ // don't queue to send, we don't want the chunk
-+ return;
-+ }
-+
-+ final long playerPos = this.broadcastMap.getLastCoordinate(player);
-+ final int playerChunkX = CoordinateUtils.getChunkX(playerPos);
-+ final int playerChunkZ = CoordinateUtils.getChunkZ(playerPos);
-+ final int manhattanDistance = Math.abs(playerChunkX - chunkX) + Math.abs(playerChunkZ - chunkZ);
-+
-+ final ChunkPriorityHolder holder = new ChunkPriorityHolder(chunkX, chunkZ, manhattanDistance, 0.0);
-+ data.sendQueue.add(holder);
-+ }
-+
-+ public void onChunkLoad(final int chunkX, final int chunkZ) {
-+ if (this.chunkTicketTracker.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ --this.concurrentChunkLoads;
-+ }
-+ }
-+
-+ public void onChunkLeave(final ServerPlayer player, final int chunkX, final int chunkZ) {
-+ final PlayerLoaderData data = this.playerMap.get(player);
-+
-+ if (data == null) {
-+ return;
-+ }
-+
-+ data.unloadChunk(chunkX, chunkZ);
-+ }
-+
-+ public void addPlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot add player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+ final PlayerLoaderData data = new PlayerLoaderData(player, this);
-+ if (this.playerMap.putIfAbsent(player, data) == null) {
-+ data.update();
-+ }
-+ }
-+
-+ public void removePlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot remove player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+
-+ final PlayerLoaderData loaderData = this.playerMap.remove(player);
-+ if (loaderData == null) {
-+ return;
-+ }
-+ loaderData.remove();
-+ this.chunkLoadQueue.remove(loaderData);
-+ this.chunkSendQueue.remove(loaderData);
-+ this.chunkSendWaitQueue.remove(loaderData);
-+ synchronized (this.sendingChunkCounts) {
-+ final int count = this.sendingChunkCounts.removeInt(loaderData);
-+ if (count != 0) {
-+ concurrentChunkSends.getAndAdd(-count);
-+ }
-+ }
-+ }
-+
-+ public void updatePlayer(final ServerPlayer player) {
-+ TickThread.ensureTickThread("Cannot update player async");
-+ if (!player.isRealPlayer) {
-+ return;
-+ }
-+ final PlayerLoaderData loaderData = this.playerMap.get(player);
-+ if (loaderData != null) {
-+ loaderData.update();
-+ }
-+ }
-+
-+ public PlayerLoaderData getData(final ServerPlayer player) {
-+ return this.playerMap.get(player);
-+ }
-+
-+ public void tick() {
-+ TickThread.ensureTickThread("Cannot tick async");
-+ for (final PlayerLoaderData data : this.playerMap.values()) {
-+ data.update();
-+ }
-+ this.tickMidTick();
-+ }
-+
-+ protected static final AtomicInteger concurrentChunkSends = new AtomicInteger();
-+ protected final Reference2IntOpenHashMap<PlayerLoaderData> sendingChunkCounts = new Reference2IntOpenHashMap<>();
-+ private static long nextChunkSend;
-+ private void trySendChunks() {
-+ final long time = System.nanoTime();
-+ if (nextChunkSend - time > 0) {
-+ return;
-+ }
-+ // drain entries from wait queue
-+ while (!this.chunkSendWaitQueue.isEmpty()) {
-+ final PlayerLoaderData data = this.chunkSendWaitQueue.first();
-+
-+ if (data.nextChunkSendTarget - time > 0) {
-+ break;
-+ }
-+
-+ this.chunkSendWaitQueue.pollFirst();
-+
-+ this.chunkSendQueue.add(data);
-+ }
-+
-+ if (this.chunkSendQueue.isEmpty()) {
-+ return;
-+ }
-+
-+ final int maxSends = this.getMaxConcurrentChunkSends();
-+ final long nextPlayerDeadline = this.getTargetSendPerPlayerAddend() + time;
-+ for (;;) {
-+ if (this.chunkSendQueue.isEmpty()) {
-+ break;
-+ }
-+ final int currSends = concurrentChunkSends.get();
-+ if (currSends >= maxSends) {
-+ break;
-+ }
-+
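-+ // reserve a send slot with CAS so concurrently completing sends (which decrement the counter
-+ // from the connection thread) cannot race us past maxSends; on CAS failure, retry with a fresh read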
-+ if (!concurrentChunkSends.compareAndSet(currSends, currSends + 1)) {
-+ continue;
-+ }
-+
-+ // send chunk
-+
-+ final PlayerLoaderData data = this.chunkSendQueue.removeFirst();
-+
-+ final ChunkPriorityHolder queuedSend = data.sendQueue.pollFirst();
-+ if (queuedSend == null) {
-+ concurrentChunkSends.getAndDecrement(); // we never sent, so decrease
-+ // stop iterating over players who have nothing to send
-+ if (this.chunkSendQueue.isEmpty()) {
-+ // nothing left
-+ break;
-+ }
-+ continue;
-+ }
-+
-+ if (!this.isChunkPlayerLoaded(queuedSend.chunkX, queuedSend.chunkZ)) {
-+ throw new IllegalStateException();
-+ }
-+
-+ data.nextChunkSendTarget = nextPlayerDeadline;
-+ this.chunkSendWaitQueue.add(data);
-+
-+ synchronized (this.sendingChunkCounts) {
-+ this.sendingChunkCounts.addTo(data, 1);
-+ }
-+
-+ data.sendChunk(queuedSend.chunkX, queuedSend.chunkZ, () -> {
-+ synchronized (this.sendingChunkCounts) {
-+ final int count = this.sendingChunkCounts.getInt(data);
-+ if (count == 0) {
-+ // disconnected, so we don't need to decrement: it will be decremented for us
-+ return;
-+ }
-+ if (count == 1) {
-+ this.sendingChunkCounts.removeInt(data);
-+ } else {
-+ this.sendingChunkCounts.put(data, count - 1);
-+ }
-+ }
-+
-+ concurrentChunkSends.getAndDecrement();
-+ });
-+
-+ nextChunkSend = this.getMaxSendAddend() + time;
-+ if (nextChunkSend - time > 0) {
-+ break;
-+ }
-+ }
-+ }
-+
-+ protected int concurrentChunkLoads;
-+ // this interval prevents bursting a lot of chunk loads
-+ protected static final IntervalledCounter TICKET_ADDITION_COUNTER_SHORT = new IntervalledCounter((long)(1.0e6 * 50.0)); // 50ms
-+ // this interval ensures the rate is kept between ticks correctly
-+ protected static final IntervalledCounter TICKET_ADDITION_COUNTER_LONG = new IntervalledCounter((long)(1.0e6 * 1000.0)); // 1000ms
-+ private void tryLoadChunks() {
-+ if (this.chunkLoadQueue.isEmpty()) {
-+ return;
-+ }
-+
-+ final int maxLoads = this.getMaxChunkLoads();
-+ final long time = System.nanoTime();
-+ boolean updatedCounters = false;
-+ for (;;) {
-+ final PlayerLoaderData data = this.chunkLoadQueue.pollFirst();
-+
-+ data.lastChunkLoad = time;
-+
-+ final ChunkPriorityHolder queuedLoad = data.loadQueue.peekFirst();
-+ if (queuedLoad == null) {
-+ if (this.chunkLoadQueue.isEmpty()) {
-+ break;
-+ }
-+ continue;
-+ }
-+
-+ if (!updatedCounters) {
-+ updatedCounters = true;
-+ TICKET_ADDITION_COUNTER_SHORT.updateCurrentTime(time);
-+ TICKET_ADDITION_COUNTER_LONG.updateCurrentTime(time);
-+ data.ticketAdditionCounterShort.updateCurrentTime(time);
-+ data.ticketAdditionCounterLong.updateCurrentTime(time);
-+ }
-+
-+ if (this.isChunkPlayerLoaded(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // already loaded!
-+ data.loadQueue.pollFirst(); // already loaded so we just skip
-+ this.chunkLoadQueue.add(data);
-+
-+ // ensure the chunk is queued to send
-+ this.onChunkSendReady(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ continue;
-+ }
-+
-+ final long chunkKey = CoordinateUtils.getChunkKey(queuedLoad.chunkX, queuedLoad.chunkZ);
-+
-+ final double priority = queuedLoad.priority;
-+ // while we do need to rate limit chunk loads, the logic for sending chunks requires that tickets are present.
-+ // when chunks are already loaded (e.g. spawn chunks) but do not have this player's tickets, they would have to
-+ // wait behind the load queue. To avoid this problem, we check early here whether tickets are required to load
-+ // the chunk - if they aren't required, the load bypasses the limiter system.
-+ boolean unloadedTargetChunk = false;
-+ unloaded_check:
-+ for (int dz = -1; dz <= 1; ++dz) {
-+ for (int dx = -1; dx <= 1; ++dx) {
-+ final int offX = queuedLoad.chunkX + dx;
-+ final int offZ = queuedLoad.chunkZ + dz;
-+ if (this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(offX, offZ) == null) {
-+ unloadedTargetChunk = true;
-+ break unloaded_check;
-+ }
-+ }
-+ }
-+ if (unloadedTargetChunk && priority >= 0.0) {
-+ // priority >= 0.0 implies rate limited chunks
-+
-+ final int currentChunkLoads = this.concurrentChunkLoads;
-+ if (currentChunkLoads >= maxLoads || (GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate > 0 && (TICKET_ADDITION_COUNTER_SHORT.getRate() >= GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate || TICKET_ADDITION_COUNTER_LONG.getRate() >= GlobalConfiguration.get().chunkLoading.globalMaxChunkLoadRate))
-+ || (GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate > 0.0 && (data.ticketAdditionCounterShort.getRate() >= GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate || data.ticketAdditionCounterLong.getRate() >= GlobalConfiguration.get().chunkLoading.playerMaxChunkLoadRate))) {
-+ // don't poll, we didn't load it
-+ this.chunkLoadQueue.add(data);
-+ break;
-+ }
-+ }
-+
-+ // can only poll after we decide to load
-+ data.loadQueue.pollFirst();
-+
-+ // now that we've polled we can re-add to load queue
-+ this.chunkLoadQueue.add(data);
-+
-+ // add necessary tickets to load chunk up to send-ready
-+ for (int dz = -1; dz <= 1; ++dz) {
-+ for (int dx = -1; dx <= 1; ++dx) {
-+ final int offX = queuedLoad.chunkX + dx;
-+ final int offZ = queuedLoad.chunkZ + dz;
-+ final ChunkPos chunkPos = new ChunkPos(offX, offZ);
-+
-+ this.chunkMap.level.getChunkSource().addTicketAtLevel(TicketType.PLAYER, chunkPos, LOADED_TICKET_LEVEL, chunkPos);
-+ if (this.chunkMap.level.getChunkSource().getChunkAtIfLoadedMainThreadNoCache(offX, offZ) != null) {
-+ continue;
-+ }
-+
-+ if (priority > 0.0 && this.chunkTicketTracker.add(CoordinateUtils.getChunkKey(offX, offZ))) {
-+ // won't reach here if unloadedTargetChunk is false
-+ ++this.concurrentChunkLoads;
-+ TICKET_ADDITION_COUNTER_SHORT.addTime(time);
-+ TICKET_ADDITION_COUNTER_LONG.addTime(time);
-+ data.ticketAdditionCounterShort.addTime(time);
-+ data.ticketAdditionCounterLong.addTime(time);
-+ }
-+ }
-+ }
-+
-+ // mark that we've added tickets here
-+ this.isTargetedForPlayerLoad.add(chunkKey);
-+
-+ // it's possible all we needed was the player tickets to queue up the send.
-+ if (this.isChunkPlayerLoaded(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // yup, all we needed.
-+ this.onChunkSendReady(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ } else if (this.chunkNeedsPostProcessing(queuedLoad.chunkX, queuedLoad.chunkZ)) {
-+ // requires post processing
-+ this.chunkMap.mainThreadExecutor.execute(() -> {
-+ final long key = CoordinateUtils.getChunkKey(queuedLoad.chunkX, queuedLoad.chunkZ);
-+ final ChunkHolder holder = PlayerChunkLoader.this.chunkMap.getVisibleChunkIfPresent(key);
-+
-+ if (holder == null) {
-+ return;
-+ }
-+
-+ final LevelChunk chunk = holder.getSendingChunk();
-+
-+ if (chunk != null && !chunk.isPostProcessingDone) {
-+ chunk.postProcessGeneration();
-+ }
-+ });
-+ }
-+ }
-+ }
-+
-+ public void tickMidTick() {
-+ // try to send more chunks
-+ this.trySendChunks();
-+
-+ // try to queue more chunks to load
-+ this.tryLoadChunks();
-+ }
-+
-+ static final class ChunkPriorityHolder {
-+ public final int chunkX;
-+ public final int chunkZ;
-+ public final int manhattanDistanceToPlayer;
-+ public final double priority;
-+
-+ public ChunkPriorityHolder(final int chunkX, final int chunkZ, final int manhattanDistanceToPlayer, final double priority) {
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.manhattanDistanceToPlayer = manhattanDistanceToPlayer;
-+ this.priority = priority;
-+ }
-+ }
-+
-+ public static final class PlayerLoaderData {
-+
-+ protected static final float FOV = 110.0f;
-+ protected static final double PRIORITISED_DISTANCE = 12.0 * 16.0;
-+
-+ // Player max sprint speed is approximately 8m/s; this threshold corresponds to 10 blocks/s,
-+ // expressed as a squared per-tick distance to match getDeltaMovement().horizontalDistanceSqr()
-+ protected static final double LOOK_PRIORITY_SPEED_THRESHOLD = (10.0/20.0) * (10.0/20.0);
-+ protected static final double LOOK_PRIORITY_YAW_DELTA_RECALC_THRESHOLD = 3.0f;
-+
-+ protected double lastLocX = Double.NEGATIVE_INFINITY;
-+ protected double lastLocZ = Double.NEGATIVE_INFINITY;
-+
-+ protected int lastChunkX = Integer.MIN_VALUE;
-+ protected int lastChunkZ = Integer.MIN_VALUE;
-+
-+ // this is corrected so that 0 is along the positive x-axis
-+ protected float lastYaw = Float.NEGATIVE_INFINITY;
-+
-+ protected int lastSendDistance = Integer.MIN_VALUE;
-+ protected int lastLoadDistance = Integer.MIN_VALUE;
-+ protected int lastTickDistance = Integer.MIN_VALUE;
-+ protected boolean usingLookingPriority;
-+
-+ protected final ServerPlayer player;
-+ protected final PlayerChunkLoader loader;
-+
-+ // warning: modifications of this field must be aware that the loadQueue inside PlayerChunkLoader uses this field
-+ // in a comparator!
-+ protected final ArrayDeque<ChunkPriorityHolder> loadQueue = new ArrayDeque<>();
-+ protected final LongOpenHashSet sentChunks = new LongOpenHashSet();
-+ protected final LongOpenHashSet chunksToBeSent = new LongOpenHashSet();
-+
-+ protected final TreeSet<ChunkPriorityHolder> sendQueue = new TreeSet<>((final ChunkPriorityHolder p1, final ChunkPriorityHolder p2) -> {
-+ final int distanceCompare = Integer.compare(p1.manhattanDistanceToPlayer, p2.manhattanDistanceToPlayer);
-+ if (distanceCompare != 0) {
-+ return distanceCompare;
-+ }
-+
-+ final int coordinateXCompare = Integer.compare(p1.chunkX, p2.chunkX);
-+ if (coordinateXCompare != 0) {
-+ return coordinateXCompare;
-+ }
-+
-+ return Integer.compare(p1.chunkZ, p2.chunkZ);
-+ });
-+
-+ protected int sendViewDistance = -1;
-+ protected int loadViewDistance = -1;
-+ protected int tickViewDistance = -1;
-+
-+ protected long nextChunkSendTarget;
-+
-+ // this interval prevents bursting a lot of chunk loads
-+ protected final IntervalledCounter ticketAdditionCounterShort = new IntervalledCounter((long)(1.0e6 * 50.0)); // 50ms
-+ // this ensures the rate is kept between ticks correctly
-+ protected final IntervalledCounter ticketAdditionCounterLong = new IntervalledCounter((long)(1.0e6 * 1000.0)); // 1000ms
-+
-+ public long lastChunkLoad;
-+
-+ public PlayerLoaderData(final ServerPlayer player, final PlayerChunkLoader loader) {
-+ this.player = player;
-+ this.loader = loader;
-+ }
-+
-+ // these view distance methods are for api
-+ public int getTargetSendViewDistance() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ final int loadViewDistance = Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ final int clientViewDistance = this.getClientViewDistance();
-+ final int sendViewDistance = Math.min(loadViewDistance, this.sendViewDistance == -1 ? (!GlobalConfiguration.get().chunkLoading.autoconfigSendDistance || clientViewDistance == -1 ? this.loader.getSendDistance() : clientViewDistance + 1) : this.sendViewDistance);
-+ return sendViewDistance;
-+ }
-+
-+ public void setTargetSendViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE + 1)) {
-+ throw new IllegalArgumentException("Send view distance must be a number between " + MIN_VIEW_DISTANCE + " and " + (MAX_VIEW_DISTANCE + 1) + " or -1, got: " + distance);
-+ }
-+ this.sendViewDistance = distance;
-+ }
-+
-+ public int getTargetNoTickViewDistance() {
-+ return (this.loadViewDistance == -1 ? this.getLoadDistance() : this.loadViewDistance) - 1;
-+ }
-+
-+ public void setTargetNoTickViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE)) {
-+ throw new IllegalArgumentException("Simulation distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + " or -1, got: " + distance);
-+ }
-+ this.loadViewDistance = distance == -1 ? -1 : distance + 1;
-+ }
-+
-+ public int getTargetTickViewDistance() {
-+ return this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ }
-+
-+ public void setTargetTickViewDistance(final int distance) {
-+ if (distance != -1 && (distance < MIN_VIEW_DISTANCE || distance > MAX_VIEW_DISTANCE)) {
-+ throw new IllegalArgumentException("View distance must be a number between " + MIN_VIEW_DISTANCE + " and " + MAX_VIEW_DISTANCE + " or -1, got: " + distance);
-+ }
-+ this.tickViewDistance = distance;
-+ }
-+
-+ protected int getLoadDistance() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+
-+ return Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ }
-+
-+ public boolean hasSentChunk(final int chunkX, final int chunkZ) {
-+ return this.sentChunks.contains(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+
-+ public void sendChunk(final int chunkX, final int chunkZ, final Runnable onChunkSend) {
-+ if (this.sentChunks.add(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ this.player.getLevel().getChunkSource().chunkMap.updateChunkTracking(this.player,
-+ new ChunkPos(chunkX, chunkZ), new MutableObject<>(), false, true); // unloaded, loaded
-+ this.player.connection.connection.execute(onChunkSend);
-+ } else {
-+ throw new IllegalStateException();
-+ }
-+ }
-+
-+ public void unloadChunk(final int chunkX, final int chunkZ) {
-+ if (this.sentChunks.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
-+ this.player.getLevel().getChunkSource().chunkMap.updateChunkTracking(this.player,
-+ new ChunkPos(chunkX, chunkZ), null, true, false); // unloaded, loaded
-+ }
-+ }
-+
-+ protected static boolean wantChunkLoaded(final int centerX, final int centerZ, final int chunkX, final int chunkZ,
-+ final int sendRadius) {
-+ // expect sendRadius to be = 1 + target viewable radius
-+ return ChunkMap.isChunkInRange(chunkX, chunkZ, centerX, centerZ, sendRadius);
-+ }
-+
-+ protected static boolean triangleIntersects(final double p1x, final double p1z, // triangle point
-+ final double p2x, final double p2z, // triangle point
-+ final double p3x, final double p3z, // triangle point
-+
-+ final double targetX, final double targetZ) { // point
-+ // from barycentric coordinates:
-+ // targetX = a*p1x + b*p2x + c*p3x
-+ // targetZ = a*p1z + b*p2z + c*p3z
-+ // 1.0 = a*1.0 + b*1.0 + c*1.0
-+ // where a, b, c >= 0.0
-+ // so, if any of a, b, c are less-than zero then there is no intersection.
-+
-+ // d = ((p2z - p3z)(p1x - p3x) + (p3x - p2x)(p1z - p3z))
-+ // a = ((p2z - p3z)(targetX - p3x) + (p3x - p2x)(targetZ - p3z)) / d
-+ // b = ((p3z - p1z)(targetX - p3x) + (p1x - p3x)(targetZ - p3z)) / d
-+ // c = 1.0 - a - b
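-+ // e.g. triangle (0,0), (10,0), (0,10) with target (2,2): d = 100, a = 0.6, b = 0.2, c = 0.2,
-+ // all in [0, 1], so the target is inside the triangle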
-+
-+ final double d = (p2z - p3z)*(p1x - p3x) + (p3x - p2x)*(p1z - p3z);
-+ final double a = ((p2z - p3z)*(targetX - p3x) + (p3x - p2x)*(targetZ - p3z)) / d;
-+
-+ if (a < 0.0 || a > 1.0) {
-+ return false;
-+ }
-+
-+ final double b = ((p3z - p1z)*(targetX - p3x) + (p1x - p3x)*(targetZ - p3z)) / d;
-+ if (b < 0.0 || b > 1.0) {
-+ return false;
-+ }
-+
-+ final double c = 1.0 - a - b;
-+
-+ return c >= 0.0 && c <= 1.0;
-+ }
-+
-+ public void remove() {
-+ this.loader.broadcastMap.remove(this.player);
-+ this.loader.loadMap.remove(this.player);
-+ this.loader.loadTicketCleanup.remove(this.player);
-+ this.loader.tickMap.remove(this.player);
-+ }
-+
-+ protected int getClientViewDistance() {
-+ return this.player.clientViewDistance == null ? -1 : Math.max(0, this.player.clientViewDistance.intValue());
-+ }
-+
-+ public void update() {
-+ final int tickViewDistance = this.tickViewDistance == -1 ? this.loader.getTickDistance() : this.tickViewDistance;
-+ // load view cannot be less-than tick view + 1
-+ final int loadViewDistance = Math.max(tickViewDistance + 1, this.loadViewDistance == -1 ? this.loader.getLoadDistance() : this.loadViewDistance);
-+ // send view cannot be greater-than load view
-+ final int clientViewDistance = this.getClientViewDistance();
-+ final int sendViewDistance = Math.min(loadViewDistance, this.sendViewDistance == -1 ? (!GlobalConfiguration.get().chunkLoading.autoconfigSendDistance || clientViewDistance == -1 ? this.loader.getSendDistance() : clientViewDistance + 1) : this.sendViewDistance);
-+
-+ final double posX = this.player.getX();
-+ final double posZ = this.player.getZ();
-+ final float yaw = MCUtil.normalizeYaw(this.player.getYRot() + 90.0f); // mc yaw 0 is along the positive z axis, but obviously this is really dumb - offset so we are at positive x-axis
-+
-+ // in general, we really only want to prioritise chunks in front if we know we're moving pretty fast into them.
-+ final boolean useLookPriority = GlobalConfiguration.get().chunkLoading.enableFrustumPriority && (this.player.getDeltaMovement().horizontalDistanceSqr() > LOOK_PRIORITY_SPEED_THRESHOLD ||
-+ this.player.getAbilities().flying);
-+
-+ // make sure we're in the send queue
-+ this.loader.chunkSendWaitQueue.add(this);
-+
-+ if (
-+ // has view distance stayed the same?
-+ sendViewDistance == this.lastSendDistance
-+ && loadViewDistance == this.lastLoadDistance
-+ && tickViewDistance == this.lastTickDistance
-+
-+ && (this.usingLookingPriority ? (
-+ // has our block stayed the same (this also accounts for chunk change)?
-+ Mth.floor(this.lastLocX) == Mth.floor(posX)
-+ && Mth.floor(this.lastLocZ) == Mth.floor(posZ)
-+ ) : (
-+ // has our chunk stayed the same
-+ (Mth.floor(this.lastLocX) >> 4) == (Mth.floor(posX) >> 4)
-+ && (Mth.floor(this.lastLocZ) >> 4) == (Mth.floor(posZ) >> 4)
-+ ))
-+
-+ // has our decision about look priority changed?
-+ && this.usingLookingPriority == useLookPriority
-+
-+ // if we are currently using look priority, has our yaw stayed within recalc threshold?
-+ && (!this.usingLookingPriority || Math.abs(yaw - this.lastYaw) <= LOOK_PRIORITY_YAW_DELTA_RECALC_THRESHOLD)
-+ ) {
-+ // nothing we care about changed, so we're not re-calculating
-+ return;
-+ }
-+
-+ final int centerChunkX = Mth.floor(posX) >> 4;
-+ final int centerChunkZ = Mth.floor(posZ) >> 4;
-+
-+ final boolean needsChunkCenterUpdate = (centerChunkX != this.lastChunkX) || (centerChunkZ != this.lastChunkZ);
-+ this.loader.broadcastMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, sendViewDistance);
-+ this.loader.loadMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, loadViewDistance);
-+ this.loader.loadTicketCleanup.addOrUpdate(this.player, centerChunkX, centerChunkZ, loadViewDistance + 1);
-+ this.loader.tickMap.addOrUpdate(this.player, centerChunkX, centerChunkZ, tickViewDistance);
-+
-+ if (sendViewDistance != this.lastSendDistance) {
-+ // update the view radius for client
-+ // note that this should be after the map calls because the client won't expect unload calls for chunks
-+ // outside its view distance, and it's possible we decreased the view distance here
-+ this.player.connection.send(new ClientboundSetChunkCacheRadiusPacket(sendViewDistance));
-+ }
-+ if (tickViewDistance != this.lastTickDistance) {
-+ this.player.connection.send(new ClientboundSetSimulationDistancePacket(tickViewDistance));
-+ }
-+
-+ this.lastLocX = posX;
-+ this.lastLocZ = posZ;
-+ this.lastYaw = yaw;
-+ this.lastSendDistance = sendViewDistance;
-+ this.lastLoadDistance = loadViewDistance;
-+ this.lastTickDistance = tickViewDistance;
-+ this.usingLookingPriority = useLookPriority;
-+
-+ this.lastChunkX = centerChunkX;
-+ this.lastChunkZ = centerChunkZ;
-+
-+ // points for player "view" triangle:
-+
-+ // obviously, the player pos is a vertex
-+ final double p1x = posX;
-+ final double p1z = posZ;
-+
-+ // to the left of the looking direction
-+ final double p2x = PRIORITISED_DISTANCE * Math.cos(Math.toRadians(yaw + (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1x; // offset vector
-+ final double p2z = PRIORITISED_DISTANCE * Math.sin(Math.toRadians(yaw + (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1z; // offset vector
-+
-+ // to the right of the looking direction
-+ final double p3x = PRIORITISED_DISTANCE * Math.cos(Math.toRadians(yaw - (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1x; // offset vector
-+ final double p3z = PRIORITISED_DISTANCE * Math.sin(Math.toRadians(yaw - (double)(FOV / 2.0))) // calculate rotated vector
-+ + p1z; // offset vector
-+
-+ // now that we have all of our points, we can recalculate the load queue
-+
-+ final List<ChunkPriorityHolder> loadQueue = new ArrayList<>();
-+
-+ // clear send queue, we are re-sorting
-+ this.sendQueue.clear();
-+ // clear chunk want set, vd/position might have changed
-+ this.chunksToBeSent.clear();
-+
-+ final int searchViewDistance = Math.max(loadViewDistance, sendViewDistance);
-+
-+ for (int dx = -searchViewDistance; dx <= searchViewDistance; ++dx) {
-+ for (int dz = -searchViewDistance; dz <= searchViewDistance; ++dz) {
-+ final int chunkX = dx + centerChunkX;
-+ final int chunkZ = dz + centerChunkZ;
-+ final int squareDistance = Math.max(Math.abs(dx), Math.abs(dz));
-+ final boolean sendChunk = squareDistance <= sendViewDistance && wantChunkLoaded(centerChunkX, centerChunkZ, chunkX, chunkZ, sendViewDistance);
-+
-+ if (this.hasSentChunk(chunkX, chunkZ)) {
-+ // already sent (which means it is also loaded)
-+ if (!sendChunk) {
-+ // have sent the chunk, but don't want it anymore
-+ // unload it now
-+ this.unloadChunk(chunkX, chunkZ);
-+ }
-+ continue;
-+ }
-+
-+ final boolean loadChunk = squareDistance <= loadViewDistance;
-+
-+ final boolean prioritised = useLookPriority && triangleIntersects(
-+ // prioritisation triangle
-+ p1x, p1z, p2x, p2z, p3x, p3z,
-+
-+ // center of chunk
-+ (double)((chunkX << 4) | 8), (double)((chunkZ << 4) | 8)
-+ );
-+
-+ final int manhattanDistance = Math.abs(dx) + Math.abs(dz);
-+
-+ final double priority;
-+
-+ if (squareDistance <= GlobalConfiguration.get().chunkLoading.minLoadRadius) {
-+ // priority should be negative, and we also want to order it from center outwards
-+ // so we want (0,0) to be the smallest, and (minLoadRadius,minLoadRadius) to be the greatest
-+ priority = -((2 * GlobalConfiguration.get().chunkLoading.minLoadRadius + 1) - manhattanDistance);
-+ } else {
-+ if (prioritised) {
-+ // we don't prioritise these chunks absolutely above others (we only scale their distance down),
-+ // because we still want some chunks behind the player to load in case they change direction
-+ priority = (double)manhattanDistance / 6.0;
-+ } else {
-+ priority = (double)manhattanDistance;
-+ }
-+ }
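-+ // e.g. with minLoadRadius = 2: a chunk at manhattan distance 1 gets priority -(2*2 + 1 - 1) = -4
-+ // (negative, so never throttled, and loading proceeds inside-out); outside that radius a chunk at
-+ // distance 12 gets 12.0, or 12 / 6 = 2.0 when it falls in the look-priority triangle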
-+
-+ final ChunkPriorityHolder holder = new ChunkPriorityHolder(chunkX, chunkZ, manhattanDistance, priority);
-+
-+ if (!this.loader.isChunkPlayerLoaded(chunkX, chunkZ)) {
-+ if (loadChunk) {
-+ loadQueue.add(holder);
-+ if (sendChunk) {
-+ this.chunksToBeSent.add(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ }
-+ } else {
-+ // loaded but not sent: so queue it!
-+ if (sendChunk) {
-+ this.sendQueue.add(holder);
-+ }
-+ }
-+ }
-+ }
-+
-+ loadQueue.sort((final ChunkPriorityHolder p1, final ChunkPriorityHolder p2) -> {
-+ return Double.compare(p1.priority, p2.priority);
-+ });
-+
-+ // we're modifying loadQueue, must remove
-+ this.loader.chunkLoadQueue.remove(this);
-+
-+ this.loadQueue.clear();
-+ this.loadQueue.addAll(loadQueue);
-+
-+ // must re-add
-+ this.loader.chunkLoadQueue.add(this);
-+
-+ // update the chunk center
-+ // this must be done last so that the client does not ignore any of our unload chunk packets
-+ if (needsChunkCenterUpdate) {
-+ this.player.connection.send(new ClientboundSetChunkCacheCenterPacket(centerChunkX, centerChunkZ));
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
-index 8a5e93961dac4d87c81c0e70b6f4124a1f1d2556..0dc94dec1317b3f86d38074c6cbe41ab828cab1d 100644
---- a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
-+++ b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
-@@ -31,191 +31,41 @@ public final class ChunkSystem {
- }
-
- public static void scheduleChunkTask(final ServerLevel level, final int chunkX, final int chunkZ, final Runnable run, final PrioritisedExecutor.Priority priority) {
-- level.chunkSource.mainThreadProcessor.execute(run);
-+ level.chunkTaskScheduler.scheduleChunkTask(chunkX, chunkZ, run, priority); // Paper - rewrite chunk system
- }
-
- public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final boolean gen,
- final ChunkStatus toStatus, final boolean addTicket, final PrioritisedExecutor.Priority priority,
- final Consumer<ChunkAccess> onComplete) {
-- if (gen) {
-- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-- return;
-- }
-- scheduleChunkLoad(level, chunkX, chunkZ, ChunkStatus.EMPTY, addTicket, priority, (final ChunkAccess chunk) -> {
-- if (chunk == null) {
-- onComplete.accept(null);
-- } else {
-- if (chunk.getStatus().isOrAfter(toStatus)) {
-- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-- } else {
-- onComplete.accept(null);
-- }
-- }
-- });
-+ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, gen, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
- }
-
-- static final TicketType<Long> CHUNK_LOAD = TicketType.create("chunk_load", Long::compareTo);
--
-- private static long chunkLoadCounter = 0L;
-+ // Paper - rewrite chunk system
- public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final ChunkStatus toStatus,
- final boolean addTicket, final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
-- if (!Bukkit.isPrimaryThread()) {
-- scheduleChunkTask(level, chunkX, chunkZ, () -> {
-- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-- }, priority);
-- return;
-- }
--
-- final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
-- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
-- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
--
-- if (addTicket) {
-- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
-- }
-- level.chunkSource.runDistanceManagerUpdates();
--
-- final Consumer<ChunkAccess> loadCallback = (final ChunkAccess chunk) -> {
-- try {
-- if (onComplete != null) {
-- onComplete.accept(chunk);
-- }
-- } catch (final ThreadDeath death) {
-- throw death;
-- } catch (final Throwable thr) {
-- LOGGER.error("Exception handling chunk load callback", thr);
-- SneakyThrow.sneaky(thr);
-- } finally {
-- if (addTicket) {
-- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
-- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
-- }
-- }
-- };
--
-- final ChunkHolder holder = level.chunkSource.chunkMap.getUpdatingChunkIfPresent(CoordinateUtils.getChunkKey(chunkX, chunkZ));
--
-- if (holder == null || holder.getTicketLevel() > minLevel) {
-- loadCallback.accept(null);
-- return;
-- }
--
-- final CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> loadFuture = holder.getOrScheduleFuture(toStatus, level.chunkSource.chunkMap);
--
-- if (loadFuture.isDone()) {
-- loadCallback.accept(loadFuture.join().left().orElse(null));
-- return;
-- }
--
-- loadFuture.whenCompleteAsync((final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
-- if (thr != null) {
-- loadCallback.accept(null);
-- return;
-- }
-- loadCallback.accept(either.left().orElse(null));
-- }, (final Runnable r) -> {
-- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
-- });
-+ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
- }
-
- public static void scheduleTickingState(final ServerLevel level, final int chunkX, final int chunkZ,
- final ChunkHolder.FullChunkStatus toStatus, final boolean addTicket,
- final PrioritisedExecutor.Priority priority, final Consumer<LevelChunk> onComplete) {
-- if (toStatus == ChunkHolder.FullChunkStatus.INACCESSIBLE) {
-- throw new IllegalArgumentException("Cannot wait for INACCESSIBLE status");
-- }
--
-- if (!Bukkit.isPrimaryThread()) {
-- scheduleChunkTask(level, chunkX, chunkZ, () -> {
-- scheduleTickingState(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-- }, priority);
-- return;
-- }
--
-- final int minLevel = 33 - (toStatus.ordinal() - 1);
-- final int radius = toStatus.ordinal() - 1;
-- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
-- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
--
-- if (addTicket) {
-- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
-- }
-- level.chunkSource.runDistanceManagerUpdates();
--
-- final Consumer<LevelChunk> loadCallback = (final LevelChunk chunk) -> {
-- try {
-- if (onComplete != null) {
-- onComplete.accept(chunk);
-- }
-- } catch (final ThreadDeath death) {
-- throw death;
-- } catch (final Throwable thr) {
-- LOGGER.error("Exception handling chunk load callback", thr);
-- SneakyThrow.sneaky(thr);
-- } finally {
-- if (addTicket) {
-- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
-- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
-- }
-- }
-- };
--
-- final ChunkHolder holder = level.chunkSource.chunkMap.getUpdatingChunkIfPresent(CoordinateUtils.getChunkKey(chunkX, chunkZ));
--
-- if (holder == null || holder.getTicketLevel() > minLevel) {
-- loadCallback.accept(null);
-- return;
-- }
--
-- final CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> tickingState;
-- switch (toStatus) {
-- case BORDER: {
-- tickingState = holder.getFullChunkFuture();
-- break;
-- }
-- case TICKING: {
-- tickingState = holder.getTickingChunkFuture();
-- break;
-- }
-- case ENTITY_TICKING: {
-- tickingState = holder.getEntityTickingChunkFuture();
-- break;
-- }
-- default: {
-- throw new IllegalStateException("Cannot reach here");
-- }
-- }
--
-- if (tickingState.isDone()) {
-- loadCallback.accept(tickingState.join().left().orElse(null));
-- return;
-- }
--
-- tickingState.whenCompleteAsync((final Either<LevelChunk, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
-- if (thr != null) {
-- loadCallback.accept(null);
-- return;
-- }
-- loadCallback.accept(either.left().orElse(null));
-- }, (final Runnable r) -> {
-- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
-- });
-+ level.chunkTaskScheduler.scheduleTickingState(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
- }
-
- public static List<ChunkHolder> getVisibleChunkHolders(final ServerLevel level) {
-- return new ArrayList<>(level.chunkSource.chunkMap.visibleChunkMap.values());
-+ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
- }
-
- public static List<ChunkHolder> getUpdatingChunkHolders(final ServerLevel level) {
-- return new ArrayList<>(level.chunkSource.chunkMap.updatingChunkMap.values());
-+ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
- }
-
- public static int getVisibleChunkHolderCount(final ServerLevel level) {
-- return level.chunkSource.chunkMap.visibleChunkMap.size();
-+ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
- }
-
- public static int getUpdatingChunkHolderCount(final ServerLevel level) {
-- return level.chunkSource.chunkMap.updatingChunkMap.size();
-+ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
- }
-
- public static boolean hasAnyChunkHolders(final ServerLevel level) {
-@@ -269,23 +119,15 @@ public final class ChunkSystem {
- }
-
- public static int getSendViewDistance(final ServerPlayer player) {
-- return getLoadViewDistance(player);
-+ return io.papermc.paper.chunk.PlayerChunkLoader.getSendViewDistance(player);
- }
-
- public static int getLoadViewDistance(final ServerPlayer player) {
-- final ServerLevel level = player.getLevel();
-- if (level == null) {
-- return Bukkit.getViewDistance() + 1;
-- }
-- return level.chunkSource.chunkMap.getEffectiveViewDistance() + 1;
-+ return io.papermc.paper.chunk.PlayerChunkLoader.getLoadViewDistance(player);
- }
-
- public static int getTickViewDistance(final ServerPlayer player) {
-- final ServerLevel level = player.getLevel();
-- if (level == null) {
-- return Bukkit.getSimulationDistance();
-- }
-- return level.chunkSource.chunkMap.distanceManager.getSimulationDistance();
-+ return io.papermc.paper.chunk.PlayerChunkLoader.getTickViewDistance(player);
- }
-
- private ChunkSystem() {
-diff --git a/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..61c170555c8854b102c640b0b6a615f9f732edbf
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
-@@ -0,0 +1,839 @@
-+package io.papermc.paper.chunk.system.entity;
-+
-+import com.destroystokyo.paper.util.maplist.EntityList;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import io.papermc.paper.util.WorldUtil;
-+import io.papermc.paper.world.ChunkEntitySlices;
-+import it.unimi.dsi.fastutil.ints.Int2ReferenceOpenHashMap;
-+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.Object2ReferenceOpenHashMap;
-+import net.minecraft.core.BlockPos;
-+import io.papermc.paper.chunk.system.ChunkSystem;
-+import net.minecraft.server.level.ChunkHolder;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.util.AbortableIterationConsumer;
-+import net.minecraft.util.Mth;
-+import net.minecraft.world.entity.Entity;
-+import net.minecraft.world.entity.EntityType;
-+import net.minecraft.world.level.entity.EntityInLevelCallback;
-+import net.minecraft.world.level.entity.EntityTypeTest;
-+import net.minecraft.world.level.entity.LevelCallback;
-+import net.minecraft.world.level.entity.LevelEntityGetter;
-+import net.minecraft.world.level.entity.Visibility;
-+import net.minecraft.world.phys.AABB;
-+import org.jetbrains.annotations.NotNull;
-+import org.jetbrains.annotations.Nullable;
-+import org.slf4j.Logger;
-+import java.util.ArrayList;
-+import java.util.Iterator;
-+import java.util.List;
-+import java.util.NoSuchElementException;
-+import java.util.UUID;
-+import java.util.concurrent.locks.StampedLock;
-+import java.util.function.Consumer;
-+import java.util.function.Predicate;
-+
-+public final class EntityLookup implements LevelEntityGetter<Entity> {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ protected static final int REGION_SHIFT = 5;
-+ protected static final int REGION_MASK = (1 << REGION_SHIFT) - 1;
-+ protected static final int REGION_SIZE = 1 << REGION_SHIFT;
-+
-+ public final ServerLevel world;
-+
-+ private final StampedLock stateLock = new StampedLock();
-+ protected final Long2ObjectOpenHashMap<ChunkSlicesRegion> regions = new Long2ObjectOpenHashMap<>(128, 0.5f);
-+
-+ private final int minSection; // inclusive
-+ private final int maxSection; // inclusive
-+ private final LevelCallback<Entity> worldCallback;
-+
-+ private final StampedLock entityByLock = new StampedLock();
-+ private final Int2ReferenceOpenHashMap<Entity> entityById = new Int2ReferenceOpenHashMap<>();
-+ private final Object2ReferenceOpenHashMap<UUID, Entity> entityByUUID = new Object2ReferenceOpenHashMap<>();
-+ private final EntityList accessibleEntities = new EntityList();
-+
-+ public EntityLookup(final ServerLevel world, final LevelCallback<Entity> worldCallback) {
-+ this.world = world;
-+ this.minSection = WorldUtil.getMinSection(world);
-+ this.maxSection = WorldUtil.getMaxSection(world);
-+ this.worldCallback = worldCallback;
-+ }
-+
-+ private static Entity maskNonAccessible(final Entity entity) {
-+ if (entity == null) {
-+ return null;
-+ }
-+ final Visibility visibility = EntityLookup.getEntityStatus(entity);
-+ return visibility.isAccessible() ? entity : null;
-+ }
-+
-+ @Nullable
-+ @Override
-+ public Entity get(final int id) {
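-+ // optimistic StampedLock read: read the map without blocking and validate afterwards; if a
-+ // writer raced us (validation fails or the map throws mid-resize), fall back to a full read lock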
-+ final long attempt = this.entityByLock.tryOptimisticRead();
-+ if (attempt != 0L) {
-+ try {
-+ final Entity ret = this.entityById.get(id);
-+
-+ if (this.entityByLock.validate(attempt)) {
-+ return maskNonAccessible(ret);
-+ }
-+ } catch (final Error error) {
-+ throw error;
-+ } catch (final Throwable thr) {
-+ // ignore
-+ }
-+ }
-+
-+ this.entityByLock.readLock();
-+ try {
-+ return maskNonAccessible(this.entityById.get(id));
-+ } finally {
-+ this.entityByLock.tryUnlockRead();
-+ }
-+ }
-+
-+ @Nullable
-+ @Override
-+ public Entity get(final UUID id) {
-+ final long attempt = this.entityByLock.tryOptimisticRead();
-+ if (attempt != 0L) {
-+ try {
-+ final Entity ret = this.entityByUUID.get(id);
-+
-+ if (this.entityByLock.validate(attempt)) {
-+ return maskNonAccessible(ret);
-+ }
-+ } catch (final Error error) {
-+ throw error;
-+ } catch (final Throwable thr) {
-+ // ignore
-+ }
-+ }
-+
-+ this.entityByLock.readLock();
-+ try {
-+ return maskNonAccessible(this.entityByUUID.get(id));
-+ } finally {
-+ this.entityByLock.tryUnlockRead();
-+ }
-+ }
-+
-+ public boolean hasEntity(final UUID uuid) {
-+ return this.get(uuid) != null;
-+ }
-+
-+ public String getDebugInfo() {
-+ return "count_id:" + this.entityById.size() + ",count_uuid:" + this.entityByUUID.size() + ",region_count:" + this.regions.size();
-+ }
-+
-+ static final class ArrayIterable<T> implements Iterable<T> {
-+
-+ private final T[] array;
-+ private final int off;
-+ private final int length;
-+
-+ public ArrayIterable(final T[] array, final int off, final int length) {
-+ this.array = array;
-+ this.off = off;
-+ this.length = length;
-+ if (length > array.length) {
-+ throw new IllegalArgumentException("Length must be no greater-than the array length");
-+ }
-+ }
-+
-+ @NotNull
-+ @Override
-+ public Iterator<T> iterator() {
-+ return new ArrayIterator<>(this.array, this.off, this.length);
-+ }
-+
-+ static final class ArrayIterator<T> implements Iterator<T> {
-+
-+ private final T[] array;
-+ private int off;
-+ private final int length;
-+
-+ public ArrayIterator(final T[] array, final int off, final int length) {
-+ this.array = array;
-+ this.off = off;
-+ this.length = length;
-+ }
-+
-+ @Override
-+ public boolean hasNext() {
-+ return this.off < this.length;
-+ }
-+
-+ @Override
-+ public T next() {
-+ if (this.off >= this.length) {
-+ throw new NoSuchElementException();
-+ }
-+ return this.array[this.off++];
-+ }
-+
-+ @Override
-+ public void remove() {
-+ throw new UnsupportedOperationException();
-+ }
-+ }
-+ }
-+
-+ @Override
-+ public Iterable<Entity> getAll() {
-+ return new ArrayIterable<>(this.accessibleEntities.getRawData(), 0, this.accessibleEntities.size());
-+ }
-+
-+ @Override
-+ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AbortableIterationConsumer<U> action) {
-+ for (final Entity entity : this.entityById.values()) {
-+ final Visibility visibility = EntityLookup.getEntityStatus(entity);
-+ if (!visibility.isAccessible()) {
-+ continue;
-+ }
-+ final U casted = filter.tryCast(entity);
-+ if (casted != null && action.accept(casted).shouldAbort()) {
-+ break;
-+ }
-+ }
-+ }
-+
-+ @Override
-+ public void get(final AABB box, final Consumer<Entity> action) {
-+ List<Entity> entities = new ArrayList<>();
-+ this.getEntitiesWithoutDragonParts(null, box, entities, null);
-+ for (int i = 0, len = entities.size(); i < len; ++i) {
-+ action.accept(entities.get(i));
-+ }
-+ }
-+
-+ @Override
-+ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AABB box, final AbortableIterationConsumer<U> action) {
-+ List<Entity> entities = new ArrayList<>();
-+ this.getEntitiesWithoutDragonParts(null, box, entities, null);
-+ for (int i = 0, len = entities.size(); i < len; ++i) {
-+ final U casted = filter.tryCast(entities.get(i));
-+ if (casted != null && action.accept(casted).shouldAbort()) {
-+ break;
-+ }
-+ }
-+ }
-+
-+ public void entityStatusChange(final Entity entity, final ChunkEntitySlices slices, final Visibility oldVisibility, final Visibility newVisibility, final boolean moved,
-+ final boolean created, final boolean destroyed) {
-+ TickThread.ensureTickThread(entity, "Entity status change must only happen on the main thread");
-+
-+ if (entity.updatingSectionStatus) {
-+ // recursive status update
-+ LOGGER.error("Cannot recursively update entity chunk status for entity " + entity, new Throwable());
-+ return;
-+ }
-+
-+ final boolean entityStatusUpdateBefore = slices == null ? false : slices.startPreventingStatusUpdates();
-+
-+ if (entityStatusUpdateBefore) {
-+ LOGGER.error("Cannot update chunk status for entity " + entity + " since entity chunk (" + slices.chunkX + "," + slices.chunkZ + ") is receiving update", new Throwable());
-+ return;
-+ }
-+
-+ try {
-+ final Boolean ticketBlockBefore = this.world.chunkTaskScheduler.chunkHolderManager.blockTicketUpdates();
-+ try {
-+ entity.updatingSectionStatus = true;
-+ try {
-+ if (created) {
-+ EntityLookup.this.worldCallback.onCreated(entity);
-+ }
-+
-+ if (oldVisibility == newVisibility) {
-+ if (moved && newVisibility.isAccessible()) {
-+ EntityLookup.this.worldCallback.onSectionChange(entity);
-+ }
-+ return;
-+ }
-+
-+ if (newVisibility.ordinal() > oldVisibility.ordinal()) {
-+ // status upgrade
-+ if (!oldVisibility.isAccessible() && newVisibility.isAccessible()) {
-+ this.accessibleEntities.add(entity);
-+ EntityLookup.this.worldCallback.onTrackingStart(entity);
-+ }
-+
-+ if (!oldVisibility.isTicking() && newVisibility.isTicking()) {
-+ EntityLookup.this.worldCallback.onTickingStart(entity);
-+ }
-+ } else {
-+ // status downgrade
-+ if (oldVisibility.isTicking() && !newVisibility.isTicking()) {
-+ EntityLookup.this.worldCallback.onTickingEnd(entity);
-+ }
-+
-+ if (oldVisibility.isAccessible() && !newVisibility.isAccessible()) {
-+ this.accessibleEntities.remove(entity);
-+ EntityLookup.this.worldCallback.onTrackingEnd(entity);
-+ }
-+ }
-+
-+ if (moved && newVisibility.isAccessible()) {
-+ EntityLookup.this.worldCallback.onSectionChange(entity);
-+ }
-+
-+ if (destroyed) {
-+ EntityLookup.this.worldCallback.onDestroyed(entity);
-+ }
-+ } finally {
-+ entity.updatingSectionStatus = false;
-+ }
-+ } finally {
-+ this.world.chunkTaskScheduler.chunkHolderManager.unblockTicketUpdates(ticketBlockBefore);
-+ }
-+ } finally {
-+ if (slices != null) {
-+ slices.stopPreventingStatusUpdates(false);
-+ }
-+ }
-+ }
-+
-+ public void chunkStatusChange(final int x, final int z, final ChunkHolder.FullChunkStatus newStatus) {
-+ this.getChunk(x, z).updateStatus(newStatus, this);
-+ }
-+
-+ public void addLegacyChunkEntities(final List<Entity> entities) {
-+ for (int i = 0, len = entities.size(); i < len; ++i) {
-+ this.addEntity(entities.get(i), true);
-+ }
-+ }
-+
-+ public void addEntityChunkEntities(final List<Entity> entities) {
-+ for (int i = 0, len = entities.size(); i < len; ++i) {
-+ this.addEntity(entities.get(i), true);
-+ }
-+ }
-+
-+ public void addWorldGenChunkEntities(final List<Entity> entities) {
-+ for (int i = 0, len = entities.size(); i < len; ++i) {
-+ this.addEntity(entities.get(i), false);
-+ }
-+ }
-+
-+ public boolean addNewEntity(final Entity entity) {
-+ return this.addEntity(entity, false);
-+ }
-+
-+ public static Visibility getEntityStatus(final Entity entity) {
-+ if (entity.isAlwaysTicking()) {
-+ return Visibility.TICKING;
-+ }
-+ final ChunkHolder.FullChunkStatus entityStatus = entity.chunkStatus;
-+ return Visibility.fromFullChunkStatus(entityStatus == null ? ChunkHolder.FullChunkStatus.INACCESSIBLE : entityStatus);
-+ }
-+
-+ private boolean addEntity(final Entity entity, final boolean fromDisk) {
-+ final BlockPos pos = entity.blockPosition();
-+ final int sectionX = pos.getX() >> 4;
-+ final int sectionY = Mth.clamp(pos.getY() >> 4, this.minSection, this.maxSection);
-+ final int sectionZ = pos.getZ() >> 4;
-+ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot add entity off-main thread");
-+
-+ if (entity.isRemoved()) {
-+ LOGGER.warn("Refusing to add removed entity: " + entity);
-+ return false;
-+ }
-+
-+ if (entity.updatingSectionStatus) {
-+ LOGGER.warn("Entity " + entity + " is currently prevented from being added/removed to world since it is processing section status updates", new Throwable());
-+ return false;
-+ }
-+
-+ if (fromDisk) {
-+ ChunkSystem.onEntityPreAdd(this.world, entity);
-+ if (entity.isRemoved()) {
-+ // removed from checkDupeUUID call
-+ return false;
-+ }
-+ }
-+
-+ this.entityByLock.writeLock();
-+ try {
-+ if (this.entityById.containsKey(entity.getId())) {
-+ LOGGER.warn("Entity id already exists: " + entity.getId() + ", mapped to " + this.entityById.get(entity.getId()) + ", can't add " + entity);
-+ return false;
-+ }
-+ if (this.entityByUUID.containsKey(entity.getUUID())) {
-+ LOGGER.warn("Entity uuid already exists: " + entity.getUUID() + ", mapped to " + this.entityByUUID.get(entity.getUUID()) + ", can't add " + entity);
-+ return false;
-+ }
-+ this.entityById.put(entity.getId(), entity);
-+ this.entityByUUID.put(entity.getUUID(), entity);
-+ } finally {
-+ this.entityByLock.tryUnlockWrite();
-+ }
-+
-+ entity.sectionX = sectionX;
-+ entity.sectionY = sectionY;
-+ entity.sectionZ = sectionZ;
-+ final ChunkEntitySlices slices = this.getOrCreateChunk(sectionX, sectionZ);
-+ if (!slices.addEntity(entity, sectionY)) {
-+ LOGGER.warn("Entity " + entity + " added to world '" + this.world.getWorld().getName() + "', but was already contained in entity chunk (" + sectionX + "," + sectionZ + ")");
-+ }
-+
-+ entity.setLevelCallback(new EntityCallback(entity));
-+
-+ this.entityStatusChange(entity, slices, Visibility.HIDDEN, getEntityStatus(entity), false, !fromDisk, false);
-+
-+ return true;
-+ }
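addEntity above registers each entity under two independent keys, the numeric id and the UUID, inside a single write-locked critical section, and refuses the add if either key collides. A minimal sketch of that dual-index registration, using a plain ReentrantReadWriteLock in place of the patch's lock type (an assumption purely for illustration):

    import java.util.HashMap;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    final class DualIndex<E> {

        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();
        private final Map<Integer, E> byId = new HashMap<>();
        private final Map<UUID, E> byUuid = new HashMap<>();

        // Rejects the add, touching neither map, if either key collides.
        boolean add(final int id, final UUID uuid, final E value) {
            this.lock.writeLock().lock();
            try {
                if (this.byId.containsKey(id) || this.byUuid.containsKey(uuid)) {
                    return false;
                }
                this.byId.put(id, value);
                this.byUuid.put(uuid, value);
                return true;
            } finally {
                this.lock.writeLock().unlock();
            }
        }
    }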
-+
-+ private void removeEntity(final Entity entity) {
-+ final int sectionX = entity.sectionX;
-+ final int sectionY = entity.sectionY;
-+ final int sectionZ = entity.sectionZ;
-+ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot remove entity off-main");
-+ if (!entity.isRemoved()) {
-+ throw new IllegalStateException("Only call Entity#setRemoved to remove an entity");
-+ }
-+ final ChunkEntitySlices slices = this.getChunk(sectionX, sectionZ);
-+ // all entities should be in a chunk
-+ if (slices == null) {
-+ LOGGER.warn("Cannot remove entity " + entity + " from null entity slices (" + sectionX + "," + sectionZ + ")");
-+ } else {
-+ if (!slices.removeEntity(entity, sectionY)) {
-+ LOGGER.warn("Failed to remove entity " + entity + " from entity slices (" + sectionX + "," + sectionZ + ")");
-+ }
-+ }
-+ entity.sectionX = entity.sectionY = entity.sectionZ = Integer.MIN_VALUE;
-+
-+ this.entityByLock.writeLock();
-+ try {
-+ if (!this.entityById.remove(entity.getId(), entity)) {
-+ LOGGER.warn("Failed to remove entity " + entity + " by id, current entity mapped: " + this.entityById.get(entity.getId()));
-+ }
-+ if (!this.entityByUUID.remove(entity.getUUID(), entity)) {
-+ LOGGER.warn("Failed to remove entity " + entity + " by uuid, current entity mapped: " + this.entityByUUID.get(entity.getUUID()));
-+ }
-+ } finally {
-+ this.entityByLock.tryUnlockWrite();
-+ }
-+ }
-+
-+ private ChunkEntitySlices moveEntity(final Entity entity) {
-+ // ensure we own the entity
-+ TickThread.ensureTickThread(entity, "Cannot move entity off-main");
-+
-+ final BlockPos newPos = entity.blockPosition();
-+ final int newSectionX = newPos.getX() >> 4;
-+ final int newSectionY = Mth.clamp(newPos.getY() >> 4, this.minSection, this.maxSection);
-+ final int newSectionZ = newPos.getZ() >> 4;
-+
-+ if (newSectionX == entity.sectionX && newSectionY == entity.sectionY && newSectionZ == entity.sectionZ) {
-+ return null;
-+ }
-+
-+ // ensure the new section is owned by this tick thread
-+ TickThread.ensureTickThread(this.world, newSectionX, newSectionZ, "Cannot move entity off-main");
-+
-+ // ensure the old section is owned by this tick thread
-+ TickThread.ensureTickThread(this.world, entity.sectionX, entity.sectionZ, "Cannot move entity off-main");
-+
-+ final ChunkEntitySlices old = this.getChunk(entity.sectionX, entity.sectionZ);
-+ final ChunkEntitySlices slices = this.getOrCreateChunk(newSectionX, newSectionZ);
-+
-+ if (!old.removeEntity(entity, entity.sectionY)) {
-+ LOGGER.warn("Could not remove entity " + entity + " from its old chunk section (" + entity.sectionX + "," + entity.sectionY + "," + entity.sectionZ + ") since it was not contained in the section");
-+ }
-+
-+ if (!slices.addEntity(entity, newSectionY)) {
-+ LOGGER.warn("Could not add entity " + entity + " to its new chunk section (" + newSectionX + "," + newSectionY + "," + newSectionZ + ") as it is already contained in the section");
-+ }
-+
-+ entity.sectionX = newSectionX;
-+ entity.sectionY = newSectionY;
-+ entity.sectionZ = newSectionZ;
-+
-+ return slices;
-+ }
-+
-+ public void getEntitiesWithoutDragonParts(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
-+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
-+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
-+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
-+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
-+
-+ final int minRegionX = minChunkX >> REGION_SHIFT;
-+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
-+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
-+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
-+
-+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
-+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
-+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
-+
-+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
-+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
-+
-+ if (region == null) {
-+ continue;
-+ }
-+
-+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
-+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
-+
-+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
-+ for (int currX = minX; currX <= maxX; ++currX) {
-+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
-+ if (chunk == null || !chunk.status.isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ continue;
-+ }
-+
-+ chunk.getEntitiesWithoutDragonParts(except, box, into, predicate);
-+ }
-+ }
-+ }
-+ }
-+ }
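The query methods here all walk the same region grid: chunk coordinates are grouped into regions of REGION_SIZE x REGION_SIZE chunks, and the per-region index packs the masked x into the low bits and the masked z into the high bits. A self-contained sketch of that coordinate math, with REGION_SHIFT = 4 assumed purely for illustration:

    public final class RegionIndexMath {

        // Illustrative values only; the real constants live in EntityLookup.
        static final int REGION_SHIFT = 4;
        static final int REGION_SIZE = 1 << REGION_SHIFT; // 16 chunks per axis
        static final int REGION_MASK = REGION_SIZE - 1;   // 15

        // Region coordinate of a chunk; arithmetic shift handles negatives.
        static int regionCoord(final int chunkCoord) {
            return chunkCoord >> REGION_SHIFT;
        }

        // Index of a chunk inside its region's flat slices array.
        static int relativeIndex(final int chunkX, final int chunkZ) {
            return (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
        }

        public static void main(final String[] args) {
            // Chunk (-1, -1) lands in region (-1, -1), slot (15, 15).
            System.out.println(regionCoord(-1));       // -1
            System.out.println(relativeIndex(-1, -1)); // 255 == 15 | (15 << 4)
        }
    }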
-+
-+ public void getEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
-+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
-+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
-+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
-+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
-+
-+ final int minRegionX = minChunkX >> REGION_SHIFT;
-+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
-+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
-+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
-+
-+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
-+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
-+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
-+
-+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
-+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
-+
-+ if (region == null) {
-+ continue;
-+ }
-+
-+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
-+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
-+
-+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
-+ for (int currX = minX; currX <= maxX; ++currX) {
-+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
-+ if (chunk == null || !chunk.status.isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ continue;
-+ }
-+
-+ chunk.getEntities(except, box, into, predicate);
-+ }
-+ }
-+ }
-+ }
-+ }
-+
-+ public void getHardCollidingEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
-+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
-+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
-+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
-+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
-+
-+ final int minRegionX = minChunkX >> REGION_SHIFT;
-+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
-+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
-+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
-+
-+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
-+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
-+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
-+
-+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
-+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
-+
-+ if (region == null) {
-+ continue;
-+ }
-+
-+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
-+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
-+
-+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
-+ for (int currX = minX; currX <= maxX; ++currX) {
-+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
-+ if (chunk == null || !chunk.status.isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ continue;
-+ }
-+
-+ chunk.getHardCollidingEntities(except, box, into, predicate);
-+ }
-+ }
-+ }
-+ }
-+ }
-+
-+ public <T extends Entity> void getEntities(final EntityType<T> type, final AABB box, final List<? super T> into,
-+ final Predicate<? super T> predicate) {
-+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
-+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
-+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
-+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
-+
-+ final int minRegionX = minChunkX >> REGION_SHIFT;
-+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
-+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
-+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
-+
-+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
-+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
-+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
-+
-+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
-+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
-+
-+ if (region == null) {
-+ continue;
-+ }
-+
-+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
-+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
-+
-+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
-+ for (int currX = minX; currX <= maxX; ++currX) {
-+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
-+ if (chunk == null || !chunk.status.isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ continue;
-+ }
-+
-+ chunk.getEntities(type, box, (List)into, (Predicate)predicate);
-+ }
-+ }
-+ }
-+ }
-+ }
-+
-+ public <T extends Entity> void getEntities(final Class<? extends T> clazz, final Entity except, final AABB box, final List<? super T> into,
-+ final Predicate<? super T> predicate) {
-+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
-+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
-+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
-+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
-+
-+ final int minRegionX = minChunkX >> REGION_SHIFT;
-+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
-+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
-+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
-+
-+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
-+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
-+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
-+
-+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
-+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
-+
-+ if (region == null) {
-+ continue;
-+ }
-+
-+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
-+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
-+
-+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
-+ for (int currX = minX; currX <= maxX; ++currX) {
-+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
-+ if (chunk == null || !chunk.status.isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ continue;
-+ }
-+
-+ chunk.getEntities(clazz, except, box, into, predicate);
-+ }
-+ }
-+ }
-+ }
-+ }
-+
-+ public void entitySectionLoad(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
-+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot load in entity section off-main");
-+ synchronized (this) {
-+ final ChunkEntitySlices curr = this.getChunk(chunkX, chunkZ);
-+ if (curr != null) {
-+ this.removeChunk(chunkX, chunkZ);
-+
-+ curr.mergeInto(slices);
-+
-+ this.addChunk(chunkX, chunkZ, slices);
-+ } else {
-+ this.addChunk(chunkX, chunkZ, slices);
-+ }
-+ }
-+ }
-+
-+ public void entitySectionUnload(final int chunkX, final int chunkZ) {
-+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot unload entity section off-main");
-+ this.removeChunk(chunkX, chunkZ);
-+ }
-+
-+ public ChunkEntitySlices getChunk(final int chunkX, final int chunkZ) {
-+ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
-+ if (region == null) {
-+ return null;
-+ }
-+
-+ return region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT));
-+ }
-+
-+ public ChunkEntitySlices getOrCreateChunk(final int chunkX, final int chunkZ) {
-+ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
-+ ChunkEntitySlices ret;
-+ if (region == null || (ret = region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT))) == null) {
-+ // loadInEntityChunk will call addChunk for us
-+ return this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(chunkX, chunkZ, true);
-+ }
-+
-+ return ret;
-+ }
-+
-+ public ChunkSlicesRegion getRegion(final int regionX, final int regionZ) {
-+ final long key = CoordinateUtils.getChunkKey(regionX, regionZ);
-+ final long attempt = this.stateLock.tryOptimisticRead();
-+ if (attempt != 0L) {
-+ try {
-+ final ChunkSlicesRegion ret = this.regions.get(key);
-+
-+ if (this.stateLock.validate(attempt)) {
-+ return ret;
-+ }
-+ } catch (final Error error) {
-+ throw error;
-+ } catch (final Throwable thr) {
-+ // ignore
-+ }
-+ }
-+
-+ this.stateLock.readLock();
-+ try {
-+ return this.regions.get(key);
-+ } finally {
-+ this.stateLock.tryUnlockRead();
-+ }
-+ }
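getRegion above first attempts an optimistic read and only falls back to a full read lock when validation fails; the catch-all exists because a racing writer can leave the backing structure mid-update during the unlocked read. The same idiom in isolation, using java.util.concurrent.locks.StampedLock as a stand-in for the patch's stateLock:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.StampedLock;

    final class OptimisticMap<K, V> {

        private final StampedLock lock = new StampedLock();
        private final Map<K, V> map = new HashMap<>();

        V get(final K key) {
            final long stamp = this.lock.tryOptimisticRead();
            if (stamp != 0L) { // 0 means a writer currently holds the lock
                try {
                    final V ret = this.map.get(key);
                    if (this.lock.validate(stamp)) {
                        return ret; // no writer intervened: result is consistent
                    }
                } catch (final Throwable thr) {
                    // a racing writer can corrupt the unlocked read; retry locked
                }
            }
            final long readStamp = this.lock.readLock();
            try {
                return this.map.get(key);
            } finally {
                this.lock.unlockRead(readStamp);
            }
        }
    }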
-+
-+ private synchronized void removeChunk(final int chunkX, final int chunkZ) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
-+ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
-+
-+ final ChunkSlicesRegion region = this.regions.get(key);
-+ final int remaining = region.remove(relIndex);
-+
-+ if (remaining == 0) {
-+ this.stateLock.writeLock();
-+ try {
-+ this.regions.remove(key);
-+ } finally {
-+ this.stateLock.tryUnlockWrite();
-+ }
-+ }
-+ }
-+
-+ public synchronized void addChunk(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
-+ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
-+ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
-+
-+ ChunkSlicesRegion region = this.regions.get(key);
-+ if (region != null) {
-+ region.add(relIndex, slices);
-+ } else {
-+ region = new ChunkSlicesRegion();
-+ region.add(relIndex, slices);
-+ this.stateLock.writeLock();
-+ try {
-+ this.regions.put(key, region);
-+ } finally {
-+ this.stateLock.tryUnlockWrite();
-+ }
-+ }
-+ }
-+
-+ public static final class ChunkSlicesRegion {
-+
-+ protected final ChunkEntitySlices[] slices = new ChunkEntitySlices[REGION_SIZE * REGION_SIZE];
-+ protected int sliceCount;
-+
-+ public ChunkEntitySlices get(final int index) {
-+ return this.slices[index];
-+ }
-+
-+ public int remove(final int index) {
-+ final ChunkEntitySlices slices = this.slices[index];
-+ if (slices == null) {
-+ throw new IllegalStateException();
-+ }
-+
-+ this.slices[index] = null;
-+
-+ return --this.sliceCount;
-+ }
-+
-+ public void add(final int index, final ChunkEntitySlices slices) {
-+ final ChunkEntitySlices curr = this.slices[index];
-+ if (curr != null) {
-+ throw new IllegalStateException();
-+ }
-+
-+ this.slices[index] = slices;
-+
-+ ++this.sliceCount;
-+ }
-+ }
-+
-+ private final class EntityCallback implements EntityInLevelCallback {
-+
-+ public final Entity entity;
-+
-+ public EntityCallback(final Entity entity) {
-+ this.entity = entity;
-+ }
-+
-+ @Override
-+ public void onMove() {
-+ final Entity entity = this.entity;
-+ final Visibility oldVisibility = getEntityStatus(entity);
-+ final ChunkEntitySlices newSlices = EntityLookup.this.moveEntity(this.entity);
-+ if (newSlices == null) {
-+ // no new section, so didn't change sections
-+ return;
-+ }
-+ final Visibility newVisibility = getEntityStatus(entity);
-+
-+ EntityLookup.this.entityStatusChange(entity, newSlices, oldVisibility, newVisibility, true, false, false);
-+ }
-+
-+ @Override
-+ public void onRemove(final Entity.RemovalReason reason) {
-+ final Entity entity = this.entity;
-+ TickThread.ensureTickThread(entity, "Cannot remove entity off-main"); // Paper - rewrite chunk system
-+ final Visibility tickingState = EntityLookup.getEntityStatus(entity);
-+
-+ EntityLookup.this.removeEntity(entity);
-+
-+ EntityLookup.this.entityStatusChange(entity, null, tickingState, Visibility.HIDDEN, false, false, reason.shouldDestroy());
-+
-+ this.entity.setLevelCallback(NoOpCallback.INSTANCE);
-+ }
-+ }
-+
-+ private static final class NoOpCallback implements EntityInLevelCallback {
-+
-+ public static final NoOpCallback INSTANCE = new NoOpCallback();
-+
-+ @Override
-+ public void onMove() {}
-+
-+ @Override
-+ public void onRemove(final Entity.RemovalReason reason) {}
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..a08cde4eefe879adcee7c4118bc38f98c5097ed0
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
-@@ -0,0 +1,1328 @@
-+package io.papermc.paper.chunk.system.io;
-+
-+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
-+import ca.spottedleaf.concurrentutil.executor.Cancellable;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedQueueExecutorThread;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import it.unimi.dsi.fastutil.HashCommon;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.storage.RegionFile;
-+import net.minecraft.world.level.chunk.storage.RegionFileStorage;
-+import org.slf4j.Logger;
-+import java.io.IOException;
-+import java.lang.invoke.VarHandle;
-+import java.util.concurrent.CompletableFuture;
-+import java.util.concurrent.CompletionException;
-+import java.util.concurrent.ConcurrentHashMap;
-+import java.util.concurrent.atomic.AtomicInteger;
-+import java.util.function.BiConsumer;
-+import java.util.function.BiFunction;
-+import java.util.function.Consumer;
-+import java.util.function.Function;
-+
-+/**
-+ * Prioritised RegionFile I/O executor, responsible for all RegionFile access.
-+ * <p>
-+ * All functions provided are MT-Safe, however certain ordering constraints are recommended:
-+ * <li>
-+ *     Chunk saves may not occur for unloaded chunks.
-+ * </li>
-+ * <li>
-+ *     Tasks must be scheduled on the chunk scheduler thread.
-+ * </li>
-+ * <p>
-+ * By following these constraints, no chunk data loss should occur with the exception of underlying I/O problems.
-+ * </p>
-+ */
-+public final class RegionFileIOThread extends PrioritisedQueueExecutorThread {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ /**
-+ * The kinds of region files controlled by the region file thread. Add more when needed, and ensure
-+ * getControllerFor is updated.
-+ */
-+ public static enum RegionFileType {
-+ CHUNK_DATA,
-+ POI_DATA,
-+ ENTITY_DATA;
-+ }
-+
-+ protected static final RegionFileType[] CACHED_REGIONFILE_TYPES = RegionFileType.values();
-+
-+ private ChunkDataController getControllerFor(final ServerLevel world, final RegionFileType type) {
-+ switch (type) {
-+ case CHUNK_DATA:
-+ return world.chunkDataControllerNew;
-+ case POI_DATA:
-+ return world.poiDataControllerNew;
-+ case ENTITY_DATA:
-+ return world.entityDataControllerNew;
-+ default:
-+ throw new IllegalStateException("Unknown controller type " + type);
-+ }
-+ }
-+
-+ /**
-+ * Collects regionfile data for a certain chunk.
-+ */
-+ public static final class RegionFileData {
-+
-+ private final boolean[] hasResult = new boolean[CACHED_REGIONFILE_TYPES.length];
-+ private final CompoundTag[] data = new CompoundTag[CACHED_REGIONFILE_TYPES.length];
-+ private final Throwable[] throwables = new Throwable[CACHED_REGIONFILE_TYPES.length];
-+
-+ /**
-+ * Sets the result associated with the specified regionfile type. Note that
-+ * results can only be set once per regionfile type.
-+ *
-+ * @param type The regionfile type.
-+ * @param data The result to set.
-+ */
-+ public void setData(final RegionFileType type, final CompoundTag data) {
-+ final int index = type.ordinal();
-+
-+ if (this.hasResult[index]) {
-+ throw new IllegalArgumentException("Result already exists for type " + type);
-+ }
-+ this.hasResult[index] = true;
-+ this.data[index] = data;
-+ }
-+
-+ /**
-+ * Sets the result associated with the specified regionfile type. Note that
-+ * results can only be set once per regionfile type.
-+ *
-+ * @param type The regionfile type.
-+ * @param throwable The result to set.
-+ */
-+ public void setThrowable(final RegionFileType type, final Throwable throwable) {
-+ final int index = type.ordinal();
-+
-+ if (this.hasResult[index]) {
-+ throw new IllegalArgumentException("Result already exists for type " + type);
-+ }
-+ this.hasResult[index] = true;
-+ this.throwables[index] = throwable;
-+ }
-+
-+ /**
-+ * Returns whether there is a result for the specified regionfile type.
-+ *
-+ * @param type Specified regionfile type.
-+ *
-+ * @return Whether a result exists for {@code type}.
-+ */
-+ public boolean hasResult(final RegionFileType type) {
-+ return this.hasResult[type.ordinal()];
-+ }
-+
-+ /**
-+ * Returns the data result for the regionfile type.
-+ *
-+ * @param type Specified regionfile type.
-+ *
-+ * @throws IllegalArgumentException If the result has not been set for {@code type}.
-+ * @return The data result for the specified type. If the result is a {@code Throwable},
-+ * then returns {@code null}.
-+ */
-+ public CompoundTag getData(final RegionFileType type) {
-+ final int index = type.ordinal();
-+
-+ if (!this.hasResult[index]) {
-+ throw new IllegalArgumentException("Result does not exist for type " + type);
-+ }
-+
-+ return this.data[index];
-+ }
-+
-+ /**
-+ * Returns the throwable result for the regionfile type.
-+ *
-+ * @param type Specified regionfile type.
-+ *
-+ * @throws IllegalArgumentException If the result has not been set for {@code type}.
-+ * @return The throwable result for the specified type. If the result is a {@code CompoundTag},
-+ * then returns {@code null}.
-+ */
-+ public Throwable getThrowable(final RegionFileType type) {
-+ final int index = type.ordinal();
-+
-+ if (!this.hasResult[index]) {
-+ throw new IllegalArgumentException("Result does not exist for type " + type);
-+ }
-+
-+ return this.throwables[index];
-+ }
-+ }
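A hypothetical consumer of RegionFileData, showing the intended hasResult/getThrowable/getData flow; the imports are assumed from the file above and the per-type deserialisation step is left as a placeholder:

    static void handleResult(final RegionFileIOThread.RegionFileData result) {
        for (final RegionFileIOThread.RegionFileType type : RegionFileIOThread.RegionFileType.values()) {
            if (!result.hasResult(type)) {
                continue; // this type was not part of the load
            }
            final Throwable thr = result.getThrowable(type);
            if (thr != null) {
                thr.printStackTrace(); // exactly one of data/throwable is meaningful
                continue;
            }
            final CompoundTag data = result.getData(type); // may be null: nothing on disk
            // ... hand data to the per-type deserialisation here ...
        }
    }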
-+
-+ private static final Object INIT_LOCK = new Object();
-+
-+ static RegionFileIOThread[] threads;
-+
-+ /* needs to be consistent given a set of parameters */
-+ static RegionFileIOThread selectThread(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
-+ if (threads == null) {
-+ throw new IllegalStateException("Threads not initialised");
-+ }
-+
-+ final int regionX = chunkX >> 5;
-+ final int regionZ = chunkZ >> 5;
-+ final int typeOffset = type.ordinal();
-+
-+ return threads[(System.identityHashCode(world) + regionX + regionZ + typeOffset) % threads.length];
-+ }
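The property selectThread relies on is consistency: identical (world, region, type) inputs must always map to the same worker, so that all I/O for one regionfile position and type is serialised on a single thread. A toy illustration of that mapping; Math.floorMod is used here only to keep the toy's index non-negative for negative region coordinates:

    // Identical inputs always produce the same index, so every task for a
    // given regionfile position and type lands on one worker, in order.
    static int selectWorker(final Object world, final int chunkX, final int chunkZ,
                            final int typeOrdinal, final int workerCount) {
        final int regionX = chunkX >> 5; // 32 chunks per regionfile
        final int regionZ = chunkZ >> 5;
        final int hash = System.identityHashCode(world) + regionX + regionZ + typeOrdinal;
        return Math.floorMod(hash, workerCount);
    }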
-+
-+ /**
-+ * Shuts down the I/O executor(s). Waits for all tasks to complete if specified.
-+ * Tasks queued during this call might not be accepted, and tasks queued after will not be accepted.
-+ *
-+ * @param wait Whether to wait until all tasks have completed.
-+ */
-+ public static void close(final boolean wait) {
-+ for (int i = 0, len = threads.length; i < len; ++i) {
-+ threads[i].close(false, true);
-+ }
-+ if (wait) {
-+ RegionFileIOThread.flush();
-+ }
-+ }
-+
-+ public static long[] getExecutedTasks() {
-+ final long[] ret = new long[threads.length];
-+ for (int i = 0, len = threads.length; i < len; ++i) {
-+ ret[i] = threads[i].getTotalTasksExecuted();
-+ }
-+
-+ return ret;
-+ }
-+
-+ public static long[] getTasksScheduled() {
-+ final long[] ret = new long[threads.length];
-+ for (int i = 0, len = threads.length; i < len; ++i) {
-+ ret[i] = threads[i].getTotalTasksScheduled();
-+ }
-+ return ret;
-+ }
-+
-+ public static void flush() {
-+ for (int i = 0, len = threads.length; i < len; ++i) {
-+ threads[i].waitUntilAllExecuted();
-+ }
-+ }
-+
-+ public static void partialFlush(final int totalTasksRemaining) {
-+ long failures = 1L; // start out at 0.25ms
-+
-+ for (;;) {
-+ final long[] executed = getExecutedTasks();
-+ final long[] scheduled = getTasksScheduled();
-+
-+ long sum = 0;
-+ for (int i = 0; i < executed.length; ++i) {
-+ sum += scheduled[i] - executed[i];
-+ }
-+
-+ if (sum <= totalTasksRemaining) {
-+ break;
-+ }
-+
-+ failures = ConcurrentUtil.linearLongBackoff(failures, 250_000L, 5_000_000L); // 500us, 5ms
-+ }
-+ }
-+
-+ /**
-+ * Initialises the executor with the specified number of threads.
-+ *
-+ * @param threads Specified number of threads.
-+ */
-+ public static void init(final int threads) {
-+ synchronized (INIT_LOCK) {
-+ if (RegionFileIOThread.threads != null) {
-+ throw new IllegalStateException("Already initialised threads");
-+ }
-+
-+ RegionFileIOThread.threads = new RegionFileIOThread[threads];
-+
-+ for (int i = 0; i < threads; ++i) {
-+ RegionFileIOThread.threads[i] = new RegionFileIOThread(i);
-+ RegionFileIOThread.threads[i].start();
-+ }
-+ }
-+ }
-+
-+ private RegionFileIOThread(final int threadNumber) {
-+ super(new PrioritisedThreadedTaskQueue(), (int)(1.0e6)); // 1.0ms spinwait time
-+ this.setName("RegionFile I/O Thread #" + threadNumber);
-+ this.setPriority(Thread.NORM_PRIORITY - 2); // we keep priority close to normal because threads can wait on us
-+ this.setUncaughtExceptionHandler((final Thread thread, final Throwable thr) -> {
-+ LOGGER.error("Uncaught exception thrown from I/O thread, report this! Thread: " + thread.getName(), thr);
-+ });
-+ }
-+
-+ /**
-+ * Returns whether the current thread is a regionfile I/O executor.
-+ * @return Whether the current thread is a regionfile I/O executor.
-+ */
-+ public static boolean isRegionFileThread() {
-+ return Thread.currentThread() instanceof RegionFileIOThread;
-+ }
-+
-+ /**
-+ * Returns the priority associated with blocking I/O based on the current thread. The goal is to prevent
-+ * badly behaved plugins from taking priority away from threads we consider crucial.
-+ * @return The priority to use with blocking I/O on the current thread.
-+ */
-+ public static PrioritisedExecutor.Priority getIOBlockingPriorityForCurrentThread() {
-+ if (TickThread.isTickThread()) {
-+ return PrioritisedExecutor.Priority.BLOCKING;
-+ }
-+ return PrioritisedExecutor.Priority.HIGHEST;
-+ }
-+
-+ /**
-+ * Returns the current {@code CompoundTag} pending for write for the specified chunk & regionfile type.
-+ * Note that this does not copy the result, so do not modify the result returned.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param type Specified regionfile type.
-+ *
-+ * @return The compound tag associated with the specified chunk. {@code null} if no write is pending, or if the pending write is itself {@code null}.
-+ */
-+ public static CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ return thread.getPendingWriteInternal(world, chunkX, chunkZ, type);
-+ }
-+
-+ CompoundTag getPendingWriteInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
-+
-+ if (task == null) {
-+ return null;
-+ }
-+
-+ final CompoundTag ret = task.inProgressWrite;
-+
-+ return ret == ChunkDataTask.NOTHING_TO_WRITE ? null : ret;
-+ }
-+
-+ /**
-+ * Returns the priority for the specified regionfile type for the specified chunk.
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param type Specified regionfile type.
-+ * @return The priority for the chunk
-+ */
-+ public static PrioritisedExecutor.Priority getPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ return thread.getPriorityInternal(world, chunkX, chunkZ, type);
-+ }
-+
-+ PrioritisedExecutor.Priority getPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
-+
-+ if (task == null) {
-+ return PrioritisedExecutor.Priority.COMPLETING;
-+ }
-+
-+ return task.prioritisedTask.getPriority();
-+ }
-+
-+ /**
-+ * Sets the priority for all regionfile types for the specified chunk. Note that great care should
-+ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
-+ * priorities.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param priority New priority.
-+ *
-+ * @see #raisePriority(ServerLevel, int, int, Priority)
-+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final PrioritisedExecutor.Priority priority) {
-+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
-+ RegionFileIOThread.setPriority(world, chunkX, chunkZ, type, priority);
-+ }
-+ }
-+
-+ /**
-+ * Sets the priority for the specified regionfile type for the specified chunk. Note that great care should
-+ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
-+ * priorities.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param type Specified regionfile type.
-+ * @param priority New priority.
-+ *
-+ * @see #raisePriority(ServerLevel, int, int, Priority)
-+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ thread.setPriorityInternal(world, chunkX, chunkZ, type, priority);
-+ }
-+
-+ void setPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
-+
-+ if (task != null) {
-+ task.prioritisedTask.setPriority(priority);
-+ }
-+ }
-+
-+ /**
-+ * Raises the priority for all regionfile types for the specified chunk.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param priority New priority.
-+ *
-+ * @see #setPriority(ServerLevel, int, int, Priority)
-+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final PrioritisedExecutor.Priority priority) {
-+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
-+ RegionFileIOThread.raisePriority(world, chunkX, chunkZ, type, priority);
-+ }
-+ }
-+
-+ /**
-+ * Raises the priority for the specified regionfile type for the specified chunk.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param type Specified regionfile type.
-+ * @param priority New priority.
-+ *
-+ * @see #setPriority(ServerLevel, int, int, Priority)
-+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, Priority)
-+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ thread.raisePriorityInternal(world, chunkX, chunkZ, type, priority);
-+ }
-+
-+ void raisePriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
-+
-+ if (task != null) {
-+ task.prioritisedTask.raisePriority(priority);
-+ }
-+ }
-+
-+ /**
-+ * Lowers the priority for all regionfile types for the specified chunk.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param priority New priority.
-+ *
-+ * @see #raisePriority(ServerLevel, int, int, Priority)
-+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #setPriority(ServerLevel, int, int, Priority)
-+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final PrioritisedExecutor.Priority priority) {
-+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
-+ RegionFileIOThread.lowerPriority(world, chunkX, chunkZ, type, priority);
-+ }
-+ }
-+
-+ /**
-+ * Lowers the priority for the specified regionfile type for the specified chunk.
-+ *
-+ * @param world Specified world.
-+ * @param chunkX Specified chunk x.
-+ * @param chunkZ Specified chunk z.
-+ * @param type Specified regionfile type.
-+ * @param priority New priority.
-+ *
-+ * @see #raisePriority(ServerLevel, int, int, Priority)
-+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
-+ * @see #setPriority(ServerLevel, int, int, Priority)
-+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
-+ */
-+ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ thread.lowerPriorityInternal(world, chunkX, chunkZ, type, priority);
-+ }
-+
-+ void lowerPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
-+
-+ if (task != null) {
-+ task.prioritisedTask.lowerPriority(priority);
-+ }
-+ }
-+
-+ /**
-+ * Schedules the chunk data to be written asynchronously.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
-+ * saves must be scheduled before a chunk is unloaded.
-+ * </li>
-+ * <li>
-+ * Writes may be called concurrently, although only the "later" write will go through.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param data Chunk's data
-+ * @param type The regionfile type to write to.
-+ *
-+ * @throws IllegalStateException If the file io thread has shut down.
-+ */
-+ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
-+ final RegionFileType type) {
-+ RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, data, type, PrioritisedExecutor.Priority.NORMAL);
-+ }
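A hypothetical call site for the default-priority overload above, run while the chunk is still loaded as the impl notes require; world, chunkX, chunkZ and tag are placeholder names:

    // Placeholder names; must be scheduled before the chunk may unload.
    RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, tag,
        RegionFileIOThread.RegionFileType.CHUNK_DATA);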
-+
-+ /**
-+ * Schedules the chunk data to be written asynchronously.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
-+ * saves must be scheduled before a chunk is unloaded.
-+ * </li>
-+ * <li>
-+ * Writes may be called concurrently, although only the "later" write will go through.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param data Chunk's data
-+ * @param type The regionfile type to write to.
-+ * @param priority The minimum priority to schedule at.
-+ *
-+ * @throws IllegalStateException If the file io thread has shut down.
-+ */
-+ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
-+ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ thread.scheduleSaveInternal(world, chunkX, chunkZ, data, type, priority);
-+ }
-+
-+ void scheduleSaveInternal(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
-+ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+
-+ final boolean[] created = new boolean[1];
-+ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ final ChunkDataTask task = taskController.tasks.compute(key, (final ChunkCoordinate keyInMap, final ChunkDataTask taskRunning) -> {
-+ if (taskRunning == null || taskRunning.failedWrite) {
-+ // no task is scheduled or the previous write failed - meaning we need to overwrite it
-+
-+ // create task
-+ final ChunkDataTask newTask = new ChunkDataTask(world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority);
-+ newTask.inProgressWrite = data;
-+ created[0] = true;
-+
-+ return newTask;
-+ }
-+
-+ taskRunning.inProgressWrite = data;
-+
-+ return taskRunning;
-+ });
-+
-+ if (created[0]) {
-+ task.prioritisedTask.queue();
-+ } else {
-+ task.prioritisedTask.raisePriority(priority);
-+ }
-+ }
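scheduleSaveInternal leans on ConcurrentHashMap.compute to atomically either install a fresh task or retarget the pending write of an in-flight one, with the created flag reporting which path was taken so queueing happens after the map bin lock is released. The same insert-or-update idiom stripped to its core, with plain String data standing in for CompoundTag:

    import java.util.concurrent.ConcurrentHashMap;

    final class SaveQueue {

        static final class Task {
            volatile String pendingWrite;
        }

        private final ConcurrentHashMap<Long, Task> tasks = new ConcurrentHashMap<>();

        void scheduleSave(final long chunkKey, final String data) {
            final boolean[] created = new boolean[1];
            final Task task = this.tasks.compute(chunkKey, (key, running) -> {
                if (running == null) {
                    // nothing in flight: install a fresh task carrying the data
                    final Task newTask = new Task();
                    newTask.pendingWrite = data;
                    created[0] = true;
                    return newTask;
                }
                // task already in flight: the newer write simply wins
                running.pendingWrite = data;
                return running;
            });
            if (created[0]) {
                // stands in for task.prioritisedTask.queue(); done outside of
                // compute() so the map bin lock is not held while queueing
                System.out.println("queued " + task);
            }
        }
    }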
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
-+ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
-+ * for single load.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
-+ */
-+ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock) {
-+ return RegionFileIOThread.loadAllChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
-+ }
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
-+ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
-+ * for single load.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ * @param priority The minimum priority to load the data at.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
-+ */
-+ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
-+ final PrioritisedExecutor.Priority priority) {
-+ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, priority, CACHED_REGIONFILE_TYPES);
-+ }
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
-+ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
-+ * for single load.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ * @param types The regionfile type(s) to load.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
-+ */
-+ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
-+ final RegionFileType... types) {
-+ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL, types);
-+ }
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
-+ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
-+ * for single load.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ * @param priority The minimum priority to load the data at.
-+ * @param types The regionfile type(s) to load.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
-+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
-+ */
-+ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
-+ final PrioritisedExecutor.Priority priority, final RegionFileType... types) {
-+ if (types == null) {
-+ throw new NullPointerException("Types cannot be null");
-+ }
-+ if (types.length == 0) {
-+ throw new IllegalArgumentException("Types cannot be empty");
-+ }
-+
-+ final RegionFileData ret = new RegionFileData();
-+
-+ final Cancellable[] reads = new CancellableRead[types.length];
-+ final AtomicInteger completions = new AtomicInteger();
-+ final int expectedCompletions = types.length;
-+
-+ for (int i = 0; i < expectedCompletions; ++i) {
-+ final RegionFileType type = types[i];
-+ reads[i] = RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type,
-+ (final CompoundTag data, final Throwable throwable) -> {
-+ if (throwable != null) {
-+ ret.setThrowable(type, throwable);
-+ } else {
-+ ret.setData(type, data);
-+ }
-+
-+ if (completions.incrementAndGet() == expectedCompletions) {
-+ onComplete.accept(ret);
-+ }
-+ }, intendingToBlock, priority);
-+ }
-+
-+ return new CancellableReads(reads);
-+ }
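The fan-out above fires onComplete exactly once: each per-type load bumps a shared AtomicInteger, and whichever sub-task brings the count to the expected total runs the callback. The same completion-counting pattern in isolation:

    import java.util.concurrent.atomic.AtomicInteger;

    public final class FanIn {
        public static void main(final String[] args) {
            final int expectedCompletions = 3; // e.g. one per regionfile type
            final AtomicInteger completions = new AtomicInteger();
            for (int i = 0; i < expectedCompletions; ++i) {
                new Thread(() -> {
                    // ... each sub-task records its result into shared storage here ...
                    if (completions.incrementAndGet() == expectedCompletions) {
                        // only the last sub-task to finish observes the full count,
                        // so the "all done" callback fires exactly once
                        System.out.println("all " + expectedCompletions + " loads complete");
                    }
                }).start();
            }
        }
    }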
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
-+ * {@code onComplete}.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
-+ */
-+ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
-+ final boolean intendingToBlock) {
-+ return RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
-+ }
-+
-+ /**
-+ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
-+ * {@code onComplete}.
-+ *
-+ * Impl notes:
-+ * <li>
-+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
-+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
-+ * data is undefined behaviour, and can cause deadlock.
-+ * </li>
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param onComplete Consumer to execute once this task has completed
-+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
-+ * of this call.
-+ * @param priority Minimum priority to load the data at.
-+ *
-+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
-+ *
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
-+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
-+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
-+ */
-+ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
-+ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
-+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
-+ return thread.loadDataAsyncInternal(world, chunkX, chunkZ, type, onComplete, intendingToBlock, priority);
-+ }
-+
-+ private static Boolean doesRegionFileExist(final int chunkX, final int chunkZ, final boolean intendingToBlock,
-+ final ChunkDataController taskController) {
-+ final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
-+ if (intendingToBlock) {
-+ return taskController.computeForRegionFile(chunkX, chunkZ, true, (final RegionFile file) -> {
-+ if (file == null) { // null if no regionfile exists
-+ return Boolean.FALSE;
-+ }
-+
-+ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
-+ });
-+ } else {
-+ return taskController.computeForRegionFileIfLoaded(chunkX, chunkZ, (final RegionFile file) -> {
-+ if (file == null) { // null if not loaded
-+ return Boolean.TRUE;
-+ }
-+
-+ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
-+ });
-+ }
-+ }
-+
-+ Cancellable loadDataAsyncInternal(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
-+ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
-+ final ChunkDataController taskController = this.getControllerFor(world, type);
-+
-+ final ImmediateCallbackCompletion callbackInfo = new ImmediateCallbackCompletion();
-+
-+ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ final BiFunction<ChunkCoordinate, ChunkDataTask, ChunkDataTask> compute = (final ChunkCoordinate keyInMap, final ChunkDataTask running) -> {
-+ if (running == null) {
-+ // not scheduled
-+
-+ if (callbackInfo.regionFileCalculation == null) {
-+ // caller will compute this outside of compute(), to avoid holding the bin lock
-+ callbackInfo.needsRegionFileTest = true;
-+ return null;
-+ }
-+
-+ if (callbackInfo.regionFileCalculation == Boolean.FALSE) {
-+ // not on disk
-+ callbackInfo.data = null;
-+ callbackInfo.throwable = null;
-+ callbackInfo.completeNow = true;
-+ return null;
-+ }
-+
-+ // set up task
-+ final ChunkDataTask newTask = new ChunkDataTask(
-+ world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority
-+ );
-+ newTask.inProgressRead = new RegionFileIOThread.InProgressRead();
-+ newTask.inProgressRead.waiters.add(onComplete);
-+
-+ callbackInfo.tasksNeedsScheduling = true;
-+ return newTask;
-+ }
-+
-+ final CompoundTag pendingWrite = running.inProgressWrite;
-+
-+ if (pendingWrite == ChunkDataTask.NOTHING_TO_WRITE) {
-+ // need to add to waiters here, because the regionfile thread will use compute() to lock and check for cancellations
-+ if (!running.inProgressRead.addToWaiters(onComplete)) {
-+ callbackInfo.data = running.inProgressRead.value;
-+ callbackInfo.throwable = running.inProgressRead.throwable;
-+ callbackInfo.completeNow = true;
-+ }
-+ return running;
-+ }
-+ // using the result sync here - don't bump priority
-+
-+ // at this stage we have to use the in progress write's data to avoid an order issue
-+ callbackInfo.data = pendingWrite;
-+ callbackInfo.throwable = null;
-+ callbackInfo.completeNow = true;
-+ return running;
-+ };
-+
-+ ChunkDataTask curr = taskController.tasks.get(key);
-+ if (curr == null) {
-+ callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
-+ }
-+ ChunkDataTask ret = taskController.tasks.compute(key, compute);
-+ if (callbackInfo.needsRegionFileTest) {
-+ // curr isn't null but when we went into compute() it was
-+ callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
-+ // now it should be fine
-+ ret = taskController.tasks.compute(key, compute);
-+ }
-+
-+ // needs to be scheduled
-+ if (callbackInfo.tasksNeedsScheduling) {
-+ ret.prioritisedTask.queue();
-+ } else if (callbackInfo.completeNow) {
-+ try {
-+ onComplete.accept(callbackInfo.data, callbackInfo.throwable);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ LOGGER.error("Callback " + ConcurrentUtil.genericToString(onComplete) + " synchronously failed to handle chunk data for task " + ret.toString(), thr);
-+ }
-+ } else {
-+ // we're waiting on a task we didn't schedule, so raise its priority to what we want
-+ ret.prioritisedTask.raisePriority(priority);
-+ }
-+
-+ return new CancellableRead(onComplete, ret);
-+ }
-+
-+ /**
-+ * Schedules a load task to be executed asynchronously, and blocks on that task.
-+ *
-+ * @param world Chunk's world
-+ * @param chunkX Chunk's x coordinate
-+ * @param chunkZ Chunk's z coordinate
-+ * @param type Regionfile type
-+ * @param priority Minimum priority to load the data at.
-+ *
-+ * @return The chunk data for the chunk. Note that a {@code null} result means the chunk or regionfile does not exist on disk.
-+ *
-+ * @throws IOException If the load fails for any reason
-+ */
-+ public static CompoundTag loadData(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) throws IOException {
-+        final CompletableFuture<CompoundTag> ret = new CompletableFuture<>();
-+
-+ RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, (final CompoundTag compound, final Throwable thr) -> {
-+ if (thr != null) {
-+ ret.completeExceptionally(thr);
-+ } else {
-+ ret.complete(compound);
-+ }
-+ }, true, priority);
-+
-+ try {
-+ return ret.join();
-+ } catch (final CompletionException ex) {
-+ throw new IOException(ex);
-+ }
-+ }
-+
-+ private static final class ImmediateCallbackCompletion {
-+
-+ public CompoundTag data;
-+ public Throwable throwable;
-+ public boolean completeNow;
-+ public boolean tasksNeedsScheduling;
-+ public boolean needsRegionFileTest;
-+ public Boolean regionFileCalculation;
-+
-+ }
-+
-+ static final class CancellableRead implements Cancellable {
-+
-+        private BiConsumer<CompoundTag, Throwable> callback;
-+ private RegionFileIOThread.ChunkDataTask task;
-+
-+        CancellableRead(final BiConsumer<CompoundTag, Throwable> callback, final RegionFileIOThread.ChunkDataTask task) {
-+ this.callback = callback;
-+ this.task = task;
-+ }
-+
-+ @Override
-+ public boolean cancel() {
-+            final BiConsumer<CompoundTag, Throwable> callback = this.callback;
-+ final RegionFileIOThread.ChunkDataTask task = this.task;
-+
-+ if (callback == null || task == null) {
-+ return false;
-+ }
-+
-+ this.callback = null;
-+ this.task = null;
-+
-+ final RegionFileIOThread.InProgressRead read = task.inProgressRead;
-+
-+            // read can be null if no read was scheduled (i.e. no regionfile existed, or the chunk did not exist in the regionfile)
-+ return (read != null && read.waiters.remove(callback));
-+ }
-+ }
-+
-+ static final class CancellableReads implements Cancellable {
-+
-+ private Cancellable[] reads;
-+
-+ protected static final VarHandle READS_HANDLE = ConcurrentUtil.getVarHandle(CancellableReads.class, "reads", Cancellable[].class);
-+
-+ CancellableReads(final Cancellable[] reads) {
-+ this.reads = reads;
-+ }
-+
-+ @Override
-+ public boolean cancel() {
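-+            // atomically claim the array so that concurrent cancel() calls cancel the underlying reads at most once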
-+ final Cancellable[] reads = (Cancellable[])READS_HANDLE.getAndSet((CancellableReads)this, (Cancellable[])null);
-+
-+ if (reads == null) {
-+ return false;
-+ }
-+
-+ boolean ret = false;
-+
-+ for (final Cancellable read : reads) {
-+ ret |= read.cancel();
-+ }
-+
-+ return ret;
-+ }
-+ }
-+
-+ static final class InProgressRead {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ CompoundTag value;
-+ Throwable throwable;
-+        final MultiThreadedQueue<BiConsumer<CompoundTag, Throwable>> waiters = new MultiThreadedQueue<>();
-+
-+        // returns false if already completed (callback not invoked), true if the callback was added
-+        boolean addToWaiters(final BiConsumer<CompoundTag, Throwable> callback) {
-+ return this.waiters.add(callback);
-+ }
-+
-+ void complete(final RegionFileIOThread.ChunkDataTask task, final CompoundTag value, final Throwable throwable) {
-+ this.value = value;
-+ this.throwable = throwable;
-+
-+            BiConsumer<CompoundTag, Throwable> consumer;
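-+            // pollOrBlockAdds drains the queue and then blocks further additions, so a late addToWaiters()
-+            // call returns false and its caller invokes the callback synchronously with the completed values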
-+ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
-+ try {
-+ consumer.accept(value, throwable);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ LOGGER.error("Callback " + ConcurrentUtil.genericToString(consumer) + " failed to handle chunk data for task " + task.toString(), thr);
-+ }
-+ }
-+ }
-+ }
-+
-+ /**
-+ * Class exists to replace {@link Long} usages as keys inside non-fastutil hashtables. The hash for some Long {@code x}
-+ * is defined as {@code (x >>> 32) ^ x}. Chunk keys as long values are defined as {@code ((chunkX & 0xFFFFFFFFL) | (chunkZ << 32))},
-+     * which means the hashcode as a Long value will be {@code chunkX ^ chunkZ}. Given that most chunks are created within a radius around players,
-+ * this will lead to many hash collisions. So, this class uses a better hashing algorithm so that usage of
-+ * non-fastutil collections is not degraded.
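-+     * For example, chunks (5, 3) and (3, 5) both hash to {@code 5 ^ 3 == 6} under the Long hash, and
-+     * every chunk {@code (n, n)} on the diagonal hashes to {@code 0}.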
-+ */
-+    public static final class ChunkCoordinate implements Comparable<ChunkCoordinate> {
-+
-+ public final long key;
-+
-+ public ChunkCoordinate(final long key) {
-+ this.key = key;
-+ }
-+
-+ @Override
-+ public int hashCode() {
-+ return (int)HashCommon.mix(this.key);
-+ }
-+
-+ @Override
-+ public boolean equals(final Object obj) {
-+ if (this == obj) {
-+ return true;
-+ }
-+
-+ if (!(obj instanceof ChunkCoordinate)) {
-+ return false;
-+ }
-+
-+ final ChunkCoordinate other = (ChunkCoordinate)obj;
-+
-+ return this.key == other.key;
-+ }
-+
-+ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
-+ // is too large. So we should implement compareTo to help.
-+ @Override
-+ public int compareTo(final RegionFileIOThread.ChunkCoordinate other) {
-+ return Long.compare(this.key, other.key);
-+ }
-+
-+ @Override
-+ public String toString() {
-+ return new ChunkPos(this.key).toString();
-+ }
-+ }
-+
-+ public static abstract class ChunkDataController {
-+
-+        // ConcurrentHashMap synchronizes per chain; the large capacity and low load factor reduce the chance of tasks' hashes colliding
-+        protected final ConcurrentHashMap<ChunkCoordinate, ChunkDataTask> tasks = new ConcurrentHashMap<>(8192, 0.10f);
-+
-+ public final RegionFileType type;
-+
-+ public ChunkDataController(final RegionFileType type) {
-+ this.type = type;
-+ }
-+
-+ public abstract RegionFileStorage getCache();
-+
-+ public abstract void writeData(final int chunkX, final int chunkZ, final CompoundTag compound) throws IOException;
-+
-+ public abstract CompoundTag readData(final int chunkX, final int chunkZ) throws IOException;
-+
-+ public boolean hasTasks() {
-+ return !this.tasks.isEmpty();
-+ }
-+
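-+        // the region file's lock is acquired by getRegionFile (lock parameter set to true) and released in
-+        // the finally block, so the function runs with the file locked but without holding the cache monitor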
-+        public <T> T computeForRegionFile(final int chunkX, final int chunkZ, final boolean existingOnly, final Function<RegionFile, T> function) {
-+ final RegionFileStorage cache = this.getCache();
-+ final RegionFile regionFile;
-+ synchronized (cache) {
-+ try {
-+ regionFile = cache.getRegionFile(new ChunkPos(chunkX, chunkZ), existingOnly, true);
-+ } catch (final IOException ex) {
-+ throw new RuntimeException(ex);
-+ }
-+ }
-+
-+ try {
-+ return function.apply(regionFile);
-+ } finally {
-+ if (regionFile != null) {
-+ regionFile.fileLock.unlock();
-+ }
-+ }
-+ }
-+
-+        public <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function) {
-+ final RegionFileStorage cache = this.getCache();
-+ final RegionFile regionFile;
-+
-+ synchronized (cache) {
-+ regionFile = cache.getRegionFileIfLoaded(new ChunkPos(chunkX, chunkZ));
-+ if (regionFile != null) {
-+ regionFile.fileLock.lock();
-+ }
-+ }
-+
-+ try {
-+ return function.apply(regionFile);
-+ } finally {
-+ if (regionFile != null) {
-+ regionFile.fileLock.unlock();
-+ }
-+ }
-+ }
-+ }
-+
-+ static final class ChunkDataTask implements Runnable {
-+
-+ protected static final CompoundTag NOTHING_TO_WRITE = new CompoundTag();
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ RegionFileIOThread.InProgressRead inProgressRead;
-+ volatile CompoundTag inProgressWrite = NOTHING_TO_WRITE; // only needs to be acquire/release
-+
-+ boolean failedWrite;
-+
-+ final ServerLevel world;
-+ final int chunkX;
-+ final int chunkZ;
-+ final RegionFileIOThread.ChunkDataController taskController;
-+
-+ final PrioritisedExecutor.PrioritisedTask prioritisedTask;
-+
-+ /*
-+ * IO thread will perform reads before writes for a given chunk x and z
-+ *
-+ * How reads/writes are scheduled:
-+ *
-+ * If read is scheduled while scheduling write, take no special action and just schedule write
-+ * If read is scheduled while scheduling read and no write is scheduled, chain the read task
-+ *
-+ *
-+ * If write is scheduled while scheduling read, use the pending write data and ret immediately (so no read is scheduled)
-+ * If write is scheduled while scheduling write (ignore read in progress), overwrite the write in progress data
-+ *
-+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
-+ * it fails to properly propagate write failures thanks to writes overwriting each other
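-+         *
-+         * For example: a read scheduled while a write is pending completes immediately with the pending
-+         * write's data and never touches disk, while a failure writing data that has since been overwritten
-+         * is discarded when the newer data is retried.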
-+ */
-+
-+ public ChunkDataTask(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileIOThread.ChunkDataController taskController,
-+ final PrioritisedExecutor executor, final PrioritisedExecutor.Priority priority) {
-+ this.world = world;
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.taskController = taskController;
-+ this.prioritisedTask = executor.createTask(this, priority);
-+ }
-+
-+ @Override
-+ public String toString() {
-+ return "Task for world: '" + this.world.getWorld().getName() + "' at (" + this.chunkX + "," + this.chunkZ +
-+ ") type: " + this.taskController.type.name() + ", hash: " + this.hashCode();
-+ }
-+
-+ @Override
-+ public void run() {
-+ final RegionFileIOThread.InProgressRead read = this.inProgressRead;
-+ final ChunkCoordinate chunkKey = new ChunkCoordinate(CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ));
-+
-+ if (read != null) {
-+ final boolean[] canRead = new boolean[] { true };
-+
-+ if (read.waiters.isEmpty()) {
-+ // cancelled read? go to task controller to confirm
-+ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
-+ if (valueInMap == null) {
-+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
-+ }
-+ if (valueInMap != ChunkDataTask.this) {
-+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
-+ }
-+
-+ if (!read.waiters.isEmpty()) { // as per usual IntelliJ is unable to figure out that there are concurrent accesses.
-+ return valueInMap;
-+ } else {
-+ canRead[0] = false;
-+ }
-+
-+ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
-+ });
-+
-+ if (inMap == null) {
-+ // read is cancelled - and no write pending, so we're done
-+ return;
-+ }
-+ // if there is a write in progress, we don't actually have to worry about waiters gaining new entries -
-+ // the readers will just use the in progress write, so the value in canRead is good to use without
-+ // further synchronisation.
-+ }
-+
-+ if (canRead[0]) {
-+ CompoundTag compound = null;
-+ Throwable throwable = null;
-+
-+ try {
-+ compound = this.taskController.readData(this.chunkX, this.chunkZ);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ throwable = thr;
-+ LOGGER.error("Failed to read chunk data for task: " + this.toString(), thr);
-+ }
-+ read.complete(this, compound, throwable);
-+ }
-+ }
-+
-+ CompoundTag write = this.inProgressWrite;
-+
-+ if (write == NOTHING_TO_WRITE) {
-+ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
-+ if (valueInMap == null) {
-+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
-+ }
-+ if (valueInMap != ChunkDataTask.this) {
-+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
-+ }
-+ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
-+ });
-+
-+ if (inMap == null) {
-+                return; // the compute() above removed this task from the map, so we're done
-+ } // else: inProgressWrite changed, so now we have something to write
-+ }
-+
-+ for (;;) {
-+ write = this.inProgressWrite;
-+ final CompoundTag dataWritten = write;
-+
-+ boolean failedWrite = false;
-+
-+ try {
-+ this.taskController.writeData(this.chunkX, this.chunkZ, write);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ if (thr instanceof RegionFileStorage.RegionFileSizeException) {
-+ final int maxSize = RegionFile.MAX_CHUNK_SIZE / (1024 * 1024);
-+ LOGGER.error("Chunk at (" + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "' exceeds max size of " + maxSize + "MiB, it has been deleted from disk.");
-+ } else {
-+ failedWrite = thr instanceof IOException;
-+ LOGGER.error("Failed to write chunk data for task: " + this.toString(), thr);
-+ }
-+ }
-+
-+ final boolean finalFailWrite = failedWrite;
-+ final boolean[] done = new boolean[] { false };
-+
-+ this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
-+ if (valueInMap == null) {
-+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
-+ }
-+ if (valueInMap != ChunkDataTask.this) {
-+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
-+ }
-+ if (valueInMap.inProgressWrite == dataWritten) {
-+ valueInMap.failedWrite = finalFailWrite;
-+ done[0] = true;
-+ // keep the data in map if we failed the write so we can try to prevent data loss
-+ return finalFailWrite ? valueInMap : null;
-+ }
-+ // different data than expected, means we need to retry write
-+ return valueInMap;
-+ });
-+
-+ if (done[0]) {
-+ return;
-+ }
-+
-+ // fetch & write new data
-+ continue;
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..0b7a2b0ead4f3bc07bfd9a38c2b7cf024bd140c6
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
-@@ -0,0 +1,280 @@
-+package io.papermc.paper.chunk.system.light;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.starlight.common.light.BlockStarLightEngine;
-+import ca.spottedleaf.starlight.common.light.SkyStarLightEngine;
-+import ca.spottedleaf.starlight.common.light.StarLightInterface;
-+import io.papermc.paper.chunk.system.scheduling.ChunkTaskScheduler;
-+import io.papermc.paper.util.CoordinateUtils;
-+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
-+import it.unimi.dsi.fastutil.shorts.ShortCollection;
-+import it.unimi.dsi.fastutil.shorts.ShortOpenHashSet;
-+import net.minecraft.core.BlockPos;
-+import net.minecraft.core.SectionPos;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.ChunkPos;
-+import java.util.ArrayList;
-+import java.util.HashSet;
-+import java.util.List;
-+import java.util.Set;
-+import java.util.concurrent.CompletableFuture;
-+import java.util.function.BooleanSupplier;
-+
-+public final class LightQueue {
-+
-+    protected final Long2ObjectOpenHashMap<ChunkTasks> chunkTasks = new Long2ObjectOpenHashMap<>();
-+ protected final StarLightInterface manager;
-+ protected final ServerLevel world;
-+
-+ public LightQueue(final StarLightInterface manager) {
-+ this.manager = manager;
-+ this.world = ((ServerLevel)manager.getWorld());
-+ }
-+
-+ public void lowerPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ final ChunkTasks task;
-+ synchronized (this) {
-+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ if (task != null) {
-+ task.lowerPriority(priority);
-+ }
-+ }
-+
-+ public void setPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ final ChunkTasks task;
-+ synchronized (this) {
-+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ if (task != null) {
-+ task.setPriority(priority);
-+ }
-+ }
-+
-+ public void raisePriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ final ChunkTasks task;
-+ synchronized (this) {
-+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ if (task != null) {
-+ task.raisePriority(priority);
-+ }
-+ }
-+
-+ public PrioritisedExecutor.Priority getPriority(final int chunkX, final int chunkZ) {
-+ final ChunkTasks task;
-+ synchronized (this) {
-+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+ if (task != null) {
-+ return task.getPriority();
-+ }
-+
-+ return PrioritisedExecutor.Priority.COMPLETING;
-+ }
-+
-+ public boolean isEmpty() {
-+ synchronized (this) {
-+ return this.chunkTasks.isEmpty();
-+ }
-+ }
-+
-+    public CompletableFuture<Void> queueBlockChange(final BlockPos pos) {
-+ final ChunkTasks tasks;
-+ synchronized (this) {
-+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
-+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
-+ });
-+ tasks.changedPositions.add(pos.immutable());
-+ }
-+
-+ tasks.schedule();
-+
-+ return tasks.onComplete;
-+ }
-+
-+    public CompletableFuture<Void> queueSectionChange(final SectionPos pos, final boolean newEmptyValue) {
-+ final ChunkTasks tasks;
-+ synchronized (this) {
-+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
-+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
-+ });
-+
-+ if (tasks.changedSectionSet == null) {
-+ tasks.changedSectionSet = new Boolean[this.manager.maxSection - this.manager.minSection + 1];
-+ }
-+ tasks.changedSectionSet[pos.getY() - this.manager.minSection] = Boolean.valueOf(newEmptyValue);
-+ }
-+
-+ tasks.schedule();
-+
-+ return tasks.onComplete;
-+ }
-+
-+    public CompletableFuture<Void> queueChunkLightTask(final ChunkPos pos, final BooleanSupplier lightTask, final PrioritisedExecutor.Priority priority) {
-+ final ChunkTasks tasks;
-+ synchronized (this) {
-+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
-+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this, priority);
-+ });
-+ if (tasks.lightTasks == null) {
-+ tasks.lightTasks = new ArrayList<>();
-+ }
-+ tasks.lightTasks.add(lightTask);
-+ }
-+
-+ tasks.schedule();
-+
-+ return tasks.onComplete;
-+ }
-+
-+    public CompletableFuture<Void> queueChunkSkylightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
-+ final ChunkTasks tasks;
-+ synchronized (this) {
-+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
-+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
-+ });
-+
-+ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksSky;
-+ if (queuedEdges == null) {
-+ queuedEdges = tasks.queuedEdgeChecksSky = new ShortOpenHashSet();
-+ }
-+ queuedEdges.addAll(sections);
-+ }
-+
-+ tasks.schedule();
-+
-+ return tasks.onComplete;
-+ }
-+
-+    public CompletableFuture<Void> queueChunkBlocklightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
-+ final ChunkTasks tasks;
-+
-+ synchronized (this) {
-+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
-+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
-+ });
-+
-+ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksBlock;
-+ if (queuedEdges == null) {
-+ queuedEdges = tasks.queuedEdgeChecksBlock = new ShortOpenHashSet();
-+ }
-+ queuedEdges.addAll(sections);
-+ }
-+
-+ tasks.schedule();
-+
-+ return tasks.onComplete;
-+ }
-+
-+ public void removeChunk(final ChunkPos pos) {
-+ final ChunkTasks tasks;
-+ synchronized (this) {
-+ tasks = this.chunkTasks.remove(CoordinateUtils.getChunkKey(pos));
-+ }
-+ if (tasks != null && tasks.cancel()) {
-+ tasks.onComplete.complete(null);
-+ }
-+ }
-+
-+ protected static final class ChunkTasks implements Runnable {
-+
-+        final Set<BlockPos> changedPositions = new HashSet<>();
-+ Boolean[] changedSectionSet;
-+ ShortOpenHashSet queuedEdgeChecksSky;
-+ ShortOpenHashSet queuedEdgeChecksBlock;
-+        List<BooleanSupplier> lightTasks;
-+
-+        final CompletableFuture<Void> onComplete = new CompletableFuture<>();
-+
-+ public final long chunkCoordinate;
-+ private final StarLightInterface lightEngine;
-+ private final LightQueue queue;
-+ private final PrioritisedExecutor.PrioritisedTask task;
-+
-+ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue) {
-+ this(chunkCoordinate, lightEngine, queue, PrioritisedExecutor.Priority.NORMAL);
-+ }
-+
-+ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue,
-+ final PrioritisedExecutor.Priority priority) {
-+ this.chunkCoordinate = chunkCoordinate;
-+ this.lightEngine = lightEngine;
-+ this.queue = queue;
-+ this.task = queue.world.chunkTaskScheduler.lightExecutor.createTask(this, priority);
-+ }
-+
-+ public void schedule() {
-+ this.task.queue();
-+ }
-+
-+ public boolean cancel() {
-+ return this.task.cancel();
-+ }
-+
-+ public PrioritisedExecutor.Priority getPriority() {
-+ return this.task.getPriority();
-+ }
-+
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ this.task.lowerPriority(priority);
-+ }
-+
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ this.task.setPriority(priority);
-+ }
-+
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ this.task.raisePriority(priority);
-+ }
-+
-+ @Override
-+ public void run() {
-+ final SkyStarLightEngine skyEngine = this.lightEngine.getSkyLightEngine();
-+ final BlockStarLightEngine blockEngine = this.lightEngine.getBlockLightEngine();
-+ try {
-+ synchronized (this.queue) {
-+ this.queue.chunkTasks.remove(this.chunkCoordinate);
-+ }
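-+                // the task was removed from the map before running, so light updates queued from this point
-+                // on will create and schedule a fresh ChunkTasks instance rather than racing with this run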
-+
-+ boolean litChunk = false;
-+ if (this.lightTasks != null) {
-+ for (final BooleanSupplier run : this.lightTasks) {
-+ if (run.getAsBoolean()) {
-+ litChunk = true;
-+ break;
-+ }
-+ }
-+ }
-+
-+ final long coordinate = this.chunkCoordinate;
-+ final int chunkX = CoordinateUtils.getChunkX(coordinate);
-+ final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
-+
-+                final Set<BlockPos> positions = this.changedPositions;
-+ final Boolean[] sectionChanges = this.changedSectionSet;
-+
-+ if (!litChunk) {
-+ if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
-+ skyEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
-+ }
-+ if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
-+ blockEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
-+ }
-+
-+ if (skyEngine != null && this.queuedEdgeChecksSky != null) {
-+ skyEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksSky);
-+ }
-+ if (blockEngine != null && this.queuedEdgeChecksBlock != null) {
-+ blockEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksBlock);
-+ }
-+ }
-+
-+ this.onComplete.complete(null);
-+ } finally {
-+ this.lightEngine.releaseSkyLightEngine(skyEngine);
-+ this.lightEngine.releaseBlockLightEngine(blockEngine);
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..d72041aa814ff179e6e29a45dcd359a91d426d47
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
-@@ -0,0 +1,213 @@
-+package io.papermc.paper.chunk.system.poi;
-+
-+import com.mojang.logging.LogUtils;
-+import com.mojang.serialization.Codec;
-+import com.mojang.serialization.DataResult;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import io.papermc.paper.util.WorldUtil;
-+import net.minecraft.SharedConstants;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.nbt.NbtOps;
-+import net.minecraft.nbt.Tag;
-+import net.minecraft.resources.RegistryOps;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.entity.ai.village.poi.PoiManager;
-+import net.minecraft.world.entity.ai.village.poi.PoiSection;
-+import org.slf4j.Logger;
-+
-+import java.util.Optional;
-+
-+public final class PoiChunk {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ public final ServerLevel world;
-+ public final int chunkX;
-+ public final int chunkZ;
-+ public final int minSection;
-+ public final int maxSection;
-+
-+ protected final PoiSection[] sections;
-+
-+ private boolean isDirty;
-+ private boolean loaded;
-+
-+ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection) {
-+ this(world, chunkX, chunkZ, minSection, maxSection, new PoiSection[maxSection - minSection + 1]);
-+ }
-+
-+ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection, final PoiSection[] sections) {
-+ this.world = world;
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.minSection = minSection;
-+ this.maxSection = maxSection;
-+ this.sections = sections;
-+ if (this.sections.length != (maxSection - minSection + 1)) {
-+ throw new IllegalStateException("Incorrect length used, expected " + (maxSection - minSection + 1) + ", got " + this.sections.length);
-+ }
-+ }
-+
-+ public void load() {
-+ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Loading in poi chunk off-main");
-+ if (this.loaded) {
-+ return;
-+ }
-+ this.loaded = true;
-+ this.world.chunkSource.getPoiManager().loadInPoiChunk(this);
-+ }
-+
-+ public boolean isLoaded() {
-+ return this.loaded;
-+ }
-+
-+ public boolean isEmpty() {
-+ for (final PoiSection section : this.sections) {
-+ if (section != null && !section.isEmpty()) {
-+ return false;
-+ }
-+ }
-+
-+ return true;
-+ }
-+
-+ public PoiSection getOrCreateSection(final int chunkY) {
-+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
-+ final int idx = chunkY - this.minSection;
-+ final PoiSection ret = this.sections[idx];
-+ if (ret != null) {
-+ return ret;
-+ }
-+
-+ final PoiManager poiManager = this.world.getPoiManager();
-+ final long key = CoordinateUtils.getChunkSectionKey(this.chunkX, chunkY, this.chunkZ);
-+
-+ return this.sections[idx] = new PoiSection(() -> {
-+ poiManager.setDirty(key);
-+ });
-+ }
-+ throw new IllegalArgumentException("chunkY is out of bounds, chunkY: " + chunkY + " outside [" + this.minSection + "," + this.maxSection + "]");
-+ }
-+
-+ public PoiSection getSection(final int chunkY) {
-+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
-+ return this.sections[chunkY - this.minSection];
-+ }
-+ return null;
-+ }
-+
-+    public Optional<PoiSection> getSectionForVanilla(final int chunkY) {
-+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
-+ final PoiSection ret = this.sections[chunkY - this.minSection];
-+ return ret == null ? Optional.empty() : ret.noAllocateOptional;
-+ }
-+ return Optional.empty();
-+ }
-+
-+ public boolean isDirty() {
-+ return this.isDirty;
-+ }
-+
-+ public void setDirty(final boolean dirty) {
-+ this.isDirty = dirty;
-+ }
-+
-+ // returns null if empty
-+ public CompoundTag save() {
-+        final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
-+
-+ final CompoundTag ret = new CompoundTag();
-+ final CompoundTag sections = new CompoundTag();
-+ ret.put("Sections", sections);
-+
-+ ret.putInt("DataVersion", SharedConstants.getCurrentVersion().getDataVersion().getVersion());
-+
-+ final ServerLevel world = this.world;
-+ final PoiManager poiManager = world.getPoiManager();
-+ final int chunkX = this.chunkX;
-+ final int chunkZ = this.chunkZ;
-+
-+ for (int sectionY = this.minSection; sectionY <= this.maxSection; ++sectionY) {
-+ final PoiSection chunk = this.sections[sectionY - this.minSection];
-+ if (chunk == null || chunk.isEmpty()) {
-+ continue;
-+ }
-+
-+ final long key = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
-+ // codecs are honestly such a fucking disaster. What the fuck is this trash?
-+            final Codec<PoiSection> codec = PoiSection.codec(() -> {
-+ poiManager.setDirty(key);
-+ });
-+
-+            final DataResult<Tag> serializedResult = codec.encodeStart(registryOps, chunk);
-+ final int finalSectionY = sectionY;
-+ final Tag serialized = serializedResult.resultOrPartial((final String description) -> {
-+ LOGGER.error("Failed to serialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
-+ }).orElse(null);
-+ if (serialized == null) {
-+ // failed, should be logged from the resultOrPartial
-+ continue;
-+ }
-+
-+ sections.put(Integer.toString(sectionY), serialized);
-+ }
-+
-+ return sections.isEmpty() ? null : ret;
-+ }
-+
-+ public static PoiChunk empty(final ServerLevel world, final int chunkX, final int chunkZ) {
-+ final PoiChunk ret = new PoiChunk(world, chunkX, chunkZ, WorldUtil.getMinSection(world), WorldUtil.getMaxSection(world));
-+ ret.loaded = true;
-+ return ret;
-+ }
-+
-+ public static PoiChunk parse(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data) {
-+ final PoiChunk ret = empty(world, chunkX, chunkZ);
-+
-+        final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
-+
-+ final CompoundTag sections = data.getCompound("Sections");
-+
-+ if (sections.isEmpty()) {
-+ // nothing to parse
-+ return ret;
-+ }
-+
-+ final PoiManager poiManager = world.getPoiManager();
-+
-+ boolean readAnything = false;
-+
-+ for (int sectionY = ret.minSection; sectionY <= ret.maxSection; ++sectionY) {
-+ final String key = Integer.toString(sectionY);
-+ if (!sections.contains(key)) {
-+ continue;
-+ }
-+
-+ final long coordinateKey = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
-+ // codecs are honestly such a fucking disaster. What the fuck is this trash?
-+            final Codec<PoiSection> codec = PoiSection.codec(() -> {
-+ poiManager.setDirty(coordinateKey);
-+ });
-+
-+ final CompoundTag section = sections.getCompound(key);
-+            final DataResult<PoiSection> deserializeResult = codec.parse(registryOps, section);
-+ final int finalSectionY = sectionY;
-+ final PoiSection deserialized = deserializeResult.resultOrPartial((final String description) -> {
-+ LOGGER.error("Failed to deserialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
-+ }).orElse(null);
-+
-+ if (deserialized == null || deserialized.isEmpty()) {
-+ // completely empty, no point in storing this
-+ continue;
-+ }
-+
-+ readAnything = true;
-+ ret.sections[sectionY - ret.minSection] = deserialized;
-+ }
-+
-+ ret.loaded = !readAnything; // Set loaded to false if we read anything to ensure proper callbacks to PoiManager are made on #load
-+
-+ return ret;
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..fb42d776f15f735fb59e972e00e2b512c23a8387
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
-@@ -0,0 +1,121 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.ImposterProtoChunk;
-+import net.minecraft.world.level.chunk.LevelChunk;
-+import net.minecraft.world.level.chunk.ProtoChunk;
-+import java.lang.invoke.VarHandle;
-+
-+public final class ChunkFullTask extends ChunkProgressionTask implements Runnable {
-+
-+ protected final NewChunkHolder chunkHolder;
-+ protected final ChunkAccess fromChunk;
-+ protected final PrioritisedExecutor.PrioritisedTask convertToFullTask;
-+
-+ public ChunkFullTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
-+ final NewChunkHolder chunkHolder, final ChunkAccess fromChunk, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ);
-+ this.chunkHolder = chunkHolder;
-+ this.fromChunk = fromChunk;
-+ this.convertToFullTask = scheduler.createChunkTask(chunkX, chunkZ, this, priority);
-+ }
-+
-+ @Override
-+ public ChunkStatus getTargetStatus() {
-+ return ChunkStatus.FULL;
-+ }
-+
-+ @Override
-+ public void run() {
-+ // See Vanilla protoChunkToFullChunk for what this function should be doing
-+ final LevelChunk chunk;
-+ try {
-+ if (this.fromChunk instanceof ImposterProtoChunk wrappedFull) {
-+ chunk = wrappedFull.getWrapped();
-+ } else {
-+ final ServerLevel world = this.world;
-+ final ProtoChunk protoChunk = (ProtoChunk)this.fromChunk;
-+ chunk = new LevelChunk(this.world, protoChunk, (final LevelChunk unused) -> {
-+ ChunkMap.postLoadProtoChunk(world, protoChunk.getEntities());
-+ });
-+ }
-+
-+ chunk.setChunkHolder(this.scheduler.chunkHolderManager.getChunkHolder(this.chunkX, this.chunkZ)); // replaces setFullStatus
-+ chunk.runPostLoad();
-+ // Unlike Vanilla, we load the entity chunk here, as we load the NBT in empty status (unlike Vanilla)
-+ // This brings entity addition back in line with older versions of the game
-+ // Since we load the NBT in the empty status, this will never block for I/O
-+ this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(this.chunkX, this.chunkZ, false);
-+
-+ // we don't need the entitiesInLevel trash, this system doesn't double run callbacks
-+ chunk.setLoaded(true);
-+ chunk.registerAllBlockEntitiesAfterLevelLoad();
-+ chunk.registerTickContainerInLevel(this.world);
-+ } catch (final Throwable throwable) {
-+ this.complete(null, throwable);
-+
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ return;
-+ }
-+ this.complete(chunk, null);
-+ }
-+
-+ protected volatile boolean scheduled;
-+ protected static final VarHandle SCHEDULED_HANDLE = ConcurrentUtil.getVarHandle(ChunkFullTask.class, "scheduled", boolean.class);
-+
-+ @Override
-+ public boolean isScheduled() {
-+ return this.scheduled;
-+ }
-+
-+ @Override
-+ public void schedule() {
-+ if ((boolean)SCHEDULED_HANDLE.getAndSet((ChunkFullTask)this, true)) {
-+ throw new IllegalStateException("Cannot double call schedule()");
-+ }
-+ this.convertToFullTask.queue();
-+ }
-+
-+ @Override
-+ public void cancel() {
-+ if (this.convertToFullTask.cancel()) {
-+ this.complete(null, null);
-+ }
-+ }
-+
-+ @Override
-+ public PrioritisedExecutor.Priority getPriority() {
-+ return this.convertToFullTask.getPriority();
-+ }
-+
-+ @Override
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.convertToFullTask.lowerPriority(priority);
-+ }
-+
-+ @Override
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.convertToFullTask.setPriority(priority);
-+ }
-+
-+ @Override
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.convertToFullTask.raisePriority(priority);
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..748cc48c6c42c694d1c9b685e96fbe6d8337d3f3
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
-@@ -0,0 +1,1211 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.map.SWMRLong2ObjectHashTable;
-+import co.aikar.timings.Timing;
-+import com.google.common.collect.ImmutableList;
-+import com.google.gson.JsonArray;
-+import com.google.gson.JsonObject;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
-+import io.papermc.paper.chunk.system.poi.PoiChunk;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import io.papermc.paper.util.misc.Delayed8WayDistancePropagator2D;
-+import io.papermc.paper.world.ChunkEntitySlices;
-+import it.unimi.dsi.fastutil.longs.Long2IntLinkedOpenHashMap;
-+import it.unimi.dsi.fastutil.longs.Long2IntMap;
-+import it.unimi.dsi.fastutil.longs.Long2IntOpenHashMap;
-+import it.unimi.dsi.fastutil.longs.Long2ObjectMap;
-+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
-+import it.unimi.dsi.fastutil.longs.LongArrayList;
-+import it.unimi.dsi.fastutil.longs.LongIterator;
-+import it.unimi.dsi.fastutil.objects.ObjectRBTreeSet;
-+import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
-+import net.minecraft.nbt.CompoundTag;
-+import io.papermc.paper.chunk.system.ChunkSystem;
-+import net.minecraft.server.MinecraftServer;
-+import net.minecraft.server.level.ChunkHolder;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.server.level.Ticket;
-+import net.minecraft.server.level.TicketType;
-+import net.minecraft.util.SortedArraySet;
-+import net.minecraft.util.Unit;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import org.bukkit.plugin.Plugin;
-+import org.slf4j.Logger;
-+import java.io.IOException;
-+import java.text.DecimalFormat;
-+import java.util.ArrayDeque;
-+import java.util.ArrayList;
-+import java.util.Collection;
-+import java.util.Collections;
-+import java.util.Iterator;
-+import java.util.List;
-+import java.util.Objects;
-+import java.util.concurrent.TimeUnit;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.concurrent.atomic.AtomicReference;
-+import java.util.concurrent.locks.LockSupport;
-+import java.util.concurrent.locks.ReentrantLock;
-+import java.util.function.Predicate;
-+
-+public final class ChunkHolderManager {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ public static final int FULL_LOADED_TICKET_LEVEL = 33;
-+ public static final int BLOCK_TICKING_TICKET_LEVEL = 32;
-+ public static final int ENTITY_TICKING_TICKET_LEVEL = 31;
-+ public static final int MAX_TICKET_LEVEL = ChunkMap.MAX_CHUNK_DISTANCE; // inclusive
-+
-+ private static final long NO_TIMEOUT_MARKER = -1L;
-+
-+ final ReentrantLock ticketLock = new ReentrantLock();
-+
-+    private final SWMRLong2ObjectHashTable<NewChunkHolder> chunkHolders = new SWMRLong2ObjectHashTable<>(16384, 0.25f);
-+    private final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> tickets = new Long2ObjectOpenHashMap<>(8192, 0.25f);
-+    // what a disaster of a name
-+    // this is a map of removal tick to a map of chunks and the number of tickets a chunk has that are to expire that tick
-+    private final Long2ObjectOpenHashMap<Long2IntOpenHashMap> removeTickToChunkExpireTicketCount = new Long2ObjectOpenHashMap<>();
-+ private final ServerLevel world;
-+ private final ChunkTaskScheduler taskScheduler;
-+ private long currentTick;
-+
-+    private final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = new ArrayDeque<>();
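-+    // ordered by last auto-save tick, with ties broken by chunk coordinate; distinct holders never compare equal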
-+    private final ObjectRBTreeSet<NewChunkHolder> autoSaveQueue = new ObjectRBTreeSet<>((final NewChunkHolder c1, final NewChunkHolder c2) -> {
-+ if (c1 == c2) {
-+ return 0;
-+ }
-+
-+ final int saveTickCompare = Long.compare(c1.lastAutoSave, c2.lastAutoSave);
-+
-+ if (saveTickCompare != 0) {
-+ return saveTickCompare;
-+ }
-+
-+ final long coord1 = CoordinateUtils.getChunkKey(c1.chunkX, c1.chunkZ);
-+ final long coord2 = CoordinateUtils.getChunkKey(c2.chunkX, c2.chunkZ);
-+
-+ if (coord1 == coord2) {
-+ throw new IllegalStateException("Duplicate chunkholder in auto save queue");
-+ }
-+
-+ return Long.compare(coord1, coord2);
-+ });
-+
-+ public ChunkHolderManager(final ServerLevel world, final ChunkTaskScheduler taskScheduler) {
-+ this.world = world;
-+ this.taskScheduler = taskScheduler;
-+ }
-+
-+ private long statusUpgradeId;
-+
-+ long getNextStatusUpgradeId() {
-+ return ++this.statusUpgradeId;
-+ }
-+
-+    public List<ChunkHolder> getOldChunkHolders() {
-+        final List<NewChunkHolder> holders = this.getChunkHolders();
-+        final List<ChunkHolder> ret = new ArrayList<>(holders.size());
-+ for (final NewChunkHolder holder : holders) {
-+ ret.add(holder.vanillaChunkHolder);
-+ }
-+ return ret;
-+ }
-+
-+    public List<NewChunkHolder> getChunkHolders() {
-+        final List<NewChunkHolder> ret = new ArrayList<>(this.chunkHolders.size());
-+ this.chunkHolders.forEachValue(ret::add);
-+ return ret;
-+ }
-+
-+ public int size() {
-+ return this.chunkHolders.size();
-+ }
-+
-+ public void close(final boolean save, final boolean halt) {
-+ TickThread.ensureTickThread("Closing world off-main");
-+ if (halt) {
-+ LOGGER.info("Waiting 60s for chunk system to halt for world '" + this.world.getWorld().getName() + "'");
-+ if (!this.taskScheduler.halt(true, TimeUnit.SECONDS.toNanos(60L))) {
-+ LOGGER.warn("Failed to halt world generation/loading tasks for world '" + this.world.getWorld().getName() + "'");
-+ } else {
-+ LOGGER.info("Halted chunk system for world '" + this.world.getWorld().getName() + "'");
-+ }
-+ }
-+
-+ if (save) {
-+ this.saveAllChunks(true, true, true);
-+ }
-+
-+ if (this.world.chunkDataControllerNew.hasTasks() || this.world.entityDataControllerNew.hasTasks() || this.world.poiDataControllerNew.hasTasks()) {
-+ RegionFileIOThread.flush();
-+ }
-+
-+ // kill regionfile cache
-+ try {
-+ this.world.chunkDataControllerNew.getCache().close();
-+ } catch (final IOException ex) {
-+ LOGGER.error("Failed to close chunk regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
-+ }
-+ try {
-+ this.world.entityDataControllerNew.getCache().close();
-+ } catch (final IOException ex) {
-+ LOGGER.error("Failed to close entity regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
-+ }
-+ try {
-+ this.world.poiDataControllerNew.getCache().close();
-+ } catch (final IOException ex) {
-+ LOGGER.error("Failed to close poi regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
-+ }
-+ }
-+
-+ void ensureInAutosave(final NewChunkHolder holder) {
-+ if (!this.autoSaveQueue.contains(holder)) {
-+ holder.lastAutoSave = MinecraftServer.currentTick;
-+ this.autoSaveQueue.add(holder);
-+ }
-+ }
-+
-+ public void autoSave() {
-+        final List<NewChunkHolder> reschedule = new ArrayList<>();
-+ final long currentTick = MinecraftServer.currentTickLong;
-+ final long maxSaveTime = currentTick - this.world.paperConfig().chunks.autoSaveInterval.value();
-+ for (int autoSaved = 0; autoSaved < this.world.paperConfig().chunks.maxAutoSaveChunksPerTick && !this.autoSaveQueue.isEmpty();) {
-+ final NewChunkHolder holder = this.autoSaveQueue.first();
-+
-+ if (holder.lastAutoSave > maxSaveTime) {
-+ break;
-+ }
-+
-+ this.autoSaveQueue.remove(holder);
-+
-+ holder.lastAutoSave = currentTick;
-+ if (holder.save(false, false) != null) {
-+ ++autoSaved;
-+ }
-+
-+ if (holder.getChunkStatus().isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ reschedule.add(holder);
-+ }
-+ }
-+
-+ for (final NewChunkHolder holder : reschedule) {
-+ if (holder.getChunkStatus().isOrAfter(ChunkHolder.FullChunkStatus.BORDER)) {
-+ this.autoSaveQueue.add(holder);
-+ }
-+ }
-+ }
-+
-+ public void saveAllChunks(final boolean flush, final boolean shutdown, final boolean logProgress) {
-+        final List<NewChunkHolder> holders = this.getChunkHolders();
-+
-+ if (logProgress) {
-+ LOGGER.info("Saving all chunkholders for world '" + this.world.getWorld().getName() + "'");
-+ }
-+
-+ final DecimalFormat format = new DecimalFormat("#0.00");
-+
-+ int saved = 0;
-+
-+ long start = System.nanoTime();
-+ long lastLog = start;
-+ boolean needsFlush = false;
-+ final int flushInterval = 50;
-+
-+ int savedChunk = 0;
-+ int savedEntity = 0;
-+ int savedPoi = 0;
-+
-+ for (int i = 0, len = holders.size(); i < len; ++i) {
-+ final NewChunkHolder holder = holders.get(i);
-+ try {
-+ final NewChunkHolder.SaveStat saveStat = holder.save(shutdown, false);
-+ if (saveStat != null) {
-+ ++saved;
-+ needsFlush = flush;
-+ if (saveStat.savedChunk()) {
-+ ++savedChunk;
-+ }
-+ if (saveStat.savedEntityChunk()) {
-+ ++savedEntity;
-+ }
-+ if (saveStat.savedPoiChunk()) {
-+ ++savedPoi;
-+ }
-+ }
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ LOGGER.error("Failed to save chunk (" + holder.chunkX + "," + holder.chunkZ + ") in world '" + this.world.getWorld().getName() + "'", thr);
-+ }
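-+            // periodically drain the I/O thread during a full save so pending writes do not pile up in memory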
-+ if (needsFlush && (saved % flushInterval) == 0) {
-+ needsFlush = false;
-+ RegionFileIOThread.partialFlush(flushInterval / 2);
-+ }
-+ if (logProgress) {
-+ final long currTime = System.nanoTime();
-+ if ((currTime - lastLog) > TimeUnit.SECONDS.toNanos(10L)) {
-+ lastLog = currTime;
-+ LOGGER.info("Saved " + saved + " chunks (" + format.format((double)(i+1)/(double)len * 100.0) + "%) in world '" + this.world.getWorld().getName() + "'");
-+ }
-+ }
-+ }
-+ if (flush) {
-+ RegionFileIOThread.flush();
-+ if (this.world.paperConfig().chunks.flushRegionsOnSave) {
-+ try {
-+ this.world.chunkSource.chunkMap.regionFileCache.flush();
-+ } catch (IOException ex) {
-+ LOGGER.error("Exception when flushing regions in world {}", this.world.getWorld().getName(), ex);
-+ }
-+ }
-+ }
-+ if (logProgress) {
-+ LOGGER.info("Saved " + savedChunk + " block chunks, " + savedEntity + " entity chunks, " + savedPoi + " poi chunks in world '" + this.world.getWorld().getName() + "' in " + format.format(1.0E-9 * (System.nanoTime() - start)) + "s");
-+ }
-+ }
-+
-+ protected final Long2IntLinkedOpenHashMap ticketLevelUpdates = new Long2IntLinkedOpenHashMap() {
-+ @Override
-+ protected void rehash(final int newN) {
-+ // no downsizing allowed
-+ if (newN < this.n) {
-+ return;
-+ }
-+ super.rehash(newN);
-+ }
-+ };
-+
-+ protected final Delayed8WayDistancePropagator2D ticketLevelPropagator = new Delayed8WayDistancePropagator2D(
-+ (final long coordinate, final byte oldLevel, final byte newLevel) -> {
-+ ChunkHolderManager.this.ticketLevelUpdates.putAndMoveToLast(coordinate, convertBetweenTicketLevels(newLevel));
-+ }
-+ );
-+ // function for converting between ticket levels and propagator levels and vice versa
-+ // the problem is the ticket level propagator will propagate from a set source down to zero, whereas mojang expects
-+ // levels to propagate from a set value up to a maximum value. so we need to convert the levels we put into the propagator
-+ // and the levels we get out of the propagator
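-+    // e.g. the strongest ticket level (0) maps to propagator level MAX_CHUNK_DISTANCE + 1, while levels past
-+    // MAX_CHUNK_DISTANCE would map to <= 0 - which is why updateTicketLevel removes the source instead.
-+    // The conversion is an involution (convert(convert(x)) == x), so one function handles both directions.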
-+
-+ public static int convertBetweenTicketLevels(final int level) {
-+ return ChunkMap.MAX_CHUNK_DISTANCE - level + 1;
-+ }
-+
-+ public boolean hasTickets() {
-+ this.ticketLock.lock();
-+ try {
-+ return !this.tickets.isEmpty();
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+ public String getTicketDebugString(final long coordinate) {
-+ this.ticketLock.lock();
-+ try {
-+            final SortedArraySet<Ticket<?>> tickets = this.tickets.get(coordinate);
-+
-+ return tickets != null ? tickets.first().toString() : "no_ticket";
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+    public Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> getTicketsCopy() {
-+ this.ticketLock.lock();
-+ try {
-+ return this.tickets.clone();
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+    public Collection<Plugin> getPluginChunkTickets(int x, int z) {
-+        ImmutableList.Builder<Plugin> ret;
-+ this.ticketLock.lock();
-+ try {
-+            SortedArraySet<Ticket<?>> tickets = this.tickets.get(ChunkPos.asLong(x, z));
-+
-+ if (tickets == null) {
-+ return Collections.emptyList();
-+ }
-+
-+ ret = ImmutableList.builder();
-+            for (Ticket<?> ticket : tickets) {
-+ if (ticket.getType() == TicketType.PLUGIN_TICKET) {
-+ ret.add((Plugin)ticket.key);
-+ }
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+
-+ return ret.build();
-+ }
-+
-+ protected final int getPropagatedTicketLevel(final long coordinate) {
-+ return convertBetweenTicketLevels(this.ticketLevelPropagator.getLevel(coordinate));
-+ }
-+
-+ protected final void updateTicketLevel(final long coordinate, final int ticketLevel) {
-+ if (ticketLevel > ChunkMap.MAX_CHUNK_DISTANCE) {
-+ this.ticketLevelPropagator.removeSource(coordinate);
-+ } else {
-+ this.ticketLevelPropagator.setSource(coordinate, convertBetweenTicketLevels(ticketLevel));
-+ }
-+ }
-+
-+    private static int getTicketLevelAt(SortedArraySet<Ticket<?>> tickets) {
-+ return !tickets.isEmpty() ? tickets.first().getTicketLevel() : MAX_TICKET_LEVEL + 1;
-+ }
-+
-+    public <T> boolean addTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level,
-+ final T identifier) {
-+ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
-+ }
-+
-+    public <T> boolean addTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level,
-+ final T identifier) {
-+ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
-+ }
-+
-+ // supposed to return true if the ticket was added and did not replace another
-+ // but, we always return false if the ticket cannot be added
-+    public <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
-+ final long removeDelay = Math.max(0, type.timeout);
-+ if (level > MAX_TICKET_LEVEL) {
-+ return false;
-+ }
-+
-+ this.ticketLock.lock();
-+ try {
-+ final long removeTick = removeDelay == 0 ? NO_TIMEOUT_MARKER : this.currentTick + removeDelay;
-+            final Ticket<T> ticket = new Ticket<>(type, level, identifier, removeTick);
-+
-+            final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.computeIfAbsent(chunk, (final long keyInMap) -> {
-+ return SortedArraySet.create(4);
-+ });
-+
-+ final int levelBefore = getTicketLevelAt(ticketsAtChunk);
-+            final Ticket<T> current = (Ticket<T>)ticketsAtChunk.replace(ticket);
-+ final int levelAfter = getTicketLevelAt(ticketsAtChunk);
-+
-+ if (current != ticket) {
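-+                // an equal ticket (same type, level, identifier) was replaced: migrate its pending expire
-+                // count from the old removal tick to the new one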
-+ final long oldRemovalTick = current.removalTick;
-+ if (removeTick != oldRemovalTick) {
-+ if (oldRemovalTick != NO_TIMEOUT_MARKER) {
-+ final Long2IntOpenHashMap removeCounts = this.removeTickToChunkExpireTicketCount.get(oldRemovalTick);
-+ final int prevCount = removeCounts.addTo(chunk, -1);
-+
-+ if (prevCount == 1) {
-+ removeCounts.remove(chunk);
-+ if (removeCounts.isEmpty()) {
-+ this.removeTickToChunkExpireTicketCount.remove(oldRemovalTick);
-+ }
-+ }
-+ }
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ this.removeTickToChunkExpireTicketCount.computeIfAbsent(removeTick, (final long keyInMap) -> {
-+ return new Long2IntOpenHashMap();
-+ }).addTo(chunk, 1);
-+ }
-+ }
-+ } else {
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ this.removeTickToChunkExpireTicketCount.computeIfAbsent(removeTick, (final long keyInMap) -> {
-+ return new Long2IntOpenHashMap();
-+ }).addTo(chunk, 1);
-+ }
-+ }
-+
-+ if (levelBefore != levelAfter) {
-+ this.updateTicketLevel(chunk, levelAfter);
-+ }
-+
-+ return current == ticket;
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+    public <T> boolean removeTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level, final T identifier) {
-+ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
-+ }
-+
-+    public <T> boolean removeTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level, final T identifier) {
-+ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
-+ }
-+
-+    public <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
-+ if (level > MAX_TICKET_LEVEL) {
-+ return false;
-+ }
-+
-+ this.ticketLock.lock();
-+ try {
-+            final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.get(chunk);
-+ if (ticketsAtChunk == null) {
-+ return false;
-+ }
-+
-+ final int oldLevel = getTicketLevelAt(ticketsAtChunk);
-+            final Ticket<T> ticket = (Ticket<T>)ticketsAtChunk.removeAndGet(new Ticket<>(type, level, identifier, -2L));
-+
-+ if (ticket == null) {
-+ return false;
-+ }
-+
-+ if (ticketsAtChunk.isEmpty()) {
-+ this.tickets.remove(chunk);
-+ }
-+
-+ final int newLevel = getTicketLevelAt(ticketsAtChunk);
-+
-+ final long removeTick = ticket.removalTick;
-+ if (removeTick != NO_TIMEOUT_MARKER) {
-+ final Long2IntOpenHashMap removeCounts = this.removeTickToChunkExpireTicketCount.get(removeTick);
-+ final int currCount = removeCounts.addTo(chunk, -1);
-+
-+ if (currCount == 1) {
-+ removeCounts.remove(chunk);
-+ if (removeCounts.isEmpty()) {
-+ this.removeTickToChunkExpireTicketCount.remove(removeTick);
-+ }
-+ }
-+ }
-+
-+ if (oldLevel != newLevel) {
-+ this.updateTicketLevel(chunk, newLevel);
-+ }
-+
-+ return true;
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
-+    public <T, V> void addAndRemoveTickets(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
-+                                           final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
-+ this.ticketLock.lock();
-+ try {
-+ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier);
-+ this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier);
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+    public <T> void removeAllTicketsFor(final TicketType<T> ticketType, final int ticketLevel, final T ticketIdentifier) {
-+ if (ticketLevel > MAX_TICKET_LEVEL) {
-+ return;
-+ }
-+
-+ this.ticketLock.lock();
-+ try {
-+ for (final LongIterator iterator = new LongArrayList(this.tickets.keySet()).longIterator(); iterator.hasNext();) {
-+ final long chunk = iterator.nextLong();
-+
-+ this.removeTicketAtLevel(ticketType, chunk, ticketLevel, ticketIdentifier);
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+ public void tick() {
-+ TickThread.ensureTickThread("Cannot tick ticket manager off-main");
-+
-+ this.ticketLock.lock();
-+ try {
-+ final long tick = ++this.currentTick;
-+
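-+            // tickets are indexed by the tick they expire on, so per-tick expiry work is proportional to
-+            // the number of expiring tickets rather than a scan over every chunk's ticket set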
-+ final Long2IntOpenHashMap toRemove = this.removeTickToChunkExpireTicketCount.remove(tick);
-+
-+ if (toRemove == null) {
-+ return;
-+ }
-+
-+            final Predicate<Ticket<?>> expireNow = (final Ticket<?> ticket) -> {
-+ return ticket.removalTick == tick;
-+ };
-+
-+ for (final LongIterator iterator = toRemove.keySet().longIterator(); iterator.hasNext();) {
-+ final long chunk = iterator.nextLong();
-+
-+                final SortedArraySet<Ticket<?>> tickets = this.tickets.get(chunk);
-+ tickets.removeIf(expireNow);
-+ if (tickets.isEmpty()) {
-+ this.tickets.remove(chunk);
-+ this.ticketLevelPropagator.removeSource(chunk);
-+ } else {
-+ this.ticketLevelPropagator.setSource(chunk, convertBetweenTicketLevels(tickets.first().getTicketLevel()));
-+ }
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+
-+ this.processTicketUpdates();
-+ }
-+
-+ public NewChunkHolder getChunkHolder(final int chunkX, final int chunkZ) {
-+ return this.chunkHolders.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+
-+ public NewChunkHolder getChunkHolder(final long position) {
-+ return this.chunkHolders.get(position);
-+ }
-+
-+ public void raisePriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
-+ if (chunkHolder != null) {
-+ chunkHolder.raisePriority(priority);
-+ }
-+ }
-+
-+ public void setPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
-+ if (chunkHolder != null) {
-+ chunkHolder.setPriority(priority);
-+ }
-+ }
-+
-+ public void lowerPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
-+ if (chunkHolder != null) {
-+ chunkHolder.lowerPriority(priority);
-+ }
-+ }
-+
-+ private NewChunkHolder createChunkHolder(final long position) {
-+ final NewChunkHolder ret = new NewChunkHolder(this.world, CoordinateUtils.getChunkX(position), CoordinateUtils.getChunkZ(position), this.taskScheduler);
-+
-+ ChunkSystem.onChunkHolderCreate(this.world, ret.vanillaChunkHolder);
-+ ret.vanillaChunkHolder.onChunkAdd();
-+
-+ return ret;
-+ }
-+
-+ // because this function creates the chunk holder without a ticket, it is the caller's responsibility to ensure
-+ // the chunk holder eventually unloads. this should only be used to avoid using processTicketUpdates to create chunkholders,
-+ // as processTicketUpdates may call plugin logic; in every other case a ticket is appropriate
-+ private NewChunkHolder getOrCreateChunkHolder(final int chunkX, final int chunkZ) {
-+ return this.getOrCreateChunkHolder(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-+ }
-+
-+ private NewChunkHolder getOrCreateChunkHolder(final long position) {
-+ if (!this.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Must hold ticket level update lock!");
-+ }
-+ if (!this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Must hold scheduler lock!!");
-+ }
-+
-+ // we could just acquire these locks here instead, but the caller must already own them:
-+ // holding them is what guarantees that no unload can occur AFTER this function returns
-+
-+ NewChunkHolder current = this.chunkHolders.get(position);
-+ if (current != null) {
-+ return current;
-+ }
-+
-+ current = this.createChunkHolder(position);
-+ this.chunkHolders.put(position, current);
-+
-+ return current;
-+ }
-+
-+ private long entityLoadCounter;
-+
-+ public ChunkEntitySlices getOrCreateEntityChunk(final int chunkX, final int chunkZ, final boolean transientChunk) {
-+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create entity chunk off-main");
-+ ChunkEntitySlices ret;
-+
-+ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
-+ if (current != null && (ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
-+ return ret;
-+ }
-+
-+ final AtomicBoolean isCompleted = new AtomicBoolean();
-+ final Thread waiter = Thread.currentThread();
-+ final Long entityLoadId;
-+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
-+ this.ticketLock.lock();
-+ try {
-+ entityLoadId = Long.valueOf(this.entityLoadCounter++);
-+ this.addTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
-+ if ((ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
-+ this.removeTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
-+ return ret;
-+ }
-+
-+ if (current.isEntityChunkNBTLoaded()) {
-+ isCompleted.setPlain(true);
-+ } else {
-+ loadTask = current.getOrLoadEntityData((final GenericDataLoadTask.TaskResult<CompoundTag, Throwable> result) -> {
-+ if (!transientChunk) {
-+ isCompleted.set(true);
-+ LockSupport.unpark(waiter);
-+ }
-+ });
-+ final ChunkLoadTask.EntityDataLoadTask entityLoad = current.getEntityDataLoadTask();
-+
-+ if (entityLoad != null && !transientChunk) {
-+ entityLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
-+ }
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+
-+ if (loadTask != null) {
-+ loadTask.schedule();
-+ }
-+
-+ if (!transientChunk) {
-+ // Note: no need to busy wait on the chunk queue, entity load will complete off-main
-+ boolean interrupted = false;
-+ while (!isCompleted.get()) {
-+ interrupted |= Thread.interrupted();
-+ LockSupport.park();
-+ }
-+
-+ if (interrupted) {
-+ Thread.currentThread().interrupt();
-+ }
-+ }
-+
-+ // now that the entity data is loaded, we can load it into the world
-+
-+ ret = current.loadInEntityChunk(transientChunk);
-+
-+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ this.addAndRemoveTickets(chunkKey,
-+ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
-+ TicketType.ENTITY_LOAD, MAX_TICKET_LEVEL, entityLoadId
-+ );
-+
-+ return ret;
-+ }
-+
-+ public PoiChunk getPoiChunkIfLoaded(final int chunkX, final int chunkZ, final boolean checkLoadInCallback) {
-+ final NewChunkHolder holder = this.getChunkHolder(chunkX, chunkZ);
-+ if (holder != null) {
-+ final PoiChunk ret = holder.getPoiChunk();
-+ return ret == null || (checkLoadInCallback && !ret.isLoaded()) ? null : ret;
-+ }
-+ return null;
-+ }
-+
-+ private long poiLoadCounter;
-+
-+ public PoiChunk loadPoiChunk(final int chunkX, final int chunkZ) {
-+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create poi chunk off-main");
-+ PoiChunk ret;
-+
-+ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
-+ if (current != null && (ret = current.getPoiChunk()) != null) {
-+ if (!ret.isLoaded()) {
-+ ret.load();
-+ }
-+ return ret;
-+ }
-+
-+ final AtomicReference<PoiChunk> completed = new AtomicReference<>();
-+ final AtomicBoolean isCompleted = new AtomicBoolean();
-+ final Thread waiter = Thread.currentThread();
-+ final Long poiLoadId;
-+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
-+ this.ticketLock.lock();
-+ try {
-+ poiLoadId = Long.valueOf(this.poiLoadCounter++);
-+ this.addTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
-+ if (current.isPoiChunkLoaded()) {
-+ this.removeTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
-+ return current.getPoiChunk();
-+ }
-+
-+ loadTask = current.getOrLoadPoiData((final GenericDataLoadTask.TaskResult<PoiChunk, Throwable> result) -> {
-+ completed.setPlain(result.left());
-+ isCompleted.set(true);
-+ LockSupport.unpark(waiter);
-+ });
-+ final ChunkLoadTask.PoiDataLoadTask poiLoad = current.getPoiDataLoadTask();
-+
-+ if (poiLoad != null) {
-+ poiLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+
-+ if (loadTask != null) {
-+ loadTask.schedule();
-+ }
-+
-+ // Note: no need to busy wait on the chunk queue, poi load will complete off-main
-+
-+ boolean interrupted = false;
-+ while (!isCompleted.get()) {
-+ interrupted |= Thread.interrupted();
-+ LockSupport.park();
-+ }
-+
-+ if (interrupted) {
-+ Thread.currentThread().interrupt();
-+ }
-+
-+ ret = completed.getPlain();
-+
-+ ret.load();
-+
-+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+ this.addAndRemoveTickets(chunkKey,
-+ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
-+ TicketType.POI_LOAD, MAX_TICKET_LEVEL, poiLoadId
-+ );
-+
-+ return ret;
-+ }
-+
-+ void addChangedStatuses(final List<NewChunkHolder> changedFullStatus) {
-+ if (changedFullStatus.isEmpty()) {
-+ return;
-+ }
-+ if (!TickThread.isTickThread()) {
-+ this.taskScheduler.scheduleChunkTask(() -> {
-+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = ChunkHolderManager.this.pendingFullLoadUpdate;
-+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
-+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
-+ }
-+
-+ ChunkHolderManager.this.processPendingFullUpdate();
-+ }, PrioritisedExecutor.Priority.HIGHEST);
-+ } else {
-+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
-+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
-+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
-+ }
-+ }
-+ }
-+
-+ final ReferenceLinkedOpenHashSet<NewChunkHolder> unloadQueue = new ReferenceLinkedOpenHashSet<>();
-+
-+ private void removeChunkHolder(final NewChunkHolder holder) {
-+ holder.killed = true;
-+ holder.vanillaChunkHolder.onChunkRemove();
-+ this.autoSaveQueue.remove(holder);
-+ ChunkSystem.onChunkHolderDelete(this.world, holder.vanillaChunkHolder);
-+ this.chunkHolders.remove(CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ));
-+ }
-+
-+ // note: never call while inside the chunk system, this will absolutely break everything
-+ public void processUnloads() {
-+ TickThread.ensureTickThread("Cannot unload chunks off-main");
-+
-+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
-+ throw new IllegalStateException("Cannot unload chunks recursively");
-+ }
-+ if (this.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot hold ticket update lock while calling processUnloads");
-+ }
-+ if (this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot hold scheduling lock while calling processUnloads");
-+ }
-+
-+ final List<NewChunkHolder.UnloadState> unloadQueue;
-+ final List<ChunkProgressionTask> scheduleList = new ArrayList<>();
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ if (this.unloadQueue.isEmpty()) {
-+ return;
-+ }
-+ // in order to ensure all chunks in the unload queue do not have a pending ticket level update,
-+ // process them now
-+ this.processTicketUpdates(false, false, scheduleList);
-+ unloadQueue = new ArrayList<>((int)(this.unloadQueue.size() * 0.05) + 1);
-+
-+ final int unloadCount = Math.max(50, (int)(this.unloadQueue.size() * 0.05));
-+ for (int i = 0; i < unloadCount && !this.unloadQueue.isEmpty(); ++i) {
-+ final NewChunkHolder chunkHolder = this.unloadQueue.removeFirst();
-+ if (chunkHolder.isSafeToUnload() != null) {
-+ LOGGER.error("Chunkholder " + chunkHolder + " is not safe to unload but is inside the unload queue?");
-+ continue;
-+ }
-+ final NewChunkHolder.UnloadState state = chunkHolder.unloadStage1();
-+ if (state == null) {
-+ // can unload immediately
-+ this.removeChunkHolder(chunkHolder);
-+ continue;
-+ }
-+ unloadQueue.add(state);
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ // schedule tasks, we can't let processTicketUpdates do this because we call it holding the schedule lock
-+ for (int i = 0, len = scheduleList.size(); i < len; ++i) {
-+ scheduleList.get(i).schedule();
-+ }
-+
-+ final List<NewChunkHolder> toRemove = new ArrayList<>(unloadQueue.size());
-+
-+ final Boolean before = this.blockTicketUpdates();
-+ try {
-+ for (int i = 0, len = unloadQueue.size(); i < len; ++i) {
-+ final NewChunkHolder.UnloadState state = unloadQueue.get(i);
-+ final NewChunkHolder holder = state.holder();
-+
-+ holder.unloadStage2(state);
-+ toRemove.add(holder);
-+ }
-+ } finally {
-+ this.unblockTicketUpdates(before);
-+ }
-+
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ for (int i = 0, len = toRemove.size(); i < len; ++i) {
-+ final NewChunkHolder holder = toRemove.get(i);
-+
-+ if (holder.unloadStage3()) {
-+ this.removeChunkHolder(holder);
-+ } else {
-+ // add cooldown so the next unload check is not immediately next tick
-+ this.addTicketAtLevel(TicketType.UNLOAD_COOLDOWN, holder.chunkX, holder.chunkZ, MAX_TICKET_LEVEL, Unit.INSTANCE);
-+ }
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+
-+ private final ThreadLocal<Boolean> BLOCK_TICKET_UPDATES = ThreadLocal.withInitial(() -> {
-+ return Boolean.FALSE;
-+ });
-+
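-+ // Callers save and restore the previous value so blocking nests correctly on one thread
-+ // (see processUnloads above):
-+ // final Boolean before = this.blockTicketUpdates();
-+ // try {
-+ // // work that must not trigger ticket level updates
-+ // } finally {
-+ // this.unblockTicketUpdates(before);
-+ // }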
-+ public Boolean blockTicketUpdates() {
-+ final Boolean ret = BLOCK_TICKET_UPDATES.get();
-+ BLOCK_TICKET_UPDATES.set(Boolean.TRUE);
-+ return ret;
-+ }
-+
-+ public void unblockTicketUpdates(final Boolean before) {
-+ BLOCK_TICKET_UPDATES.set(before);
-+ }
-+
-+ public boolean processTicketUpdates() {
-+ return this.processTicketUpdates(true, true, null);
-+ }
-+
-+ private static final ThreadLocal<List<ChunkProgressionTask>> CURRENT_TICKET_UPDATE_SCHEDULING = new ThreadLocal<>();
-+
-+ static List<ChunkProgressionTask> getCurrentTicketUpdateScheduling() {
-+ return CURRENT_TICKET_UPDATE_SCHEDULING.get();
-+ }
-+
-+ private boolean processTicketUpdates(final boolean checkLocks, final boolean processFullUpdates, List<ChunkProgressionTask> scheduledTasks) {
-+ TickThread.ensureTickThread("Cannot process ticket levels off-main");
-+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
-+ throw new IllegalStateException("Cannot update ticket level while unloading chunks or updating entity manager");
-+ }
-+ if (checkLocks && this.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Illegal recursive processTicketUpdates!");
-+ }
-+ if (checkLocks && this.taskScheduler.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot update ticket levels from a scheduler context!");
-+ }
-+
-+ List<NewChunkHolder> changedFullStatus = null;
-+
-+ final boolean isTickThread = TickThread.isTickThread();
-+
-+ boolean ret = false;
-+ final boolean canProcessFullUpdates = processFullUpdates & isTickThread;
-+ final boolean canProcessScheduling = scheduledTasks == null;
-+
-+ this.ticketLock.lock();
-+ try {
-+ final boolean levelsUpdated = this.ticketLevelPropagator.propagateUpdates();
-+ if (levelsUpdated) {
-+ // Unlike CB, ticket level updates cannot happen recursively. Thank god.
-+ if (!this.ticketLevelUpdates.isEmpty()) {
-+ ret = true;
-+
-+ // first the necessary chunkholders must be created, so just update the ticket levels
-+ for (final Iterator<Long2IntMap.Entry> iterator = this.ticketLevelUpdates.long2IntEntrySet().fastIterator(); iterator.hasNext();) {
-+ final Long2IntMap.Entry entry = iterator.next();
-+ final long key = entry.getLongKey();
-+ final int newLevel = entry.getIntValue();
-+
-+ NewChunkHolder current = this.chunkHolders.get(key);
-+ if (current == null && newLevel > MAX_TICKET_LEVEL) {
-+ // not loaded and it shouldn't be loaded!
-+ iterator.remove();
-+ continue;
-+ }
-+
-+ final int currentLevel = current == null ? MAX_TICKET_LEVEL + 1 : current.getCurrentTicketLevel();
-+ if (currentLevel == newLevel) {
-+ // nothing to do
-+ iterator.remove();
-+ continue;
-+ }
-+
-+ if (current == null) {
-+ // must create
-+ current = this.createChunkHolder(key);
-+ this.chunkHolders.put(key, current);
-+ current.updateTicketLevel(newLevel);
-+ } else {
-+ current.updateTicketLevel(newLevel);
-+ }
-+ }
-+
-+ if (scheduledTasks == null) {
-+ scheduledTasks = new ArrayList<>();
-+ }
-+ changedFullStatus = new ArrayList<>();
-+
-+ // allow the chunkholders to process ticket level updates without needing to acquire the schedule lock every time
-+ final List<ChunkProgressionTask> prev = CURRENT_TICKET_UPDATE_SCHEDULING.get();
-+ CURRENT_TICKET_UPDATE_SCHEDULING.set(scheduledTasks);
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ for (final Iterator<Long2IntMap.Entry> iterator = this.ticketLevelUpdates.long2IntEntrySet().fastIterator(); iterator.hasNext();) {
-+ final Long2IntMap.Entry entry = iterator.next();
-+ final long key = entry.getLongKey();
-+ final NewChunkHolder current = this.chunkHolders.get(key);
-+
-+ if (current == null) {
-+ throw new IllegalStateException("Expected chunk holder to be created");
-+ }
-+
-+ current.processTicketLevelUpdate(scheduledTasks, changedFullStatus);
-+ }
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ CURRENT_TICKET_UPDATE_SCHEDULING.set(prev);
-+ }
-+
-+ this.ticketLevelUpdates.clear();
-+ }
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+
-+ if (changedFullStatus != null) {
-+ this.addChangedStatuses(changedFullStatus);
-+ }
-+
-+ if (canProcessScheduling && scheduledTasks != null) {
-+ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
-+ scheduledTasks.get(i).schedule();
-+ }
-+ }
-+
-+ if (canProcessFullUpdates) {
-+ ret |= this.processPendingFullUpdate();
-+ }
-+
-+ return ret;
-+ }
-+
-+ // only call on tick thread
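-+ // drains the pending full status queue to a fixpoint: handling one holder may change the
-+ // full status of other holders, which are appended to the queue and processed in turn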
-+ protected final boolean processPendingFullUpdate() {
-+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
-+
-+ boolean ret = false;
-+
-+ List<NewChunkHolder> changedFullStatus = new ArrayList<>();
-+
-+ NewChunkHolder holder;
-+ while ((holder = pendingFullLoadUpdate.poll()) != null) {
-+ ret |= holder.handleFullStatusChange(changedFullStatus);
-+
-+ if (!changedFullStatus.isEmpty()) {
-+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
-+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
-+ }
-+ changedFullStatus.clear();
-+ }
-+ }
-+
-+ return ret;
-+ }
-+
-+ public JsonObject getDebugJsonForWatchdog() {
-+ // try to detect any potential deadlock that would require us to read unlocked
-+ try {
-+ if (this.ticketLock.tryLock(10, TimeUnit.SECONDS)) {
-+ try {
-+ if (this.taskScheduler.schedulingLock.tryLock(10, TimeUnit.SECONDS)) {
-+ try {
-+ return this.getDebugJsonNoLock();
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ }
-+ } catch (final InterruptedException ignore) {}
-+
-+ LOGGER.error("Failed to acquire ticket and scheduling lock before timeout for world " + this.world.getWorld().getName());
-+
-+ // because we read without locks, it may throw exceptions for fastutil maps
-+ // so just try until it works...
-+ Throwable lastException = null;
-+ for (int count = 0; count < 1000; ++count) {
-+ try {
-+ return this.getDebugJsonNoLock();
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr) {
-+ lastException = thr;
-+ Thread.yield();
-+ LockSupport.parkNanos(10_000L);
-+ }
-+ }
-+
-+ // failed, return
-+ LOGGER.error("Failed to retrieve debug json for watchdog thread without locking", lastException);
-+ return null;
-+ }
-+
-+ private JsonObject getDebugJsonNoLock() {
-+ final JsonObject ret = new JsonObject();
-+ ret.addProperty("current_tick", Long.valueOf(this.currentTick));
-+
-+ final JsonArray unloadQueue = new JsonArray();
-+ ret.add("unload_queue", unloadQueue);
-+ for (final NewChunkHolder holder : this.unloadQueue) {
-+ final JsonObject coordinate = new JsonObject();
-+ unloadQueue.add(coordinate);
-+
-+ coordinate.addProperty("chunkX", Integer.valueOf(holder.chunkX));
-+ coordinate.addProperty("chunkZ", Integer.valueOf(holder.chunkZ));
-+ }
-+
-+ final JsonArray holders = new JsonArray();
-+ ret.add("chunkholders", holders);
-+
-+ for (final NewChunkHolder holder : this.getChunkHolders()) {
-+ holders.add(holder.getDebugJson());
-+ }
-+
-+ final JsonArray removeTickToChunkExpireTicketCount = new JsonArray();
-+ ret.add("remove_tick_to_chunk_expire_ticket_count", removeTickToChunkExpireTicketCount);
-+
-+ for (final Long2ObjectMap.Entry<Long2IntOpenHashMap> tickEntry : this.removeTickToChunkExpireTicketCount.long2ObjectEntrySet()) {
-+ final long tick = tickEntry.getLongKey();
-+ final Long2IntOpenHashMap coordinateToCount = tickEntry.getValue();
-+
-+ final JsonObject tickJson = new JsonObject();
-+ removeTickToChunkExpireTicketCount.add(tickJson);
-+
-+ tickJson.addProperty("tick", Long.valueOf(tick));
-+
-+ final JsonArray tickEntries = new JsonArray();
-+ tickJson.add("entries", tickEntries);
-+
-+ for (final Long2IntMap.Entry entry : coordinateToCount.long2IntEntrySet()) {
-+ final long coordinate = entry.getLongKey();
-+ final int count = entry.getIntValue();
-+
-+ final JsonObject entryJson = new JsonObject();
-+ tickEntries.add(entryJson);
-+
-+ entryJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
-+ entryJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
-+ entryJson.addProperty("count", Integer.valueOf(count));
-+ }
-+ }
-+
-+ final JsonArray allTicketsJson = new JsonArray();
-+ ret.add("tickets", allTicketsJson);
-+
-+ for (final Long2ObjectMap.Entry<SortedArraySet<Ticket<?>>> coordinateTickets : this.tickets.long2ObjectEntrySet()) {
-+ final long coordinate = coordinateTickets.getLongKey();
-+ final SortedArraySet<Ticket<?>> tickets = coordinateTickets.getValue();
-+
-+ final JsonObject coordinateJson = new JsonObject();
-+ allTicketsJson.add(coordinateJson);
-+
-+ coordinateJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
-+ coordinateJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
-+
-+ final JsonArray ticketsSerialized = new JsonArray();
-+ coordinateJson.add("tickets", ticketsSerialized);
-+
-+ for (final Ticket<?> ticket : tickets) {
-+ final JsonObject ticketSerialized = new JsonObject();
-+ ticketsSerialized.add(ticketSerialized);
-+
-+ ticketSerialized.addProperty("type", ticket.getType().toString());
-+ ticketSerialized.addProperty("level", Integer.valueOf(ticket.getTicketLevel()));
-+ ticketSerialized.addProperty("identifier", Objects.toString(ticket.key));
-+ ticketSerialized.addProperty("remove_tick", Long.valueOf(ticket.removalTick));
-+ }
-+ }
-+
-+ return ret;
-+ }
-+
-+ public JsonObject getDebugJson() {
-+ final List<ChunkProgressionTask> scheduleList = new ArrayList<>();
-+ try {
-+ final JsonObject ret;
-+ this.ticketLock.lock();
-+ try {
-+ this.taskScheduler.schedulingLock.lock();
-+ try {
-+ this.processTicketUpdates(false, false, scheduleList);
-+ ret = this.getDebugJsonNoLock();
-+ } finally {
-+ this.taskScheduler.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.ticketLock.unlock();
-+ }
-+ return ret;
-+ } finally {
-+ // schedule tasks, we can't let processTicketUpdates do this because we call it holding the schedule lock
-+ for (int i = 0, len = scheduleList.size(); i < len; ++i) {
-+ scheduleList.get(i).schedule();
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..53ddd7e9ac05e6a9eb809f329796e6d4f6bb2ab1
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
-@@ -0,0 +1,181 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.starlight.common.light.StarLightEngine;
-+import ca.spottedleaf.starlight.common.light.StarLightInterface;
-+import io.papermc.paper.chunk.system.light.LightQueue;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.ProtoChunk;
-+import org.apache.logging.log4j.LogManager;
-+import org.apache.logging.log4j.Logger;
-+import java.util.function.BooleanSupplier;
-+
-+public final class ChunkLightTask extends ChunkProgressionTask {
-+
-+ private static final Logger LOGGER = LogManager.getLogger();
-+
-+ protected final ChunkAccess fromChunk;
-+
-+ private final LightTaskPriorityHolder priorityHolder;
-+
-+ public ChunkLightTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
-+ final ChunkAccess chunk, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ);
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.priorityHolder = new LightTaskPriorityHolder(priority, this);
-+ this.fromChunk = chunk;
-+ }
-+
-+ @Override
-+ public boolean isScheduled() {
-+ return this.priorityHolder.isScheduled();
-+ }
-+
-+ @Override
-+ public ChunkStatus getTargetStatus() {
-+ return ChunkStatus.LIGHT;
-+ }
-+
-+ @Override
-+ public void schedule() {
-+ this.priorityHolder.schedule();
-+ }
-+
-+ @Override
-+ public void cancel() {
-+ this.priorityHolder.cancel();
-+ }
-+
-+ @Override
-+ public PrioritisedExecutor.Priority getPriority() {
-+ return this.priorityHolder.getPriority();
-+ }
-+
-+ @Override
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ this.priorityHolder.lowerPriority(priority);
-+ }
-+
-+ @Override
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ this.priorityHolder.setPriority(priority);
-+ }
-+
-+ @Override
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ this.priorityHolder.raisePriority(priority);
-+ }
-+
-+ private static final class LightTaskPriorityHolder extends PriorityHolder {
-+
-+ protected final ChunkLightTask task;
-+
-+ protected LightTaskPriorityHolder(final PrioritisedExecutor.Priority priority, final ChunkLightTask task) {
-+ super(priority);
-+ this.task = task;
-+ }
-+
-+ @Override
-+ protected void cancelScheduled() {
-+ final ChunkLightTask task = this.task;
-+ task.complete(null, null);
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.Priority getScheduledPriority() {
-+ final ChunkLightTask task = this.task;
-+ return task.world.getChunkSource().getLightEngine().theLightEngine.lightQueue.getPriority(task.chunkX, task.chunkZ);
-+ }
-+
-+ @Override
-+ protected void scheduleTask(final PrioritisedExecutor.Priority priority) {
-+ final ChunkLightTask task = this.task;
-+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
-+ final LightQueue lightQueue = starLightInterface.lightQueue;
-+ lightQueue.queueChunkLightTask(new ChunkPos(task.chunkX, task.chunkZ), new LightTask(starLightInterface, task), priority);
-+ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
-+ }
-+
-+ @Override
-+ protected void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final ChunkLightTask task = this.task;
-+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
-+ final LightQueue lightQueue = starLightInterface.lightQueue;
-+ lightQueue.lowerPriority(task.chunkX, task.chunkZ, priority);
-+ }
-+
-+ @Override
-+ protected void setPriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final ChunkLightTask task = this.task;
-+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
-+ final LightQueue lightQueue = starLightInterface.lightQueue;
-+ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
-+ }
-+
-+ @Override
-+ protected void raisePriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final ChunkLightTask task = this.task;
-+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
-+ final LightQueue lightQueue = starLightInterface.lightQueue;
-+ lightQueue.raisePriority(task.chunkX, task.chunkZ, priority);
-+ }
-+ }
-+
-+ private static final class LightTask implements BooleanSupplier {
-+
-+ protected final StarLightInterface lightEngine;
-+ protected final ChunkLightTask task;
-+
-+ public LightTask(final StarLightInterface lightEngine, final ChunkLightTask task) {
-+ this.lightEngine = lightEngine;
-+ this.task = task;
-+ }
-+
-+ @Override
-+ public boolean getAsBoolean() {
-+ final ChunkLightTask task = this.task;
-+ // executed on light thread
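-+ // markExecuting() atomically claims the task: if cancel() won the race, cancelScheduled()
-+ // has already completed this task with null, so the light pass must be skipped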
-+ if (!task.priorityHolder.markExecuting()) {
-+ // cancelled
-+ return false;
-+ }
-+
-+ try {
-+ final Boolean[] emptySections = StarLightEngine.getEmptySectionsForChunk(task.fromChunk);
-+
-+ if (task.fromChunk.isLightCorrect() && task.fromChunk.getStatus().isOrAfter(ChunkStatus.LIGHT)) {
-+ this.lightEngine.forceLoadInChunk(task.fromChunk, emptySections);
-+ this.lightEngine.checkChunkEdges(task.chunkX, task.chunkZ);
-+ } else {
-+ task.fromChunk.setLightCorrect(false);
-+ this.lightEngine.lightChunk(task.fromChunk, emptySections);
-+ task.fromChunk.setLightCorrect(true);
-+ }
-+ // we need to advance status
-+ if (task.fromChunk instanceof ProtoChunk chunk && chunk.getStatus() == ChunkStatus.LIGHT.getParent()) {
-+ chunk.setStatus(ChunkStatus.LIGHT);
-+ }
-+ } catch (final Throwable thr) {
-+ if (!(thr instanceof ThreadDeath)) {
-+ LOGGER.fatal("Failed to light chunk " + task.fromChunk.getPos().toString() + " in world '" + this.lightEngine.getWorld().getWorld().getName() + "'", thr);
-+ }
-+
-+ task.complete(null, thr);
-+
-+ if (thr instanceof ThreadDeath) {
-+ throw (ThreadDeath)thr;
-+ }
-+
-+ return true;
-+ }
-+
-+ task.complete(task.fromChunk, null);
-+ return true;
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..be6f3f6a57668a9bd50d0ea5f2dd2335355b69d6
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
-@@ -0,0 +1,499 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import ca.spottedleaf.dataconverter.minecraft.MCDataConverter;
-+import ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
-+import io.papermc.paper.chunk.system.poi.PoiChunk;
-+import net.minecraft.SharedConstants;
-+import net.minecraft.core.registries.Registries;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.ProtoChunk;
-+import net.minecraft.world.level.chunk.UpgradeData;
-+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
-+import net.minecraft.world.level.chunk.storage.EntityStorage;
-+import net.minecraft.world.level.levelgen.blending.BlendingData;
-+import org.slf4j.Logger;
-+import java.lang.invoke.VarHandle;
-+import java.util.Map;
-+import java.util.concurrent.atomic.AtomicInteger;
-+import java.util.function.Consumer;
-+
-+public final class ChunkLoadTask extends ChunkProgressionTask {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ private final NewChunkHolder chunkHolder;
-+ private final ChunkDataLoadTask loadTask;
-+
-+ private boolean cancelled;
-+ private NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
-+ private NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
-+
-+ protected ChunkLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
-+ final NewChunkHolder chunkHolder, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ);
-+ this.chunkHolder = chunkHolder;
-+ this.loadTask = new ChunkDataLoadTask(scheduler, world, chunkX, chunkZ, priority);
-+ this.loadTask.addCallback((final GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> result) -> {
-+ ChunkLoadTask.this.complete(result == null ? null : result.left(), result == null ? null : result.right());
-+ });
-+ }
-+
-+ @Override
-+ public ChunkStatus getTargetStatus() {
-+ return ChunkStatus.EMPTY;
-+ }
-+
-+ private boolean scheduled;
-+
-+ @Override
-+ public boolean isScheduled() {
-+ return this.scheduled;
-+ }
-+
-+ @Override
-+ public void schedule() {
-+ final NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
-+ final NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
-+
-+ final AtomicInteger count = new AtomicInteger();
-+ final Consumer<GenericDataLoadTask.TaskResult<?, ?>> scheduleLoadTask = (final GenericDataLoadTask.TaskResult<?, ?> result) -> {
-+ if (count.decrementAndGet() == 0) {
-+ ChunkLoadTask.this.loadTask.schedule(false);
-+ }
-+ };
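-+
-+ // count tracks the prerequisite data loads scheduled below (entity NBT and/or poi); the
-+ // last one to complete trips the countdown above and schedules the chunk data load itself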
-+
-+ // NOTE: it is IMPOSSIBLE for getOrLoadEntityData/getOrLoadPoiData to complete synchronously, because
-+ // they must schedule a task to off main or to on main to complete
-+ this.scheduler.schedulingLock.lock();
-+ try {
-+ if (this.scheduled) {
-+ throw new IllegalStateException("schedule() called twice");
-+ }
-+ this.scheduled = true;
-+ if (this.cancelled) {
-+ return;
-+ }
-+ if (!this.chunkHolder.isEntityChunkNBTLoaded()) {
-+ entityLoadTask = this.chunkHolder.getOrLoadEntityData((Consumer)scheduleLoadTask);
-+ count.setPlain(count.getPlain() + 1);
-+ } else {
-+ entityLoadTask = null;
-+ }
-+
-+ if (!this.chunkHolder.isPoiChunkLoaded()) {
-+ poiLoadTask = this.chunkHolder.getOrLoadPoiData((Consumer)scheduleLoadTask);
-+ count.setPlain(count.getPlain() + 1);
-+ } else {
-+ poiLoadTask = null;
-+ }
-+
-+ this.entityLoadTask = entityLoadTask;
-+ this.poiLoadTask = poiLoadTask;
-+ } finally {
-+ this.scheduler.schedulingLock.unlock();
-+ }
-+
-+ if (entityLoadTask != null) {
-+ entityLoadTask.schedule();
-+ }
-+
-+ if (poiLoadTask != null) {
-+ poiLoadTask.schedule();
-+ }
-+
-+ if (entityLoadTask == null && poiLoadTask == null) {
-+ // no need to wait on those, we can schedule now
-+ this.loadTask.schedule(false);
-+ }
-+ }
-+
-+ @Override
-+ public void cancel() {
-+ // must be before load task access, so we can synchronise with the writes to the fields
-+ this.scheduler.schedulingLock.lock();
-+ try {
-+ this.cancelled = true;
-+ } finally {
-+ this.scheduler.schedulingLock.unlock();
-+ }
-+
-+ /*
-+ Note: The entityLoadTask/poiLoadTask do not complete when cancelled,
-+ but this is fine because if they are successfully cancelled then
-+ we will successfully cancel the load task, which will complete when cancelled
-+ */
-+
-+ if (this.entityLoadTask != null) {
-+ this.entityLoadTask.cancel();
-+ }
-+ if (this.poiLoadTask != null) {
-+ this.poiLoadTask.cancel();
-+ }
-+ this.loadTask.cancel();
-+ }
-+
-+ @Override
-+ public PrioritisedExecutor.Priority getPriority() {
-+ return this.loadTask.getPriority();
-+ }
-+
-+ @Override
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
-+ if (entityLoad != null) {
-+ entityLoad.lowerPriority(priority);
-+ }
-+
-+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
-+
-+ if (poiLoad != null) {
-+ poiLoad.lowerPriority(priority);
-+ }
-+
-+ this.loadTask.lowerPriority(priority);
-+ }
-+
-+ @Override
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
-+ if (entityLoad != null) {
-+ entityLoad.setPriority(priority);
-+ }
-+
-+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
-+
-+ if (poiLoad != null) {
-+ poiLoad.setPriority(priority);
-+ }
-+
-+ this.loadTask.setPriority(priority);
-+ }
-+
-+ @Override
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
-+ if (entityLoad != null) {
-+ entityLoad.raisePriority(priority);
-+ }
-+
-+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
-+
-+ if (poiLoad != null) {
-+ poiLoad.raisePriority(priority);
-+ }
-+
-+ this.loadTask.raisePriority(priority);
-+ }
-+
-+ protected static abstract class CallbackDataLoadTask<OnMain, FinalCompletion> extends GenericDataLoadTask<OnMain, FinalCompletion> {
-+
-+ private TaskResult<FinalCompletion, Throwable> result;
-+ private final MultiThreadedQueue<Consumer<TaskResult<FinalCompletion, Throwable>>> waiters = new MultiThreadedQueue<>();
-+
-+ protected volatile boolean completed;
-+ protected static final VarHandle COMPLETED_HANDLE = ConcurrentUtil.getVarHandle(CallbackDataLoadTask.class, "completed", boolean.class);
-+
-+ protected CallbackDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+ final int chunkZ, final RegionFileIOThread.RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ, type, priority);
-+ }
-+
-+ public void addCallback(final Consumer<TaskResult<FinalCompletion, Throwable>> consumer) {
-+ if (!this.waiters.add(consumer)) {
-+ try {
-+ consumer.accept(this.result);
-+ } catch (final Throwable throwable) {
-+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
-+ "Completed throwable", ChunkTaskScheduler.stringIfNull(this.result.right())
-+ ), throwable);
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ }
-+ }
-+ }
-+
-+ @Override
-+ protected void onComplete(final TaskResult<FinalCompletion, Throwable> result) {
-+ if ((boolean)COMPLETED_HANDLE.getAndSet((CallbackDataLoadTask)this, (boolean)true)) {
-+ throw new IllegalStateException("Already completed");
-+ }
-+ this.result = result;
-+ Consumer<TaskResult<FinalCompletion, Throwable>> consumer;
-+ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
-+ try {
-+ consumer.accept(result);
-+ } catch (final Throwable throwable) {
-+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
-+ "Completed throwable", ChunkTaskScheduler.stringIfNull(result.right())
-+ ), throwable);
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ return;
-+ }
-+ }
-+ }
-+ }
-+
-+ public final class ChunkDataLoadTask extends CallbackDataLoadTask<ChunkSerializer.InProgressChunkHolder, ChunkAccess> {
-+ protected ChunkDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+ final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.CHUNK_DATA, priority);
-+ }
-+
-+ @Override
-+ protected boolean hasOffMain() {
-+ return true;
-+ }
-+
-+ @Override
-+ protected boolean hasOnMain() {
-+ return true;
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ return this.scheduler.loadExecutor.createTask(run, priority);
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ return this.scheduler.createChunkTask(this.chunkX, this.chunkZ, run, priority);
-+ }
-+
-+ @Override
-+ protected TaskResult<ChunkAccess, Throwable> completeOnMainOffMain(final ChunkSerializer.InProgressChunkHolder data, final Throwable throwable) {
-+ if (data != null) {
-+ return null;
-+ }
-+
-+ final PoiChunk poiChunk = ChunkLoadTask.this.chunkHolder.getPoiChunk();
-+ if (poiChunk == null) {
-+ LOGGER.error("Expected poi chunk to be loaded with chunk for task " + this.toString());
-+ } else if (!poiChunk.isLoaded()) {
-+ // need to call poiChunk.load() on main
-+ return null;
-+ }
-+
-+ return new TaskResult<>(this.getEmptyChunk(), null);
-+ }
-+
-+ @Override
-+ protected TaskResult<ChunkSerializer.InProgressChunkHolder, Throwable> runOffMain(final CompoundTag data, final Throwable throwable) {
-+ if (throwable != null) {
-+ LOGGER.error("Failed to load chunk data for task: " + this.toString() + ", chunk data will be lost", throwable);
-+ return new TaskResult<>(null, null);
-+ }
-+
-+ if (data == null) {
-+ return new TaskResult<>(null, null);
-+ }
-+
-+ // need to convert data, and then deserialize it
-+
-+ try {
-+ final ChunkPos chunkPos = new ChunkPos(this.chunkX, this.chunkZ);
-+ final ChunkMap chunkMap = this.world.getChunkSource().chunkMap;
-+ // run converters
-+ // note: upgradeChunkTag copies the data already
-+ final CompoundTag converted = chunkMap.upgradeChunkTag(
-+ this.world.getTypeKey(), chunkMap.overworldDataStorage, data, chunkMap.generator.getTypeNameForDataFixer(),
-+ chunkPos, this.world
-+ );
-+ // deserialize
-+ final ChunkSerializer.InProgressChunkHolder chunkHolder = ChunkSerializer.loadChunk(
-+ this.world, chunkMap.getPoiManager(), chunkPos, converted, true
-+ );
-+
-+ return new TaskResult<>(chunkHolder, null);
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr2) {
-+ LOGGER.error("Failed to parse chunk data for task: " + this.toString() + ", chunk data will be lost", thr2);
-+ return new TaskResult<>(null, thr2);
-+ }
-+ }
-+
-+ private ProtoChunk getEmptyChunk() {
-+ return new ProtoChunk(
-+ new ChunkPos(this.chunkX, this.chunkZ), UpgradeData.EMPTY, this.world,
-+ this.world.registryAccess().registryOrThrow(Registries.BIOME), (BlendingData)null
-+ );
-+ }
-+
-+ @Override
-+ protected TaskResult<ChunkAccess, Throwable> runOnMain(final ChunkSerializer.InProgressChunkHolder data, final Throwable throwable) {
-+ final PoiChunk poiChunk = ChunkLoadTask.this.chunkHolder.getPoiChunk();
-+ if (poiChunk == null) {
-+ LOGGER.error("Expected poi chunk to be loaded with chunk for task " + this.toString());
-+ } else {
-+ poiChunk.load();
-+ }
-+
-+ if (data == null || data.protoChunk == null) {
-+ // throwable could be non-null, but the off-main task will print its exceptions - so we don't need to care,
-+ // it's handled already
-+
-+ return new TaskResult<>(this.getEmptyChunk(), null);
-+ }
-+
-+ // have tasks to run (at this point, it's just the POI consistency checking)
-+ try {
-+ if (data.tasks != null) {
-+ for (int i = 0, len = data.tasks.size(); i < len; ++i) {
-+ data.tasks.poll().run();
-+ }
-+ }
-+
-+ return new TaskResult<>(data.protoChunk, null);
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr2) {
-+ LOGGER.error("Failed to parse main tasks for task " + this.toString() + ", chunk data will be lost", thr2);
-+ return new TaskResult<>(this.getEmptyChunk(), null);
-+ }
-+ }
-+ }
-+
-+ public static final class PoiDataLoadTask extends CallbackDataLoadTask<PoiChunk, PoiChunk> {
-+ public PoiDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+ final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.POI_DATA, priority);
-+ }
-+
-+ @Override
-+ protected boolean hasOffMain() {
-+ return true;
-+ }
-+
-+ @Override
-+ protected boolean hasOnMain() {
-+ return false;
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ return this.scheduler.loadExecutor.createTask(run, priority);
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ @Override
-+ protected TaskResult<PoiChunk, Throwable> completeOnMainOffMain(final PoiChunk data, final Throwable throwable) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ @Override
-+ protected TaskResult<PoiChunk, Throwable> runOffMain(CompoundTag data, final Throwable throwable) {
-+ if (throwable != null) {
-+ LOGGER.error("Failed to load poi data for task: " + this.toString() + ", poi data will be lost", throwable);
-+ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
-+ }
-+
-+ if (data == null || data.isEmpty()) {
-+ // nothing to do
-+ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
-+ }
-+
-+ try {
-+ data = data.copy(); // coming from the I/O thread, so we need to copy
-+ // run converters
-+ final int dataVersion = !data.contains(SharedConstants.DATA_VERSION_TAG, 99) ? 1945 : data.getInt(SharedConstants.DATA_VERSION_TAG);
-+ final CompoundTag converted = MCDataConverter.convertTag(
-+ MCTypeRegistry.POI_CHUNK, data, dataVersion, SharedConstants.getCurrentVersion().getDataVersion().getVersion()
-+ );
-+
-+ // now we need to parse it
-+ return new TaskResult<>(PoiChunk.parse(this.world, this.chunkX, this.chunkZ, converted), null);
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr2) {
-+ LOGGER.error("Failed to run parse poi data for task: " + this.toString() + ", poi data will be lost", thr2);
-+ return new TaskResult<>(PoiChunk.empty(this.world, this.chunkX, this.chunkZ), null);
-+ }
-+ }
-+
-+ @Override
-+ protected TaskResult<PoiChunk, Throwable> runOnMain(final PoiChunk data, final Throwable throwable) {
-+ throw new UnsupportedOperationException();
-+ }
-+ }
-+
-+ public static final class EntityDataLoadTask extends CallbackDataLoadTask<CompoundTag, CompoundTag> {
-+
-+ public EntityDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+ final int chunkZ, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.ENTITY_DATA, priority);
-+ }
-+
-+ @Override
-+ protected boolean hasOffMain() {
-+ return true;
-+ }
-+
-+ @Override
-+ protected boolean hasOnMain() {
-+ return false;
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ return this.scheduler.loadExecutor.createTask(run, priority);
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ @Override
-+ protected TaskResult<CompoundTag, Throwable> completeOnMainOffMain(final CompoundTag data, final Throwable throwable) {
-+ throw new UnsupportedOperationException();
-+ }
-+
-+ @Override
-+ protected TaskResult<CompoundTag, Throwable> runOffMain(final CompoundTag data, final Throwable throwable) {
-+ if (throwable != null) {
-+ LOGGER.error("Failed to load entity data for task: " + this.toString() + ", entity data will be lost", throwable);
-+ return new TaskResult<>(null, null);
-+ }
-+
-+ if (data == null || data.isEmpty()) {
-+ // nothing to do
-+ return new TaskResult<>(null, null);
-+ }
-+
-+ try {
-+ // note: data comes from the I/O thread, so we need to copy it
-+ return new TaskResult<>(EntityStorage.upgradeChunkTag(data.copy()), null);
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr2) {
-+ LOGGER.error("Failed to run converters for entity data for task: " + this.toString() + ", entity data will be lost", thr2);
-+ return new TaskResult<>(null, thr2);
-+ }
-+ }
-+
-+ @Override
-+ protected TaskResult<CompoundTag, Throwable> runOnMain(final CompoundTag data, final Throwable throwable) {
-+ throw new UnsupportedOperationException();
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..322675a470eacbf0e5452f4009c643f2d0b4ce24
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkProgressionTask.java
-@@ -0,0 +1,105 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import java.lang.invoke.VarHandle;
-+import java.util.Map;
-+import java.util.function.BiConsumer;
-+
-+public abstract class ChunkProgressionTask {
-+
-+ private final MultiThreadedQueue<BiConsumer<ChunkAccess, Throwable>> waiters = new MultiThreadedQueue<>();
-+ private ChunkAccess completedChunk;
-+ private Throwable completedThrowable;
-+
-+ protected final ChunkTaskScheduler scheduler;
-+ protected final ServerLevel world;
-+ protected final int chunkX;
-+ protected final int chunkZ;
-+
-+ protected volatile boolean completed;
-+ protected static final VarHandle COMPLETED_HANDLE = ConcurrentUtil.getVarHandle(ChunkProgressionTask.class, "completed", boolean.class);
-+
-+ protected ChunkProgressionTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ) {
-+ this.scheduler = scheduler;
-+ this.world = world;
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ }
-+
-+ // Used only for debug json
-+ public abstract boolean isScheduled();
-+
-+ // Note: It is the responsibility of the task to set the chunk's status once it has completed
-+ public abstract ChunkStatus getTargetStatus();
-+
-+ /* Only executed once */
-+ /* Implementations must be prepared to handle cases where cancel() is called before schedule() */
-+ public abstract void schedule();
-+
-+ /* May be called multiple times */
-+ public abstract void cancel();
-+
-+ public abstract PrioritisedExecutor.Priority getPriority();
-+
-+ /* Schedule lock is always held for the priority update calls */
-+
-+ public abstract void lowerPriority(final PrioritisedExecutor.Priority priority);
-+
-+ public abstract void setPriority(final PrioritisedExecutor.Priority priority);
-+
-+ public abstract void raisePriority(final PrioritisedExecutor.Priority priority);
-+
-+ public final void onComplete(final BiConsumer<ChunkAccess, Throwable> onComplete) {
-+ if (!this.waiters.add(onComplete)) {
-+ try {
-+ onComplete.accept(this.completedChunk, this.completedThrowable);
-+ } catch (final Throwable throwable) {
-+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Consumer", ChunkTaskScheduler.stringIfNull(onComplete),
-+ "Completed throwable", ChunkTaskScheduler.stringIfNull(this.completedThrowable)
-+ ), throwable);
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ }
-+ }
-+ }
-+
-+ protected final void complete(final ChunkAccess chunk, final Throwable throwable) {
-+ try {
-+ this.complete0(chunk, throwable);
-+ } catch (final Throwable thr2) {
-+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Completed throwable", ChunkTaskScheduler.stringIfNull(throwable)
-+ ), thr2);
-+ if (thr2 instanceof ThreadDeath) {
-+ throw (ThreadDeath)thr2;
-+ }
-+ }
-+ }
-+
-+ private void complete0(final ChunkAccess chunk, final Throwable throwable) {
-+ if ((boolean)COMPLETED_HANDLE.getAndSet((ChunkProgressionTask)this, (boolean)true)) {
-+ throw new IllegalStateException("Already completed");
-+ }
-+ this.completedChunk = chunk;
-+ this.completedThrowable = throwable;
-+
-+ BiConsumer<ChunkAccess, Throwable> consumer;
-+ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
-+ consumer.accept(chunk, throwable);
-+ }
-+ }
-+
-+ @Override
-+ public String toString() {
-+ return "ChunkProgressionTask{class: " + this.getClass().getName() + ", for world: " + this.world.getWorld().getName() +
-+ ", chunk: (" + this.chunkX + "," + this.chunkZ + "), hashcode: " + System.identityHashCode(this) + ", priority: " + this.getPriority() +
-+ ", status: " + this.getTargetStatus().toString() + ", scheduled: " + this.isScheduled() + "}";
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..84cc9397237fa0c17aa1012dfb5683c90eb6d3b8
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkTaskScheduler.java
-@@ -0,0 +1,780 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.configuration.GlobalConfiguration;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import net.minecraft.CrashReport;
-+import net.minecraft.CrashReportCategory;
-+import net.minecraft.ReportedException;
-+import io.papermc.paper.util.MCUtil;
-+import net.minecraft.server.MinecraftServer;
-+import net.minecraft.server.level.ChunkHolder;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.server.level.TicketType;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.LevelChunk;
-+import org.bukkit.Bukkit;
-+import org.slf4j.Logger;
-+import java.io.File;
-+import java.util.ArrayDeque;
-+import java.util.ArrayList;
-+import java.util.Arrays;
-+import java.util.Collections;
-+import java.util.List;
-+import java.util.Map;
-+import java.util.Objects;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.concurrent.atomic.AtomicLong;
-+import java.util.concurrent.locks.ReentrantLock;
-+import java.util.function.BooleanSupplier;
-+import java.util.function.Consumer;
-+
-+public final class ChunkTaskScheduler {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ static int newChunkSystemIOThreads;
-+ static int newChunkSystemWorkerThreads;
-+ static int newChunkSystemGenParallelism;
-+ static int newChunkSystemLoadParallelism;
-+
-+ public static ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool workerThreads;
-+
-+ private static boolean initialised = false;
-+
-+ public static void init(final GlobalConfiguration.ChunkSystem config) {
-+ if (initialised) {
-+ return;
-+ }
-+ initialised = true;
-+ newChunkSystemIOThreads = config.ioThreads;
-+ newChunkSystemWorkerThreads = config.workerThreads;
-+ if (newChunkSystemIOThreads < 0) {
-+ newChunkSystemIOThreads = 1;
-+ } else {
-+ newChunkSystemIOThreads = Math.max(1, newChunkSystemIOThreads);
-+ }
-+ int defaultWorkerThreads = Runtime.getRuntime().availableProcessors() / 2;
-+ if (defaultWorkerThreads <= 4) {
-+ defaultWorkerThreads = defaultWorkerThreads <= 3 ? 1 : 2;
-+ } else {
-+ defaultWorkerThreads = defaultWorkerThreads / 2;
-+ }
-+ defaultWorkerThreads = Integer.getInteger("Paper.WorkerThreadCount", Integer.valueOf(defaultWorkerThreads));
-+
-+ if (newChunkSystemWorkerThreads < 0) {
-+ newChunkSystemWorkerThreads = defaultWorkerThreads;
-+ } else {
-+ newChunkSystemWorkerThreads = Math.max(1, newChunkSystemWorkerThreads);
-+ }
-+
-+ String newChunkSystemGenParallelism = config.genParallelism;
-+ if (newChunkSystemGenParallelism.equalsIgnoreCase("default")) {
-+ newChunkSystemGenParallelism = "true";
-+ }
-+ boolean useParallelGen;
-+ if (newChunkSystemGenParallelism.equalsIgnoreCase("on") || newChunkSystemGenParallelism.equalsIgnoreCase("enabled")
-+ || newChunkSystemGenParallelism.equalsIgnoreCase("true")) {
-+ useParallelGen = true;
-+ } else if (newChunkSystemGenParallelism.equalsIgnoreCase("off") || newChunkSystemGenParallelism.equalsIgnoreCase("disabled")
-+ || newChunkSystemGenParallelism.equalsIgnoreCase("false")) {
-+ useParallelGen = false;
-+ } else {
-+ throw new IllegalStateException("Invalid option for gen-parallelism: must be one of [on, off, enabled, disabled, true, false, default]");
-+ }
-+
-+ ChunkTaskScheduler.newChunkSystemGenParallelism = useParallelGen ? newChunkSystemWorkerThreads : 1;
-+ ChunkTaskScheduler.newChunkSystemLoadParallelism = newChunkSystemWorkerThreads;
-+
-+ io.papermc.paper.chunk.system.io.RegionFileIOThread.init(newChunkSystemIOThreads);
-+ workerThreads = new ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadPool(
-+ "Paper Chunk System Worker Pool", newChunkSystemWorkerThreads,
-+ (final Thread thread, final Integer id) -> {
-+ thread.setPriority(Thread.NORM_PRIORITY - 2);
-+ thread.setName("Tuinity Chunk System Worker #" + id.intValue());
-+ thread.setUncaughtExceptionHandler(io.papermc.paper.chunk.system.scheduling.NewChunkHolder.CHUNKSYSTEM_UNCAUGHT_EXCEPTION_HANDLER);
-+ }, (long)(20.0e6)); // 20ms
-+
-+ LOGGER.info("Chunk system is using " + newChunkSystemIOThreads + " I/O threads, " + newChunkSystemWorkerThreads + " worker threads, and gen parallelism of " + ChunkTaskScheduler.newChunkSystemGenParallelism + " threads");
-+ }
-+
-+ public final ServerLevel world;
-+ public final PrioritisedThreadPool workers;
-+ public final PrioritisedThreadPool.PrioritisedPoolExecutor lightExecutor;
-+ public final PrioritisedThreadPool.PrioritisedPoolExecutor genExecutor;
-+ public final PrioritisedThreadPool.PrioritisedPoolExecutor parallelGenExecutor;
-+ public final PrioritisedThreadPool.PrioritisedPoolExecutor loadExecutor;
-+
-+ private final PrioritisedThreadedTaskQueue mainThreadExecutor = new PrioritisedThreadedTaskQueue();
-+
-+ final ReentrantLock schedulingLock = new ReentrantLock();
-+ public final ChunkHolderManager chunkHolderManager;
-+
-+ static {
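-+ // writeRadius: how many neighbouring chunks (in each direction) a status may write to while
-+ // it runs; the scheduler presumably uses this to keep overlapping generation tasks apart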
-+ ChunkStatus.EMPTY.writeRadius = 0;
-+ ChunkStatus.STRUCTURE_STARTS.writeRadius = 0;
-+ ChunkStatus.STRUCTURE_REFERENCES.writeRadius = 0;
-+ ChunkStatus.BIOMES.writeRadius = 0;
-+ ChunkStatus.NOISE.writeRadius = 0;
-+ ChunkStatus.SURFACE.writeRadius = 0;
-+ ChunkStatus.CARVERS.writeRadius = 0;
-+ ChunkStatus.LIQUID_CARVERS.writeRadius = 0;
-+ ChunkStatus.FEATURES.writeRadius = 1;
-+ ChunkStatus.LIGHT.writeRadius = 1;
-+ ChunkStatus.SPAWN.writeRadius = 0;
-+ ChunkStatus.HEIGHTMAPS.writeRadius = 0;
-+ ChunkStatus.FULL.writeRadius = 0;
-+
-+ /*
-+ It's important that the neighbour read radius is taken into account. If _any_ later status is using some chunk as
-+ a neighbour, it must be also safe if that neighbour is being generated. i.e for any status later than FEATURES,
-+ for a status to be parallel safe it must not read the block data from its neighbours.
-+ */
-+ final List<ChunkStatus> parallelCapableStatus = Arrays.asList(
-+ // No-op executor.
-+ ChunkStatus.EMPTY,
-+
-+ // This is parallel capable, as CB has fixed the concurrency issue with stronghold generations.
-+ // Does not touch neighbour chunks.
-+ // TODO On another note, what the fuck is StructureFeatureManager.StructureCheck and why is it used? it's leaking
-+ ChunkStatus.STRUCTURE_STARTS,
-+
-+ // Surprisingly this is parallel capable. It is simply reading the already-created structure starts
-+            // into the structure references for the chunk. So while it reads from its neighbours, its neighbours
-+ // will not change, even if executed in parallel.
-+ ChunkStatus.STRUCTURE_REFERENCES,
-+
-+ // Safe. Mojang runs it in parallel as well.
-+ ChunkStatus.BIOMES,
-+
-+ // Safe. Mojang runs it in parallel as well.
-+ ChunkStatus.NOISE,
-+
-+ // Parallel safe. Only touches the target chunk. Biome retrieval is now noise based, which is
-+ // completely thread-safe.
-+ ChunkStatus.SURFACE,
-+
-+ // No global state is modified in the carvers. It only touches the specified chunk. So it is parallel safe.
-+ ChunkStatus.CARVERS,
-+
-+ // No-op executor. Was replaced in 1.18 with carvers, I think.
-+ ChunkStatus.LIQUID_CARVERS,
-+
-+ // FEATURES is not parallel safe. It writes to neighbours.
-+
-+ // LIGHT is not parallel safe. It also doesn't run on the generation executor, so no point.
-+
-+            // SPAWN at first appears parallel safe: it only writes to the specified chunk, its state is not read
-+            // by later statuses, and while it writes to a worldgenregion, the region size is always 0 -
-+            // see the task margin. However, if a neighbouring FEATURES chunk is unloaded and then fails to load
-+            // back in (for whatever reason), FEATURES would write to this chunk - and since SPAWN reads blocks
-+            // from its own chunk, it is not safe to execute in parallel.
-+            // SPAWN
-+
-+ // No-op executor.
-+ ChunkStatus.HEIGHTMAPS
-+
-+ // FULL is executed on main.
-+ );
-+
-+ for (final ChunkStatus status : parallelCapableStatus) {
-+ status.isParallelCapable = true;
-+ }
-+ }
-+
-+ public ChunkTaskScheduler(final ServerLevel world, final PrioritisedThreadPool workers) {
-+ this.world = world;
-+ this.workers = workers;
-+
-+ final String worldName = world.getWorld().getName();
-+ this.genExecutor = workers.createExecutor("Chunk single-threaded generation executor for world '" + worldName + "'", 1);
-+        // same as genExecutor, as there are race conditions between updating blocks in FEATURES status and lighting chunks
-+ this.lightExecutor = this.genExecutor;
-+ this.parallelGenExecutor = newChunkSystemGenParallelism <= 1 ? this.genExecutor
-+ : workers.createExecutor("Chunk parallel generation executor for world '" + worldName + "'", newChunkSystemGenParallelism);
-+ this.loadExecutor = workers.createExecutor("Chunk load executor for world '" + worldName + "'", newChunkSystemLoadParallelism);
-+ this.chunkHolderManager = new ChunkHolderManager(world, this);
-+ }
-+
-+ private final AtomicBoolean failedChunkSystem = new AtomicBoolean();
-+
-+ public static Object stringIfNull(final Object obj) {
-+ return obj == null ? "null" : obj;
-+ }
-+
-+    public void unrecoverableChunkSystemFailure(final int chunkX, final int chunkZ, final Map<String, Object> objectsOfInterest, final Throwable thr) {
-+ final NewChunkHolder holder = this.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
-+ LOGGER.error("Chunk system error at chunk (" + chunkX + "," + chunkZ + "), holder: " + holder + ", exception:", new Throwable(thr));
-+
-+ if (this.failedChunkSystem.getAndSet(true)) {
-+ return;
-+ }
-+
-+ final ReportedException reportedException = thr instanceof ReportedException ? (ReportedException)thr : new ReportedException(new CrashReport("Chunk system error", thr));
-+
-+ CrashReportCategory crashReportCategory = reportedException.getReport().addCategory("Chunk system details");
-+ crashReportCategory.setDetail("Chunk coordinate", new ChunkPos(chunkX, chunkZ).toString());
-+ crashReportCategory.setDetail("ChunkHolder", Objects.toString(holder));
-+ crashReportCategory.setDetail("unrecoverableChunkSystemFailure caller thread", Thread.currentThread().getName());
-+
-+ crashReportCategory = reportedException.getReport().addCategory("Chunk System Objects of Interest");
-+        for (final Map.Entry<String, Object> entry : objectsOfInterest.entrySet()) {
-+ if (entry.getValue() instanceof Throwable thrObject) {
-+ crashReportCategory.setDetailError(Objects.toString(entry.getKey()), thrObject);
-+ } else {
-+ crashReportCategory.setDetail(Objects.toString(entry.getKey()), Objects.toString(entry.getValue()));
-+ }
-+ }
-+
-+ final Runnable crash = () -> {
-+ throw new RuntimeException("Chunk system crash propagated from unrecoverableChunkSystemFailure", reportedException);
-+ };
-+
-+ // this may not be good enough, specifically thanks to stupid ass plugins swallowing exceptions
-+ this.scheduleChunkTask(chunkX, chunkZ, crash, PrioritisedExecutor.Priority.BLOCKING);
-+ // so, make the main thread pick it up
-+ MinecraftServer.chunkSystemCrash = new RuntimeException("Chunk system crash propagated from unrecoverableChunkSystemFailure", reportedException);
-+ }
-+
-+ public boolean executeMainThreadTask() {
-+ TickThread.ensureTickThread("Cannot execute main thread task off-main");
-+ return this.mainThreadExecutor.executeTask();
-+ }
-+
-+ public void raisePriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ this.chunkHolderManager.raisePriority(x, z, priority);
-+ }
-+
-+ public void setPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ this.chunkHolderManager.setPriority(x, z, priority);
-+ }
-+
-+ public void lowerPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
-+ this.chunkHolderManager.lowerPriority(x, z, priority);
-+ }
-+
-+ private final AtomicLong chunkLoadCounter = new AtomicLong();
-+
-+ public void scheduleTickingState(final int chunkX, final int chunkZ, final ChunkHolder.FullChunkStatus toStatus,
-+ final boolean addTicket, final PrioritisedExecutor.Priority priority,
-+                                     final Consumer<LevelChunk> onComplete) {
-+ if (!TickThread.isTickThread()) {
-+ this.scheduleChunkTask(chunkX, chunkZ, () -> {
-+ ChunkTaskScheduler.this.scheduleTickingState(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-+ }, priority);
-+ return;
-+ }
-+ if (this.chunkHolderManager.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
-+ }
-+ if (this.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot schedule chunk loading recursively");
-+ }
-+
-+ if (toStatus == ChunkHolder.FullChunkStatus.INACCESSIBLE) {
-+ throw new IllegalArgumentException("Cannot wait for INACCESSIBLE status");
-+ }
-+
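-+        // ticket level 33 maps to BORDER; each step up in full status (TICKING, ENTITY_TICKING)
-+        // requires a ticket level one lower (32, 31) - hence 33 - (ordinal - 1)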
-+ final int minLevel = 33 - (toStatus.ordinal() - 1);
-+ final Long chunkReference = addTicket ? Long.valueOf(this.chunkLoadCounter.getAndIncrement()) : null;
-+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+
-+ if (addTicket) {
-+ this.chunkHolderManager.addTicketAtLevel(TicketType.CHUNK_LOAD, chunkKey, minLevel, chunkReference);
-+ this.chunkHolderManager.processTicketUpdates();
-+ }
-+
-+        final Consumer<LevelChunk> loadCallback = (final LevelChunk chunk) -> {
-+ try {
-+ if (onComplete != null) {
-+ onComplete.accept(chunk);
-+ }
-+ } finally {
-+ if (addTicket) {
-+ ChunkTaskScheduler.this.chunkHolderManager.addAndRemoveTickets(chunkKey,
-+ TicketType.UNKNOWN, minLevel, new ChunkPos(chunkKey),
-+ TicketType.CHUNK_LOAD, minLevel, chunkReference
-+ );
-+ }
-+ }
-+ };
-+
-+ final boolean scheduled;
-+ final LevelChunk chunk;
-+ this.chunkHolderManager.ticketLock.lock();
-+ try {
-+ this.schedulingLock.lock();
-+ try {
-+ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
-+ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
-+ scheduled = false;
-+ chunk = null;
-+ } else {
-+ final ChunkHolder.FullChunkStatus currStatus = chunkHolder.getChunkStatus();
-+ if (currStatus.isOrAfter(toStatus)) {
-+ scheduled = false;
-+ chunk = (LevelChunk)chunkHolder.getCurrentChunk();
-+ } else {
-+ scheduled = true;
-+ chunk = null;
-+
-+ final int radius = toStatus.ordinal() - 1; // 0 -> BORDER, 1 -> TICKING, 2 -> ENTITY_TICKING
-+ for (int dz = -radius; dz <= radius; ++dz) {
-+ for (int dx = -radius; dx <= radius; ++dx) {
-+ final NewChunkHolder neighbour =
-+ (dx | dz) == 0 ? chunkHolder : this.chunkHolderManager.getChunkHolder(dx + chunkX, dz + chunkZ);
-+ if (neighbour != null) {
-+ neighbour.raisePriority(priority);
-+ }
-+ }
-+ }
-+
-+ // ticket level should schedule for us
-+ chunkHolder.addFullStatusConsumer(toStatus, loadCallback);
-+ }
-+ }
-+ } finally {
-+ this.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.chunkHolderManager.ticketLock.unlock();
-+ }
-+
-+ if (!scheduled) {
-+ // couldn't schedule
-+ try {
-+ loadCallback.accept(chunk);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ LOGGER.error("Failed to process chunk full status callback", thr);
-+ }
-+ }
-+ }
-+
-+ public void scheduleChunkLoad(final int chunkX, final int chunkZ, final boolean gen, final ChunkStatus toStatus, final boolean addTicket,
-+                                  final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
-+ if (gen) {
-+ this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-+ return;
-+ }
-+ this.scheduleChunkLoad(chunkX, chunkZ, ChunkStatus.EMPTY, addTicket, priority, (final ChunkAccess chunk) -> {
-+ if (chunk == null) {
-+ onComplete.accept(null);
-+ } else {
-+ if (chunk.getStatus().isOrAfter(toStatus)) {
-+ this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-+ } else {
-+ onComplete.accept(null);
-+ }
-+ }
-+ });
-+ }
-+
-+ public void scheduleChunkLoad(final int chunkX, final int chunkZ, final ChunkStatus toStatus, final boolean addTicket,
-+                                  final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
-+ if (!TickThread.isTickThread()) {
-+ this.scheduleChunkTask(chunkX, chunkZ, () -> {
-+ ChunkTaskScheduler.this.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
-+ }, priority);
-+ return;
-+ }
-+ if (this.chunkHolderManager.ticketLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot schedule chunk load during ticket level update");
-+ }
-+ if (this.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Cannot schedule chunk loading recursively");
-+ }
-+
-+ if (toStatus == ChunkStatus.FULL) {
-+ this.scheduleTickingState(chunkX, chunkZ, ChunkHolder.FullChunkStatus.BORDER, addTicket, priority, (Consumer)onComplete);
-+ return;
-+ }
-+
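-+        // ticket level 33 corresponds to FULL; earlier generation statuses are held at higher
-+        // levels, offset by the status's distance from FULL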
-+ final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
-+ final Long chunkReference = addTicket ? Long.valueOf(this.chunkLoadCounter.getAndIncrement()) : null;
-+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
-+
-+ if (addTicket) {
-+ this.chunkHolderManager.addTicketAtLevel(TicketType.CHUNK_LOAD, chunkKey, minLevel, chunkReference);
-+ this.chunkHolderManager.processTicketUpdates();
-+ }
-+
-+        final Consumer<ChunkAccess> loadCallback = (final ChunkAccess chunk) -> {
-+ try {
-+ if (onComplete != null) {
-+ onComplete.accept(chunk);
-+ }
-+ } finally {
-+ if (addTicket) {
-+ ChunkTaskScheduler.this.chunkHolderManager.addAndRemoveTickets(chunkKey,
-+ TicketType.UNKNOWN, minLevel, new ChunkPos(chunkKey),
-+ TicketType.CHUNK_LOAD, minLevel, chunkReference
-+ );
-+ }
-+ }
-+ };
-+
-+        final List<ChunkProgressionTask> tasks = new ArrayList<>();
-+
-+ final boolean scheduled;
-+ final ChunkAccess chunk;
-+ this.chunkHolderManager.ticketLock.lock();
-+ try {
-+ this.schedulingLock.lock();
-+ try {
-+ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkKey);
-+ if (chunkHolder == null || chunkHolder.getTicketLevel() > minLevel) {
-+ scheduled = false;
-+ chunk = null;
-+ } else {
-+ final ChunkStatus genStatus = chunkHolder.getCurrentGenStatus();
-+ if (genStatus != null && genStatus.isOrAfter(toStatus)) {
-+ scheduled = false;
-+ chunk = chunkHolder.getCurrentChunk();
-+ } else {
-+ scheduled = true;
-+ chunk = null;
-+ chunkHolder.raisePriority(priority);
-+
-+ if (!chunkHolder.upgradeGenTarget(toStatus)) {
-+ this.schedule(chunkX, chunkZ, toStatus, chunkHolder, tasks);
-+ }
-+ chunkHolder.addStatusConsumer(toStatus, loadCallback);
-+ }
-+ }
-+ } finally {
-+ this.schedulingLock.unlock();
-+ }
-+ } finally {
-+ this.chunkHolderManager.ticketLock.unlock();
-+ }
-+
-+ for (int i = 0, len = tasks.size(); i < len; ++i) {
-+ tasks.get(i).schedule();
-+ }
-+
-+ if (!scheduled) {
-+ // couldn't schedule
-+ try {
-+ loadCallback.accept(chunk);
-+ } catch (final ThreadDeath thr) {
-+ throw thr;
-+ } catch (final Throwable thr) {
-+ LOGGER.error("Failed to process chunk status callback", thr);
-+ }
-+ }
-+ }
-+
-+ private ChunkProgressionTask createTask(final int chunkX, final int chunkZ, final ChunkAccess chunk,
-+                                             final NewChunkHolder chunkHolder, final List<ChunkAccess> neighbours,
-+ final ChunkStatus toStatus, final PrioritisedExecutor.Priority initialPriority) {
-+ if (toStatus == ChunkStatus.EMPTY) {
-+ return new ChunkLoadTask(this, this.world, chunkX, chunkZ, chunkHolder, initialPriority);
-+ }
-+ if (toStatus == ChunkStatus.LIGHT) {
-+ return new ChunkLightTask(this, this.world, chunkX, chunkZ, chunk, initialPriority);
-+ }
-+ if (toStatus == ChunkStatus.FULL) {
-+ return new ChunkFullTask(this, this.world, chunkX, chunkZ, chunkHolder, chunk, initialPriority);
-+ }
-+
-+ return new ChunkUpgradeGenericStatusTask(this, this.world, chunkX, chunkZ, chunk, neighbours, toStatus, initialPriority);
-+ }
-+
-+ ChunkProgressionTask schedule(final int chunkX, final int chunkZ, final ChunkStatus targetStatus, final NewChunkHolder chunkHolder,
-+                                  final List<ChunkProgressionTask> allTasks) {
-+ return this.schedule(chunkX, chunkZ, targetStatus, chunkHolder, allTasks, chunkHolder.getEffectivePriority());
-+ }
-+
-+ // rets new task scheduled for the _specified_ chunk
-+ // note: this must hold the scheduling lock
-+ // minPriority is only used to pass the priority through to neighbours, as priority calculation has not yet been done
-+ // schedule will ignore the generation target, so it should be checked by the caller to ensure the target is not regressed!
-+ private ChunkProgressionTask schedule(final int chunkX, final int chunkZ, final ChunkStatus targetStatus,
-+                                          final NewChunkHolder chunkHolder, final List<ChunkProgressionTask> allTasks,
-+ final PrioritisedExecutor.Priority minPriority) {
-+ if (!this.schedulingLock.isHeldByCurrentThread()) {
-+ throw new IllegalStateException("Not holding scheduling lock");
-+ }
-+
-+ if (chunkHolder.hasGenerationTask()) {
-+ chunkHolder.upgradeGenTarget(targetStatus);
-+ return null;
-+ }
-+
-+ final PrioritisedExecutor.Priority requestedPriority = PrioritisedExecutor.Priority.max(minPriority, chunkHolder.getEffectivePriority());
-+ final ChunkStatus currentGenStatus = chunkHolder.getCurrentGenStatus();
-+ final ChunkAccess chunk = chunkHolder.getCurrentChunk();
-+
-+ if (currentGenStatus == null) {
-+ // not yet loaded
-+ final ChunkProgressionTask task = this.createTask(
-+ chunkX, chunkZ, chunk, chunkHolder, Collections.emptyList(), ChunkStatus.EMPTY, requestedPriority
-+ );
-+
-+ allTasks.add(task);
-+
-+            final List<NewChunkHolder> chunkHolderNeighbours = new ArrayList<>(1);
-+ chunkHolderNeighbours.add(chunkHolder);
-+
-+ chunkHolder.setGenerationTarget(targetStatus);
-+ chunkHolder.setGenerationTask(task, ChunkStatus.EMPTY, chunkHolderNeighbours);
-+
-+ return task;
-+ }
-+
-+ if (currentGenStatus.isOrAfter(targetStatus)) {
-+ // nothing to do
-+ return null;
-+ }
-+
-+ // we know for sure now that we want to schedule _something_, so set the target
-+ chunkHolder.setGenerationTarget(targetStatus);
-+
-+ final ChunkStatus chunkRealStatus = chunk.getStatus();
-+ final ChunkStatus toStatus = currentGenStatus.getNextStatus();
-+
-+ // if this chunk has already generated up to or past the specified status, then we don't
-+ // need the neighbours AT ALL.
-+ final int neighbourReadRadius = chunkRealStatus.isOrAfter(toStatus) ? toStatus.loadRange : toStatus.getRange();
-+
-+ boolean unGeneratedNeighbours = false;
-+
-+ // copied from MCUtil.getSpiralOutChunks
-+ for (int r = 1; r <= neighbourReadRadius; r++) {
-+ int x = -r;
-+ int z = r;
-+
-+ // Iterates the edge of half of the box; then negates for other half.
-+ while (x <= r && z > -r) {
-+ final int radius = Math.max(Math.abs(x), Math.abs(z));
-+ final ChunkStatus requiredNeighbourStatus = ChunkMap.getDependencyStatus(toStatus, radius);
-+
-+ unGeneratedNeighbours |= this.checkNeighbour(
-+ chunkX + x, chunkZ + z, requiredNeighbourStatus, chunkHolder, allTasks, requestedPriority
-+ );
-+ unGeneratedNeighbours |= this.checkNeighbour(
-+ chunkX - x, chunkZ - z, requiredNeighbourStatus, chunkHolder, allTasks, requestedPriority
-+ );
-+
-+ if (x < r) {
-+ x++;
-+ } else {
-+ z--;
-+ }
-+ }
-+ }
-+
-+ if (unGeneratedNeighbours) {
-+ // can't schedule, but neighbour completion will schedule for us when they're ALL done
-+
-+ // propagate our priority to neighbours
-+ chunkHolder.recalculateNeighbourPriorities();
-+ return null;
-+ }
-+
-+ // need to gather neighbours
-+
-+        final List<ChunkAccess> neighbours;
-+        final List<NewChunkHolder> chunkHolderNeighbours;
-+ if (neighbourReadRadius <= 0) {
-+ neighbours = new ArrayList<>(1);
-+ chunkHolderNeighbours = new ArrayList<>(1);
-+ neighbours.add(chunk);
-+ chunkHolderNeighbours.add(chunkHolder);
-+ } else {
-+ // the iteration order is _very_ important, as all generation statuses expect a certain order such that:
-+ // chunkAtRelative = neighbours.get(relX + relZ * (2 * radius + 1))
-+ neighbours = new ArrayList<>((2 * neighbourReadRadius + 1) * (2 * neighbourReadRadius + 1));
-+ chunkHolderNeighbours = new ArrayList<>((2 * neighbourReadRadius + 1) * (2 * neighbourReadRadius + 1));
-+ for (int dz = -neighbourReadRadius; dz <= neighbourReadRadius; ++dz) {
-+ for (int dx = -neighbourReadRadius; dx <= neighbourReadRadius; ++dx) {
-+ final NewChunkHolder holder = (dx | dz) == 0 ? chunkHolder : this.chunkHolderManager.getChunkHolder(dx + chunkX, dz + chunkZ);
-+ neighbours.add(holder.getChunkForNeighbourAccess());
-+ chunkHolderNeighbours.add(holder);
-+ }
-+ }
-+ }
-+
-+ final ChunkProgressionTask task = this.createTask(chunkX, chunkZ, chunk, chunkHolder, neighbours, toStatus, chunkHolder.getEffectivePriority());
-+ allTasks.add(task);
-+
-+ chunkHolder.setGenerationTask(task, toStatus, chunkHolderNeighbours);
-+
-+ return task;
-+ }
-+
-+ // rets true if the neighbour is not at the required status, false otherwise
-+ private boolean checkNeighbour(final int chunkX, final int chunkZ, final ChunkStatus requiredStatus, final NewChunkHolder center,
-+                                   final List<ChunkProgressionTask> tasks, final PrioritisedExecutor.Priority minPriority) {
-+ final NewChunkHolder chunkHolder = this.chunkHolderManager.getChunkHolder(chunkX, chunkZ);
-+
-+ if (chunkHolder == null) {
-+ throw new IllegalStateException("Missing chunkholder when required");
-+ }
-+
-+ final ChunkStatus holderStatus = chunkHolder.getCurrentGenStatus();
-+ if (holderStatus != null && holderStatus.isOrAfter(requiredStatus)) {
-+ return false;
-+ }
-+
-+ if (chunkHolder.hasFailedGeneration()) {
-+ return true;
-+ }
-+
-+ center.addGenerationBlockingNeighbour(chunkHolder);
-+ chunkHolder.addWaitingNeighbour(center, requiredStatus);
-+
-+ if (chunkHolder.upgradeGenTarget(requiredStatus)) {
-+ return true;
-+ }
-+
-+ // not at status required, so we need to schedule its generation
-+ this.schedule(
-+ chunkX, chunkZ, requiredStatus, chunkHolder, tasks, minPriority
-+ );
-+
-+ return true;
-+ }
-+
-+ /**
-+ * @deprecated Chunk tasks must be tied to coordinates in the future
-+ */
-+ @Deprecated
-+ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final Runnable run) {
-+ return this.scheduleChunkTask(run, PrioritisedExecutor.Priority.NORMAL);
-+ }
-+
-+ /**
-+ * @deprecated Chunk tasks must be tied to coordinates in the future
-+ */
-+ @Deprecated
-+ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final Runnable run, final PrioritisedExecutor.Priority priority) {
-+ return this.mainThreadExecutor.queueRunnable(run, priority);
-+ }
-+
-+ public PrioritisedExecutor.PrioritisedTask createChunkTask(final int chunkX, final int chunkZ, final Runnable run) {
-+ return this.createChunkTask(chunkX, chunkZ, run, PrioritisedExecutor.Priority.NORMAL);
-+ }
-+
-+ public PrioritisedExecutor.PrioritisedTask createChunkTask(final int chunkX, final int chunkZ, final Runnable run,
-+ final PrioritisedExecutor.Priority priority) {
-+ return this.mainThreadExecutor.createTask(run, priority);
-+ }
-+
-+ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final int chunkX, final int chunkZ, final Runnable run) {
-+ return this.mainThreadExecutor.queueRunnable(run);
-+ }
-+
-+ public PrioritisedExecutor.PrioritisedTask scheduleChunkTask(final int chunkX, final int chunkZ, final Runnable run,
-+ final PrioritisedExecutor.Priority priority) {
-+ return this.mainThreadExecutor.queueRunnable(run, priority);
-+ }
-+
-+ public void executeTasksUntil(final BooleanSupplier exit) {
-+ if (Bukkit.isPrimaryThread()) {
-+ this.mainThreadExecutor.executeConditionally(exit);
-+ } else {
-+ long counter = 1L;
-+ while (!exit.getAsBoolean()) {
-+ counter = ConcurrentUtil.linearLongBackoff(counter, 100_000L, 5_000_000L); // 100us, 5ms
-+ }
-+ }
-+ }
-+
-+ public boolean halt(final boolean sync, final long maxWaitNS) {
-+ this.lightExecutor.halt();
-+ this.genExecutor.halt();
-+ this.parallelGenExecutor.halt();
-+ this.loadExecutor.halt();
-+ final long time = System.nanoTime();
-+ if (sync) {
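-+            // spin with linearly increasing backoff (0.5ms steps, capped at 50ms) until every
-+            // executor has drained or the wait budget elapses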
-+ for (long failures = 9L;; failures = ConcurrentUtil.linearLongBackoff(failures, 500_000L, 50_000_000L)) {
-+ if (
-+ !this.lightExecutor.isActive() &&
-+ !this.genExecutor.isActive() &&
-+ !this.parallelGenExecutor.isActive() &&
-+ !this.loadExecutor.isActive()
-+ ) {
-+ return true;
-+ }
-+ if ((System.nanoTime() - time) >= maxWaitNS) {
-+ return false;
-+ }
-+ }
-+ }
-+
-+ return true;
-+ }
-+
-+    public static final ArrayDeque<ChunkInfo> WAITING_CHUNKS = new ArrayDeque<>(); // stack
-+
-+ public static final class ChunkInfo {
-+
-+ public final int chunkX;
-+ public final int chunkZ;
-+ public final ServerLevel world;
-+
-+ public ChunkInfo(final int chunkX, final int chunkZ, final ServerLevel world) {
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.world = world;
-+ }
-+
-+ @Override
-+ public String toString() {
-+ return "[( " + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "']";
-+ }
-+ }
-+
-+ public static void pushChunkWait(final ServerLevel world, final int chunkX, final int chunkZ) {
-+ synchronized (WAITING_CHUNKS) {
-+ WAITING_CHUNKS.push(new ChunkInfo(chunkX, chunkZ, world));
-+ }
-+ }
-+
-+ public static void popChunkWait() {
-+ synchronized (WAITING_CHUNKS) {
-+ WAITING_CHUNKS.pop();
-+ }
-+ }
-+
-+ public static ChunkInfo[] getChunkInfos() {
-+ synchronized (WAITING_CHUNKS) {
-+ return WAITING_CHUNKS.toArray(new ChunkInfo[0]);
-+ }
-+ }
-+
-+ public static void dumpAllChunkLoadInfo(final boolean longPrint) {
-+ final ChunkInfo[] chunkInfos = getChunkInfos();
-+ if (chunkInfos.length > 0) {
-+ LOGGER.error("Chunk wait task info below: ");
-+ for (final ChunkInfo chunkInfo : chunkInfos) {
-+ final NewChunkHolder holder = chunkInfo.world.chunkTaskScheduler.chunkHolderManager.getChunkHolder(chunkInfo.chunkX, chunkInfo.chunkZ);
-+ LOGGER.error("Chunk wait: " + chunkInfo);
-+ LOGGER.error("Chunk holder: " + holder);
-+ }
-+
-+ if (longPrint) {
-+ final File file = new File(new File(new File("."), "debug"), "chunks-watchdog.txt");
-+ LOGGER.error("Writing chunk information dump to " + file);
-+ try {
-+ MCUtil.dumpChunks(file, true);
-+ LOGGER.error("Successfully written chunk information!");
-+ } catch (final Throwable thr) {
-+ MinecraftServer.LOGGER.warn("Failed to dump chunk information to file " + file.toString(), thr);
-+ }
-+ }
-+ }
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..73ce0909bd89244835a0d0f2030a25871461f1e0
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkUpgradeGenericStatusTask.java
-@@ -0,0 +1,209 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import com.mojang.datafixers.util.Either;
-+import com.mojang.logging.LogUtils;
-+import net.minecraft.server.level.ChunkHolder;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerChunkCache;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.ProtoChunk;
-+import org.slf4j.Logger;
-+import java.lang.invoke.VarHandle;
-+import java.util.List;
-+import java.util.Map;
-+import java.util.concurrent.CompletableFuture;
-+
-+public final class ChunkUpgradeGenericStatusTask extends ChunkProgressionTask implements Runnable {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ protected final ChunkAccess fromChunk;
-+ protected final ChunkStatus fromStatus;
-+ protected final ChunkStatus toStatus;
-+    protected final List<ChunkAccess> neighbours;
-+
-+ protected final PrioritisedExecutor.PrioritisedTask generateTask;
-+
-+ public ChunkUpgradeGenericStatusTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+                                        final int chunkZ, final ChunkAccess chunk, final List<ChunkAccess> neighbours,
-+ final ChunkStatus toStatus, final PrioritisedExecutor.Priority priority) {
-+ super(scheduler, world, chunkX, chunkZ);
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.fromChunk = chunk;
-+ this.fromStatus = chunk.getStatus();
-+ this.toStatus = toStatus;
-+ this.neighbours = neighbours;
-+ this.generateTask = (this.toStatus.isParallelCapable ? this.scheduler.parallelGenExecutor : this.scheduler.genExecutor)
-+ .createTask(this, priority);
-+ }
-+
-+ @Override
-+ public ChunkStatus getTargetStatus() {
-+ return this.toStatus;
-+ }
-+
-+ private boolean isEmptyTask() {
-+ // must use fromStatus here to avoid any race condition with run() overwriting the status
-+ final boolean generation = !this.fromStatus.isOrAfter(this.toStatus);
-+ return (generation && this.toStatus.isEmptyGenStatus()) || (!generation && this.toStatus.isEmptyLoadStatus());
-+ }
-+
-+ @Override
-+ public void run() {
-+ final ChunkAccess chunk = this.fromChunk;
-+
-+ final ServerChunkCache serverChunkCache = this.world.chunkSource;
-+ final ChunkMap chunkMap = serverChunkCache.chunkMap;
-+
-+        final CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> completeFuture;
-+
-+ final boolean generation;
-+ boolean completing = false;
-+
-+ // note: should optimise the case where the chunk does not need to execute the status, because
-+ // schedule() calls this synchronously if it will run through that path
-+
-+ try {
-+ generation = !chunk.getStatus().isOrAfter(this.toStatus);
-+ if (generation) {
-+ if (this.toStatus.isEmptyGenStatus()) {
-+ if (chunk instanceof ProtoChunk) {
-+ ((ProtoChunk)chunk).setStatus(this.toStatus);
-+ }
-+ completing = true;
-+ this.complete(chunk, null);
-+ return;
-+ }
-+ completeFuture = this.toStatus.generate(Runnable::run, this.world, chunkMap.generator, chunkMap.structureTemplateManager,
-+ serverChunkCache.getLightEngine(), null, this.neighbours, false)
-+                    .whenComplete((final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either, final Throwable throwable) -> {
-+ final ChunkAccess newChunk = (either == null) ? null : either.left().orElse(null);
-+ if (newChunk instanceof ProtoChunk) {
-+ ((ProtoChunk)newChunk).setStatus(ChunkUpgradeGenericStatusTask.this.toStatus);
-+ }
-+ }
-+ );
-+ } else {
-+ if (this.toStatus.isEmptyLoadStatus()) {
-+ completing = true;
-+ this.complete(chunk, null);
-+ return;
-+ }
-+ completeFuture = this.toStatus.load(this.world, chunkMap.structureTemplateManager, serverChunkCache.getLightEngine(), null, chunk);
-+ }
-+ } catch (final Throwable throwable) {
-+ if (!completing) {
-+ this.complete(null, throwable);
-+
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ return;
-+ }
-+
-+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Target status", ChunkTaskScheduler.stringIfNull(this.toStatus),
-+ "From status", ChunkTaskScheduler.stringIfNull(this.fromStatus),
-+ "Generation task", this
-+ ), throwable);
-+
-+ if (!(throwable instanceof ThreadDeath)) {
-+ LOGGER.error("Failed to complete status for chunk: status:" + this.toStatus + ", chunk: (" + this.chunkX + "," + this.chunkZ + "), world: " + this.world.getWorld().getName(), throwable);
-+ } else {
-+ // ensure the chunk system can respond, then die
-+ throw (ThreadDeath)throwable;
-+ }
-+ return;
-+ }
-+
-+ if (!completeFuture.isDone() && !this.toStatus.warnedAboutNoImmediateComplete.getAndSet(true)) {
-+ LOGGER.warn("Future status not complete after scheduling: " + this.toStatus.toString() + ", generate: " + generation);
-+ }
-+
-+        final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either;
-+ final ChunkAccess newChunk;
-+
-+ try {
-+ either = completeFuture.join();
-+ newChunk = (either == null) ? null : either.left().orElse(null);
-+ } catch (final Throwable throwable) {
-+ this.complete(null, throwable);
-+ // ensure the chunk system can respond, then die
-+ if (throwable instanceof ThreadDeath) {
-+ throw (ThreadDeath)throwable;
-+ }
-+ return;
-+ }
-+
-+ if (newChunk == null) {
-+ this.complete(null, new IllegalStateException("Chunk for status: " + ChunkUpgradeGenericStatusTask.this.toStatus.toString() + ", generation: " + generation + " should not be null! Either: " + either).fillInStackTrace());
-+ return;
-+ }
-+
-+ this.complete(newChunk, null);
-+ }
-+
-+ protected volatile boolean scheduled;
-+ protected static final VarHandle SCHEDULED_HANDLE = ConcurrentUtil.getVarHandle(ChunkUpgradeGenericStatusTask.class, "scheduled", boolean.class);
-+
-+ @Override
-+ public boolean isScheduled() {
-+ return this.scheduled;
-+ }
-+
-+ @Override
-+ public void schedule() {
-+ if ((boolean)SCHEDULED_HANDLE.getAndSet((ChunkUpgradeGenericStatusTask)this, true)) {
-+ throw new IllegalStateException("Cannot double call schedule()");
-+ }
-+ if (this.isEmptyTask()) {
-+ if (this.generateTask.cancel()) {
-+ this.run();
-+ }
-+ } else {
-+ this.generateTask.queue();
-+ }
-+ }
-+
-+ @Override
-+ public void cancel() {
-+ if (this.generateTask.cancel()) {
-+ this.complete(null, null);
-+ }
-+ }
-+
-+ @Override
-+ public PrioritisedExecutor.Priority getPriority() {
-+ return this.generateTask.getPriority();
-+ }
-+
-+ @Override
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.generateTask.lowerPriority(priority);
-+ }
-+
-+ @Override
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.generateTask.setPriority(priority);
-+ }
-+
-+ @Override
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.generateTask.raisePriority(priority);
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..396d72c00e47cf1669ae20dc839c1c961b1f262a
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/GenericDataLoadTask.java
-@@ -0,0 +1,746 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.completable.Completable;
-+import ca.spottedleaf.concurrentutil.executor.Cancellable;
-+import ca.spottedleaf.concurrentutil.executor.standard.DelayedPrioritisedTask;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.server.level.ServerLevel;
-+import org.slf4j.Logger;
-+import java.lang.invoke.VarHandle;
-+import java.util.Map;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.concurrent.atomic.AtomicLong;
-+import java.util.function.BiConsumer;
-+
-+public abstract class GenericDataLoadTask<OnMain, FinalCompletion> {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
-+ protected static final CompoundTag CANCELLED_DATA = new CompoundTag();
-+
-+ // reference count is the upper 32 bits
-+ protected final AtomicLong stageAndReferenceCount = new AtomicLong(STAGE_NOT_STARTED);
-+
-+ protected static final long STAGE_MASK = 0xFFFFFFFFL;
-+ protected static final long STAGE_CANCELLED = 0xFFFFFFFFL;
-+ protected static final long STAGE_NOT_STARTED = 0L;
-+ protected static final long STAGE_LOADING = 1L;
-+ protected static final long STAGE_PROCESSING = 2L;
-+ protected static final long STAGE_COMPLETED = 3L;
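-+    // for example, a raw value of 0x0000_0002_0000_0001L encodes a reference count of 2 in the
-+    // upper 32 bits with STAGE_LOADING in the lower 32 bits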
-+
-+ // for loading data off disk
-+ protected final LoadDataFromDiskTask loadDataFromDiskTask;
-+ // processing off-main
-+ protected final PrioritisedExecutor.PrioritisedTask processOffMain;
-+ // processing on-main
-+ protected final PrioritisedExecutor.PrioritisedTask processOnMain;
-+
-+ protected final ChunkTaskScheduler scheduler;
-+ protected final ServerLevel world;
-+ protected final int chunkX;
-+ protected final int chunkZ;
-+ protected final RegionFileIOThread.RegionFileType type;
-+
-+ public GenericDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
-+ final int chunkZ, final RegionFileIOThread.RegionFileType type,
-+ final PrioritisedExecutor.Priority priority) {
-+ this.scheduler = scheduler;
-+ this.world = world;
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.type = type;
-+
-+ final ProcessOnMainTask mainTask;
-+ if (this.hasOnMain()) {
-+ mainTask = new ProcessOnMainTask();
-+ this.processOnMain = this.createOnMain(mainTask, priority);
-+ } else {
-+ mainTask = null;
-+ this.processOnMain = null;
-+ }
-+
-+ final ProcessOffMainTask offMainTask;
-+ if (this.hasOffMain()) {
-+ offMainTask = new ProcessOffMainTask(mainTask);
-+ this.processOffMain = this.createOffMain(offMainTask, priority);
-+ } else {
-+ offMainTask = null;
-+ this.processOffMain = null;
-+ }
-+
-+ if (this.processOffMain == null && this.processOnMain == null) {
-+ throw new IllegalStateException("Illegal class implementation: " + this.getClass().getName() + ", should be able to schedule at least one task!");
-+ }
-+
-+ this.loadDataFromDiskTask = new LoadDataFromDiskTask(world, chunkX, chunkZ, type, new DataLoadCallback(offMainTask, mainTask), priority);
-+ }
-+
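-+    // convention for both stages: left = the stage's result value, right = the failure, if any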
-+    public static final record TaskResult<L, R>(L left, R right) {}
-+
-+ protected abstract boolean hasOffMain();
-+
-+ protected abstract boolean hasOnMain();
-+
-+ protected abstract PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority);
-+
-+ protected abstract PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority);
-+
-+    protected abstract TaskResult<OnMain, Throwable> runOffMain(final CompoundTag data, final Throwable throwable);
-+
-+    protected abstract TaskResult<FinalCompletion, Throwable> runOnMain(final OnMain data, final Throwable throwable);
-+
-+    protected abstract void onComplete(final TaskResult<FinalCompletion, Throwable> result);
-+
-+    protected abstract TaskResult<FinalCompletion, Throwable> completeOnMainOffMain(final OnMain data, final Throwable throwable);
-+
-+ @Override
-+ public String toString() {
-+ return "GenericDataLoadTask{class: " + this.getClass().getName() + ", world: " + this.world.getWorld().getName() +
-+ ", chunk: (" + this.chunkX + "," + this.chunkZ + "), hashcode: " + System.identityHashCode(this) + ", priority: " + this.getPriority() +
-+ ", type: " + this.type.toString() + "}";
-+ }
-+
-+ public PrioritisedExecutor.Priority getPriority() {
-+ if (this.processOnMain != null) {
-+ return this.processOnMain.getPriority();
-+ } else {
-+ return this.processOffMain.getPriority();
-+ }
-+ }
-+
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ // can't lower I/O tasks, we don't know what they affect
-+ if (this.processOffMain != null) {
-+ this.processOffMain.lowerPriority(priority);
-+ }
-+ if (this.processOnMain != null) {
-+ this.processOnMain.lowerPriority(priority);
-+ }
-+ }
-+
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ // can't lower I/O tasks, we don't know what they affect
-+ this.loadDataFromDiskTask.raisePriority(priority);
-+ if (this.processOffMain != null) {
-+ this.processOffMain.setPriority(priority);
-+ }
-+ if (this.processOnMain != null) {
-+ this.processOnMain.setPriority(priority);
-+ }
-+ }
-+
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ // can't lower I/O tasks, we don't know what they affect
-+ this.loadDataFromDiskTask.raisePriority(priority);
-+ if (this.processOffMain != null) {
-+ this.processOffMain.raisePriority(priority);
-+ }
-+ if (this.processOnMain != null) {
-+ this.processOnMain.raisePriority(priority);
-+ }
-+ }
-+
-+ // returns whether scheduleNow() needs to be called
-+ public boolean schedule(final boolean delay) {
-+ if (this.stageAndReferenceCount.get() != STAGE_NOT_STARTED ||
-+ !this.stageAndReferenceCount.compareAndSet(STAGE_NOT_STARTED, (1L << 32) | STAGE_LOADING)) {
-+ // try and increment reference count
-+ int failures = 0;
-+ for (long curr = this.stageAndReferenceCount.get();;) {
-+ if ((curr & STAGE_MASK) == STAGE_CANCELLED || (curr & STAGE_MASK) == STAGE_COMPLETED) {
-+ // cancelled or completed, nothing to do here
-+ return false;
-+ }
-+
-+ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, curr + (1L << 32)))) {
-+ // successful
-+ return false;
-+ }
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ if (!delay) {
-+ this.scheduleNow();
-+ return false;
-+ }
-+ return true;
-+ }
-+
-+ public void scheduleNow() {
-+ this.loadDataFromDiskTask.schedule(); // will schedule the rest
-+ }
-+
-+ // assumes the current stage cannot be completed
-+ // returns false if cancelled, returns true if can proceed
-+ private boolean advanceStage(final long expect, final long to) {
-+ int failures = 0;
-+ for (long curr = this.stageAndReferenceCount.get();;) {
-+ if ((curr & STAGE_MASK) != expect) {
-+ // must be cancelled
-+ return false;
-+ }
-+
-+ final long newVal = (curr & ~STAGE_MASK) | to;
-+ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
-+ return true;
-+ }
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ public boolean cancel() {
-+ int failures = 0;
-+ for (long curr = this.stageAndReferenceCount.get();;) {
-+ if ((curr & STAGE_MASK) == STAGE_COMPLETED || (curr & STAGE_MASK) == STAGE_CANCELLED) {
-+ return false;
-+ }
-+
-+ if ((curr & STAGE_MASK) == STAGE_NOT_STARTED || (curr & ~STAGE_MASK) == (1L << 32)) {
-+ // no other references, so we can cancel
-+ final long newVal = STAGE_CANCELLED;
-+ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
-+ this.loadDataFromDiskTask.cancel();
-+ if (this.processOffMain != null) {
-+ this.processOffMain.cancel();
-+ }
-+ if (this.processOnMain != null) {
-+ this.processOnMain.cancel();
-+ }
-+ this.onComplete(null);
-+ return true;
-+ }
-+ } else {
-+ if ((curr & ~STAGE_MASK) == (0L << 32)) {
-+ throw new IllegalStateException("Reference count cannot be zero here");
-+ }
-+ // just decrease the reference count
-+ final long newVal = curr - (1L << 32);
-+ if (curr == (curr = this.stageAndReferenceCount.compareAndExchange(curr, newVal))) {
-+ return false;
-+ }
-+ }
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+    protected final class DataLoadCallback implements BiConsumer<CompoundTag, Throwable> {
-+
-+ protected final ProcessOffMainTask offMainTask;
-+ protected final ProcessOnMainTask onMainTask;
-+
-+ public DataLoadCallback(final ProcessOffMainTask offMainTask, final ProcessOnMainTask onMainTask) {
-+ this.offMainTask = offMainTask;
-+ this.onMainTask = onMainTask;
-+ }
-+
-+ @Override
-+ public void accept(final CompoundTag compoundTag, final Throwable throwable) {
-+ if (GenericDataLoadTask.this.stageAndReferenceCount.get() == STAGE_CANCELLED) {
-+ // don't try to schedule further
-+ return;
-+ }
-+
-+ try {
-+ if (compoundTag == CANCELLED_DATA) {
-+ // cancelled, except this isn't possible
-+ LOGGER.error("Data callback says cancelled, but stage does not?");
-+ return;
-+ }
-+
-+ // get off of the regionfile callback ASAP, no clue what locks are held right now...
-+ if (GenericDataLoadTask.this.processOffMain != null) {
-+ this.offMainTask.data = compoundTag;
-+ this.offMainTask.throwable = throwable;
-+ GenericDataLoadTask.this.processOffMain.queue();
-+ return;
-+ } else {
-+ // no off-main task, so go straight to main
-+ this.onMainTask.data = (OnMain)compoundTag;
-+ this.onMainTask.throwable = throwable;
-+ GenericDataLoadTask.this.processOnMain.queue();
-+ }
-+ } catch (final ThreadDeath death) {
-+ throw death;
-+ } catch (final Throwable thr2) {
-+ LOGGER.error("Failed I/O callback for task: " + GenericDataLoadTask.this.toString(), thr2);
-+ GenericDataLoadTask.this.scheduler.unrecoverableChunkSystemFailure(
-+ GenericDataLoadTask.this.chunkX, GenericDataLoadTask.this.chunkZ, Map.of(
-+ "Callback throwable", ChunkTaskScheduler.stringIfNull(throwable)
-+ ), thr2);
-+ }
-+ }
-+ }
-+
-+ protected final class ProcessOffMainTask implements Runnable {
-+
-+ protected CompoundTag data;
-+ protected Throwable throwable;
-+ protected final ProcessOnMainTask schedule;
-+
-+ public ProcessOffMainTask(final ProcessOnMainTask schedule) {
-+ this.schedule = schedule;
-+ }
-+
-+ @Override
-+ public void run() {
-+ if (!GenericDataLoadTask.this.advanceStage(STAGE_LOADING, this.schedule == null ? STAGE_COMPLETED : STAGE_PROCESSING)) {
-+ // cancelled
-+ return;
-+ }
-+            final TaskResult<OnMain, Throwable> newData = GenericDataLoadTask.this.runOffMain(this.data, this.throwable);
-+
-+ if (GenericDataLoadTask.this.stageAndReferenceCount.get() == STAGE_CANCELLED) {
-+ // don't try to schedule further
-+ return;
-+ }
-+
-+ if (this.schedule != null) {
-+                final TaskResult<FinalCompletion, Throwable> syncComplete = GenericDataLoadTask.this.completeOnMainOffMain(newData.left(), newData.right());
-+
-+ if (syncComplete != null) {
-+ if (GenericDataLoadTask.this.advanceStage(STAGE_PROCESSING, STAGE_COMPLETED)) {
-+ GenericDataLoadTask.this.onComplete(syncComplete);
-+ } // else: cancelled
-+ return;
-+ }
-+
-+                this.schedule.data = newData.left();
-+                this.schedule.throwable = newData.right();
-+
-+ GenericDataLoadTask.this.processOnMain.queue();
-+ } else {
-+ GenericDataLoadTask.this.onComplete((TaskResult)newData);
-+ }
-+ }
-+ }
-+
-+ protected final class ProcessOnMainTask implements Runnable {
-+
-+ protected OnMain data;
-+ protected Throwable throwable;
-+
-+ @Override
-+ public void run() {
-+ if (!GenericDataLoadTask.this.advanceStage(STAGE_PROCESSING, STAGE_COMPLETED)) {
-+ // cancelled
-+ return;
-+ }
-+            final TaskResult<FinalCompletion, Throwable> result = GenericDataLoadTask.this.runOnMain(this.data, this.throwable);
-+
-+ GenericDataLoadTask.this.onComplete(result);
-+ }
-+ }
-+
-+ public static final class LoadDataFromDiskTask {
-+
-+ protected volatile int priority;
-+ protected static final VarHandle PRIORITY_HANDLE = ConcurrentUtil.getVarHandle(LoadDataFromDiskTask.class, "priority", int.class);
-+
-+ protected static final int PRIORITY_EXECUTED = Integer.MIN_VALUE >>> 0;
-+ protected static final int PRIORITY_LOAD_SCHEDULED = Integer.MIN_VALUE >>> 1;
-+ protected static final int PRIORITY_UNLOAD_SCHEDULED = Integer.MIN_VALUE >>> 2;
-+
-+ protected static final int PRIORITY_FLAGS = ~Character.MAX_VALUE;
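-+        // the low 16 bits store the raw priority value while the flag bits above live in the high
-+        // bits, so (priority & ~PRIORITY_FLAGS) extracts the priority and (priority & FLAG) tests state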
-+
-+ protected final int getPriorityVolatile() {
-+ return (int)PRIORITY_HANDLE.getVolatile((LoadDataFromDiskTask)this);
-+ }
-+
-+ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
-+ return (int)PRIORITY_HANDLE.compareAndExchange((LoadDataFromDiskTask)this, (int)expect, (int)update);
-+ }
-+
-+ protected final int getAndOrPriorityVolatile(final int val) {
-+ return (int)PRIORITY_HANDLE.getAndBitwiseOr((LoadDataFromDiskTask)this, (int)val);
-+ }
-+
-+ protected final void setPriorityPlain(final int val) {
-+ PRIORITY_HANDLE.set((LoadDataFromDiskTask)this, (int)val);
-+ }
-+
-+ private final ServerLevel world;
-+ private final int chunkX;
-+ private final int chunkZ;
-+
-+ private final RegionFileIOThread.RegionFileType type;
-+ private Cancellable dataLoadTask;
-+ private Cancellable dataUnloadCancellable;
-+ private DelayedPrioritisedTask dataUnloadTask;
-+
-+        private final BiConsumer<CompoundTag, Throwable> onComplete;
-+
-+ // onComplete should be caller sensitive, it may complete synchronously with schedule() - which does
-+ // hold a priority lock.
-+ public LoadDataFromDiskTask(final ServerLevel world, final int chunkX, final int chunkZ,
-+ final RegionFileIOThread.RegionFileType type,
-+                                    final BiConsumer<CompoundTag, Throwable> onComplete,
-+ final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+ this.world = world;
-+ this.chunkX = chunkX;
-+ this.chunkZ = chunkZ;
-+ this.type = type;
-+ this.onComplete = onComplete;
-+ this.setPriorityPlain(priority.priority);
-+ }
-+
-+ private void complete(final CompoundTag data, final Throwable throwable) {
-+ try {
-+ this.onComplete.accept(data, throwable);
-+ } catch (final Throwable thr2) {
-+ this.world.chunkTaskScheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
-+ "Completed throwable", ChunkTaskScheduler.stringIfNull(throwable),
-+ "Regionfile type", ChunkTaskScheduler.stringIfNull(this.type)
-+ ), thr2);
-+ if (thr2 instanceof ThreadDeath) {
-+ throw (ThreadDeath)thr2;
-+ }
-+ }
-+ }
-+
-+ protected boolean markExecuting() {
-+ return (this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) == 0;
-+ }
-+
-+ protected boolean isMarkedExecuted() {
-+ return (this.getPriorityVolatile() & PRIORITY_EXECUTED) != 0;
-+ }
-+
-+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+
-+ int failures = 0;
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if ((curr & PRIORITY_EXECUTED) != 0) {
-+ // cancelled or executed
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
-+ RegionFileIOThread.lowerPriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
-+ if (this.dataUnloadTask != null) {
-+ this.dataUnloadTask.lowerPriority(priority);
-+ }
-+ // no return - we need to propagate priority
-+ }
-+
-+ if (!priority.isHigherPriority(curr & ~PRIORITY_FLAGS)) {
-+ return;
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
-+ return;
-+ }
-+
-+ // failed, retry
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ public void setPriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+
-+ int failures = 0;
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if ((curr & PRIORITY_EXECUTED) != 0) {
-+ // cancelled or executed
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
-+ RegionFileIOThread.setPriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
-+ if (this.dataUnloadTask != null) {
-+ this.dataUnloadTask.setPriority(priority);
-+ }
-+ // no return - we need to propagate priority
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
-+ return;
-+ }
-+
-+ // failed, retry
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
-+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
-+ throw new IllegalArgumentException("Invalid priority " + priority);
-+ }
-+
-+ int failures = 0;
-+ for (int curr = this.getPriorityVolatile();;) {
-+ if ((curr & PRIORITY_EXECUTED) != 0) {
-+ // cancelled or executed
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_LOAD_SCHEDULED) != 0) {
-+ RegionFileIOThread.raisePriority(this.world, this.chunkX, this.chunkZ, this.type, priority);
-+ return;
-+ }
-+
-+ if ((curr & PRIORITY_UNLOAD_SCHEDULED) != 0) {
-+ if (this.dataUnloadTask != null) {
-+ this.dataUnloadTask.raisePriority(priority);
-+ }
-+ // no return - we need to propagate priority
-+ }
-+
-+ if (!priority.isLowerPriority(curr & ~PRIORITY_FLAGS)) {
-+ return;
-+ }
-+
-+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority.priority | (curr & PRIORITY_FLAGS)))) {
-+ return;
-+ }
-+
-+ // failed, retry
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ public void cancel() {
-+ if ((this.getAndOrPriorityVolatile(PRIORITY_EXECUTED) & PRIORITY_EXECUTED) != 0) {
-+ // cancelled or executed already
-+ return;
-+ }
-+
-+ // OK if we miss the field read, the task cannot complete if the cancelled bit is set and
-+ // the write to dataLoadTask will check for the cancelled bit
-+ if (this.dataUnloadCancellable != null) {
-+ this.dataUnloadCancellable.cancel();
-+ }
-+
-+ if (this.dataLoadTask != null) {
-+ this.dataLoadTask.cancel();
-+ }
-+
-+ this.complete(CANCELLED_DATA, null);
-+ }
-+
-+ private final AtomicBoolean scheduled = new AtomicBoolean();
-+
-+ public void schedule() {
-+ if (this.scheduled.getAndSet(true)) {
-+ throw new IllegalStateException("schedule() called twice");
-+ }
-+ int priority = this.getPriorityVolatile();
-+
-+ if ((priority & PRIORITY_EXECUTED) != 0) {
-+ // cancelled
-+ return;
-+ }
-+
-+            final BiConsumer<CompoundTag, Throwable> consumer = (final CompoundTag data, final Throwable thr) -> {
-+ // because cancelScheduled() cannot actually stop this task from executing in every case, we need
-+ // to mark complete here to ensure we do not double complete
-+ if (LoadDataFromDiskTask.this.markExecuting()) {
-+ LoadDataFromDiskTask.this.complete(data, thr);
-+ } // else: cancelled
-+ };
-+
-+ final PrioritisedExecutor.Priority initialPriority = PrioritisedExecutor.Priority.getPriority(priority);
-+ boolean scheduledUnload = false;
-+
-+ final NewChunkHolder holder = this.world.chunkTaskScheduler.chunkHolderManager.getChunkHolder(this.chunkX, this.chunkZ);
-+ if (holder != null) {
-+                final BiConsumer<CompoundTag, Throwable> unloadConsumer = (final CompoundTag data, final Throwable thr) -> {
-+ if (data != null) {
-+ consumer.accept(data, null);
-+ } else {
-+ // need to schedule task
-+ LoadDataFromDiskTask.this.schedule(false, consumer, PrioritisedExecutor.Priority.getPriority(LoadDataFromDiskTask.this.getPriorityVolatile() & ~PRIORITY_FLAGS));
-+ }
-+ };
-+ Cancellable unloadCancellable = null;
-+ CompoundTag syncComplete = null;
-+ final NewChunkHolder.UnloadTask unloadTask = holder.getUnloadTask(this.type); // can be null if no task exists
-+                final Completable<CompoundTag> unloadCompletable = unloadTask == null ? null : unloadTask.completable();
-+ if (unloadCompletable != null) {
-+ unloadCancellable = unloadCompletable.addAsynchronousWaiter(unloadConsumer);
-+ if (unloadCancellable == null) {
-+ syncComplete = unloadCompletable.getResult();
-+ }
-+ }
-+
-+ if (syncComplete != null) {
-+ consumer.accept(syncComplete, null);
-+ return;
-+ }
-+
-+ if (unloadCancellable != null) {
-+ scheduledUnload = true;
-+ this.dataUnloadCancellable = unloadCancellable;
-+ this.dataUnloadTask = unloadTask.task();
-+ }
-+ }
-+
-+ this.schedule(scheduledUnload, consumer, initialPriority);
-+ }
-+
-+        private void schedule(final boolean scheduledUnload, final BiConsumer<CompoundTag, Throwable> consumer, final PrioritisedExecutor.Priority initialPriority) {
-+ int priority = this.getPriorityVolatile();
-+
-+ if ((priority & PRIORITY_EXECUTED) != 0) {
-+ // cancelled
-+ return;
-+ }
-+
-+ if (!scheduledUnload) {
-+ this.dataLoadTask = RegionFileIOThread.loadDataAsync(
-+ this.world, this.chunkX, this.chunkZ, this.type, consumer,
-+ initialPriority.isHigherPriority(PrioritisedExecutor.Priority.NORMAL), initialPriority
-+ );
-+ }
-+
-+ int failures = 0;
-+ for (;;) {
-+ if (priority == (priority = this.compareAndExchangePriorityVolatile(priority, priority | (scheduledUnload ? PRIORITY_UNLOAD_SCHEDULED : PRIORITY_LOAD_SCHEDULED)))) {
-+ return;
-+ }
-+
-+ if ((priority & PRIORITY_EXECUTED) != 0) {
-+ // cancelled or executed
-+ if (this.dataUnloadCancellable != null) {
-+ this.dataUnloadCancellable.cancel();
-+ }
-+
-+ if (this.dataLoadTask != null) {
-+ this.dataLoadTask.cancel();
-+ }
-+ return;
-+ }
-+
-+ if (scheduledUnload) {
-+ if (this.dataUnloadTask != null) {
-+ this.dataUnloadTask.setPriority(PrioritisedExecutor.Priority.getPriority(priority & ~PRIORITY_FLAGS));
-+ }
-+ } else {
-+ RegionFileIOThread.setPriority(this.world, this.chunkX, this.chunkZ, this.type, PrioritisedExecutor.Priority.getPriority(priority & ~PRIORITY_FLAGS));
-+ }
-+
-+ ++failures;
-+ for (int i = 0; i < failures; ++i) {
-+ ConcurrentUtil.backoff();
-+ }
-+ }
-+ }
-+
-+ /*
-+ private static final class LoadDataPriorityHolder extends PriorityHolder {
-+
-+ protected final LoadDataFromDiskTask task;
-+
-+ protected LoadDataPriorityHolder(final PrioritisedExecutor.Priority priority, final LoadDataFromDiskTask task) {
-+ super(priority);
-+ this.task = task;
-+ }
-+
-+ @Override
-+ protected void cancelScheduled() {
-+ final Cancellable dataLoadTask = this.task.dataLoadTask;
-+ if (dataLoadTask != null) {
-+ // OK if we miss the field read, the task cannot complete if the cancelled bit is set and
-+ // the write to dataLoadTask will check for the cancelled bit
-+ this.task.dataLoadTask.cancel();
-+ }
-+ this.task.complete(CANCELLED_DATA, null);
-+ }
-+
-+ @Override
-+ protected PrioritisedExecutor.Priority getScheduledPriority() {
-+ final LoadDataFromDiskTask task = this.task;
-+ return RegionFileIOThread.getPriority(task.world, task.chunkX, task.chunkZ, task.type);
-+ }
-+
-+ @Override
-+ protected void scheduleTask(final PrioritisedExecutor.Priority priority) {
-+ final LoadDataFromDiskTask task = this.task;
-+                final BiConsumer<CompoundTag, Throwable> consumer = (final CompoundTag data, final Throwable thr) -> {
-+ // because cancelScheduled() cannot actually stop this task from executing in every case, we need
-+ // to mark complete here to ensure we do not double complete
-+ if (LoadDataPriorityHolder.this.markExecuting()) {
-+ LoadDataPriorityHolder.this.task.complete(data, thr);
-+ } // else: cancelled
-+ };
-+ task.dataLoadTask = RegionFileIOThread.loadDataAsync(
-+ task.world, task.chunkX, task.chunkZ, task.type, consumer,
-+ priority.isHigherPriority(PrioritisedExecutor.Priority.NORMAL), priority
-+ );
-+ if (this.isMarkedExecuted()) {
-+ // if we are marked as completed, it could be:
-+ // 1. we were cancelled
-+ // 2. the consumer was completed
-+ // in the 2nd case, cancel() does nothing
-+ // in the 1st case, we ensure cancel() is called as it is possible for the cancelling thread
-+ // to miss the field write here
-+ task.dataLoadTask.cancel();
-+ }
-+ }
-+
-+ @Override
-+ protected void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final LoadDataFromDiskTask task = this.task;
-+ RegionFileIOThread.lowerPriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
-+ }
-+
-+ @Override
-+ protected void setPriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final LoadDataFromDiskTask task = this.task;
-+ RegionFileIOThread.setPriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
-+ }
-+
-+ @Override
-+ protected void raisePriorityScheduled(final PrioritisedExecutor.Priority priority) {
-+ final LoadDataFromDiskTask task = this.task;
-+ RegionFileIOThread.raisePriority(task.world, task.chunkX, task.chunkZ, task.type, priority);
-+ }
-+ }
-+ */
-+ }
-+}
-diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
-new file mode 100644
-index 0000000000000000000000000000000000000000..8013dd333e27aa5fd0beb431fa32491eec9f5246
---- /dev/null
-+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/NewChunkHolder.java
-@@ -0,0 +1,2077 @@
-+package io.papermc.paper.chunk.system.scheduling;
-+
-+import ca.spottedleaf.concurrentutil.completable.Completable;
-+import ca.spottedleaf.concurrentutil.executor.Cancellable;
-+import ca.spottedleaf.concurrentutil.executor.standard.DelayedPrioritisedTask;
-+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
-+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
-+import com.google.gson.JsonArray;
-+import com.google.gson.JsonElement;
-+import com.google.gson.JsonObject;
-+import com.google.gson.JsonPrimitive;
-+import com.mojang.logging.LogUtils;
-+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
-+import io.papermc.paper.chunk.system.poi.PoiChunk;
-+import io.papermc.paper.util.CoordinateUtils;
-+import io.papermc.paper.util.TickThread;
-+import io.papermc.paper.util.WorldUtil;
-+import io.papermc.paper.world.ChunkEntitySlices;
-+import it.unimi.dsi.fastutil.objects.Reference2ObjectLinkedOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.Reference2ObjectMap;
-+import it.unimi.dsi.fastutil.objects.Reference2ObjectOpenHashMap;
-+import it.unimi.dsi.fastutil.objects.ReferenceLinkedOpenHashSet;
-+import net.minecraft.nbt.CompoundTag;
-+import net.minecraft.server.level.ChunkHolder;
-+import net.minecraft.server.level.ChunkMap;
-+import net.minecraft.server.level.ServerLevel;
-+import net.minecraft.server.level.TicketType;
-+import net.minecraft.world.entity.Entity;
-+import net.minecraft.world.level.ChunkPos;
-+import net.minecraft.world.level.chunk.ChunkAccess;
-+import net.minecraft.world.level.chunk.ChunkStatus;
-+import net.minecraft.world.level.chunk.ImposterProtoChunk;
-+import net.minecraft.world.level.chunk.LevelChunk;
-+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
-+import net.minecraft.world.level.chunk.storage.EntityStorage;
-+import org.slf4j.Logger;
-+import java.lang.invoke.VarHandle;
-+import java.util.ArrayList;
-+import java.util.Iterator;
-+import java.util.List;
-+import java.util.Map;
-+import java.util.Objects;
-+import java.util.concurrent.atomic.AtomicBoolean;
-+import java.util.function.Consumer;
-+
-+public final class NewChunkHolder {
-+
-+ private static final Logger LOGGER = LogUtils.getClassLogger();
-+
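-+ // shared handler for chunk system worker threads; ThreadDeath is filtered out so
-+ // deliberate Thread#stop() deaths are not reported as errors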
-+ public static final Thread.UncaughtExceptionHandler CHUNKSYSTEM_UNCAUGHT_EXCEPTION_HANDLER = new Thread.UncaughtExceptionHandler() {
-+ @Override
-+ public void uncaughtException(final Thread thread, final Throwable throwable) {
-+ if (!(throwable instanceof ThreadDeath)) {
-+ LOGGER.error("Uncaught exception in thread " + thread.getName(), throwable);
-+ }
-+ }
-+ };
-+
-+ public final ServerLevel world;
-+ public final int chunkX;
-+ public final int chunkZ;
-+
-+ public final ChunkTaskScheduler scheduler;
-+
-+ // load/unload state
-+
-+ // chunk data state
-+
-+ private ChunkEntitySlices entityChunk;
-+ // entity chunk that is loaded, but not yet deserialized
-+ private CompoundTag pendingEntityChunk;
-+
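-+ // Creates this holder's ChunkEntitySlices, or upgrades an existing transient one,
-+ // under the scheduling lock; for non-transient loads the entity NBT staged in
-+ // pendingEntityChunk is consumed and deserialized after the lock is released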
-+ ChunkEntitySlices loadInEntityChunk(final boolean transientChunk) {
-+ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Cannot sync load entity data off-main");
-+ final CompoundTag entityChunk;
-+ final ChunkEntitySlices ret;
-+ this.scheduler.schedulingLock.lock();
-+ try {
-+ if (this.entityChunk != null && (transientChunk || !this.entityChunk.isTransient())) {
-+ return this.entityChunk;
-+ }
-+ final CompoundTag pendingEntityChunk = this.pendingEntityChunk;
-+ if (!transientChunk && pendingEntityChunk == null) {
-+ throw new IllegalStateException("Must load entity data from disk before loading in the entity chunk!");
-+ }
-+
-+ if (this.entityChunk == null) {
-+ ret = this.entityChunk = new ChunkEntitySlices(
-+ this.world, this.chunkX, this.chunkZ, this.getChunkStatus(),
-+ WorldUtil.getMinSection(this.world), WorldUtil.getMaxSection(this.world)
-+ );
-+
-+ ret.setTransient(transientChunk);
-+
-+ this.world.getEntityLookup().entitySectionLoad(this.chunkX, this.chunkZ, ret);
-+ } else {
-+ // transientChunk = false here
-+ ret = this.entityChunk;
-+ this.entityChunk.setTransient(false);
-+ }
-+
-+ if (!transientChunk) {
-+ this.pendingEntityChunk = null;
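-+ // EMPTY_ENTITY_CHUNK is a sentinel meaning no stored entity data; map it back to null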
-+ entityChunk = pendingEntityChunk == EMPTY_ENTITY_CHUNK ? null : pendingEntityChunk;
-+ } else {
-+ entityChunk = null;
-+ }
-+ } finally {
-+ this.scheduler.schedulingLock.unlock();
-+ }
-+
-+ if (!transientChunk) {
-+ if (entityChunk != null) {
-+ final List