From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Spottedleaf
Date: Thu, 11 Mar 2021 02:32:30 -0800
Subject: [PATCH] Rewrite chunk system
Rebased patches:
New player chunk loader system
Make ChunkStatus.EMPTY not rely on the main thread for completion
In order to do this, we need to push the POI consistency checks
to a later status. Since FULL is the only other status that
uses the main thread, they can go there.
The consistency checks are only really for when a desync occurs,
so delaying the check only matters when the chunk data
has desync'd. As long as the desync is sorted before the
chunk is fully loaded (i.e. before setBlock can occur on
a chunk), it should not matter.
This change is primarily due to behavioural changes
in the chunk task queue brought about by region threading -
namely, splitting the queue into separate regions. As such,
for a sync load to complete, the region owning the chunk
must drain and execute the task while ticking. However,
that is not always possible in
region threading. Thus, removing the main thread reliance allows
the chunk to progress without requiring a tick thread.
Specifically, this allows far sync loads (outside of a specific
region's bounds) to occur without issue - namely with structure
searching.
Increase parallelism for neighbour writing chunk statuses
Namely, everything after FEATURES. By creating a dependency
chain indicating what chunks are in use, we can safely
schedule completely independent tasks in parallel. This
will allow the chunk system to scale beyond 10 threads
per world.
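As a sketch of the independence criterion (illustrative only, not
the patch's exact API): two neighbour-writing tasks at radius r may
run in parallel exactly when their write areas cannot intersect:

    // two status tasks writing a radius-r neighbourhood are independent
    // iff their write squares do not overlap
    static boolean independent(final ChunkPos a, final ChunkPos b, final int r) {
        return Math.abs(a.x - b.x) > 2 * r || Math.abs(a.z - b.z) > 2 * r;
    }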
Properly cancel chunk load tasks that were not scheduled
Since the chunk load task was not scheduled, the entity/poi load
task fields will not be set, but the task complete counter
will not be adjusted. Thus, the chunk load task will not complete.
To resolve this, detect when the entity/poi tasks were not scheduled
and decrement the task complete counter in such cases.
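A sketch of that fix (field names here are illustrative):

    // when the load task was cancelled before scheduling, the subtask
    // fields remain null - account for their completions manually
    if (this.entityLoadTask == null) {
        this.taskCompleteCounter.decrementAndGet();
    }
    if (this.poiLoadTask == null) {
        this.taskCompleteCounter.decrementAndGet();
    }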
Mark POI/Entity load tasks as completed before releasing scheduling lock
It must be marked as completed during that lock hold since the
waiters field is set to null. Thus, any other thread attempting
a cancellation will fail to remove from waiters. Also, any
other thread attempting to cancel may set the completed field
to true which would cause accept() to fail as well.
Completion was always designed to happen while holding the
scheduling lock to prevent these race conditions. The code
was originally set up to complete while not holding the
scheduling lock to avoid invoking callbacks while holding the
lock, however the access to the completion field was not
considered.
Resolve this by marking the callback as completed during the
lock, but invoking the accept() function after releasing
the lock. This prevents any cancellation attempts from being
blocked, and allows the current thread to complete the callback
without any issues.
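Roughly, the resulting pattern looks like this (a sketch, with
illustrative field names):

    final Consumer<CompoundTag> callback;
    final CompoundTag data;
    final ReentrantAreaLock.Node node = schedulingLock.lock(chunkX, chunkZ);
    try {
        this.completed = true; // cancellation attempts by other threads now fail
        this.waiters = null;   // no further waiters may register
        callback = this.onComplete;
        data = this.loadedData;
    } finally {
        schedulingLock.unlock(node);
    }
    callback.accept(data); // accept() runs only after the lock is released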
Cache whether region files do not exist
The repeated I/O of creating the directory for the regionfile
or of checking if the file exists can be heavy
when pushing chunk generation extremely hard - as each chunk gen
request may effectively go through to the I/O thread.
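A minimal sketch of such a cache (the patch wires this into the
regionfile storage; the names below are illustrative):

    private final ConcurrentHashMap<Long, Boolean> regionFileMissing = new ConcurrentHashMap<>();

    private boolean regionFileExists(final int regionX, final int regionZ) {
        final Long key = Long.valueOf(IOUtil.getCoordinateKey(regionX, regionZ));
        final Boolean missing = this.regionFileMissing.get(key);
        if (missing != null) {
            return !missing.booleanValue(); // cached, no file-system hit
        }
        final boolean exists = java.nio.file.Files.exists(this.getRegionFilePath(regionX, regionZ));
        this.regionFileMissing.put(key, Boolean.valueOf(!exists));
        return exists; // the entry must be invalidated when the regionfile is created
    }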
Use coordinate-based locking to increase chunk system parallelism
A significant overhead in Folia comes from the chunk system's
locks, the ticket lock and the scheduling lock. The public
test server, which had ~330 players, had significant performance
problems with these locks: ~80% of the time spent ticking
was _waiting_ for the locks to free. Given that it used
around 15 cores total at peak, this is a complete and utter loss
of potential.
To address this issue, I have replaced the ticket lock and scheduling
lock with two ReentrantAreaLocks. The ReentrantAreaLock takes a
shift, which is used internally to group positions into sections.
This grouping is necessary, as the possible radius of area that
needs to be acquired for any given lock usage is up to 64. As such,
the shift is critical to reduce the number of areas required to lock
for any lock operation. Currently, it is set to a shift of 6, which
is identical to the ticket level propagation shift (and, it must be
at least the ticket level propagation shift AND the region shift).
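Concretely, with a shift of 6 each section covers 64x64 chunks, so
even a radius-64 acquire touches at most a 3x3 grid of sections:

    final int shift = 6;
    final int fromSectionX = (chunkX - 64) >> shift;
    final int toSectionX = (chunkX + 64) >> shift;
    // toSectionX - fromSectionX <= 2, so at most 3x3 = 9 section nodes
    // are inserted instead of one node per position in a 129x129 area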
The chunk system locking changes required a complete rewrite of the
chunk system tick, chunk system unload, and chunk system ticket level
propagation - as all of the previous logic only works with a single
global lock.
This does introduce two other section shifts: the lock shift, and the
ticket shift. The lock shift is simply what shift the area locks use,
and the ticket shift represents the size of the ticket sections.
Currently, these values are just set to the region shift for simplicity.
However, they are not arbitrary: the lock shift must be at least the size
of the ticket shift and must be at least the size of the region shift.
The ticket shift must also be >= ceil(log2(max ticket level source)).
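Expressed as code, the constraints are (a sketch, illustrative names):

    // lock shift >= max(ticket shift, region shift)
    final int lockShift = Math.max(ticketShift, regionShift);
    // ticket shift >= ceil(log2(max ticket level source)), e.g. 6 for 64
    assert ticketShift >= 32 - Integer.numberOfLeadingZeros(maxTicketSource - 1);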
The chunk system's ticket propagator is now global state, instead of
region state. This cleans up the logic for ticket levels significantly,
and removes usage of the region lock in this area, but it also means
that the addition of a ticket no longer creates a region. To alleviate
the side effects of this change, the global tick thread now processes
ticket level updates for each world every tick to guarantee eventual
ticket level processing. The chunk system also provides a hook to
process ticket level changes in a given _section_, so that the
region queue can guarantee that after adding its reference counter
that the region section is created/exists/won't be destroyed.
The ticket propagator operates by updating the sources in a single ticket
section, and propagating the updates to its 1 radius neighbours. This
allows the ticket updates to occur in parallel or selectively (see above).
Currently, the process ticket level update function operates by
polling from a concurrent queue of sections to update and simply
invoking the single section update logic. This allows the function
to operate completely in parallel, provided the queue is ordered correctly.
Additionally, this limits the area used in the ticket/scheduling lock
when processing updates, which should massively increase parallelism compared
to before.
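A sketch of that polling loop (illustrative names):

    // workers drain the queue; updates to distinct sections proceed in parallel
    Long sectionKey;
    while ((sectionKey = ticketUpdateQueue.poll()) != null) {
        final int sectionX = CoordinateUtils.getChunkX(sectionKey.longValue());
        final int sectionZ = CoordinateUtils.getChunkZ(sectionKey.longValue());
        // locks only this section and its 1-radius neighbours
        processSectionUpdate(sectionX, sectionZ);
    }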
The chunk system ticket addition for expirable ticket types has been modified
to no longer track exact tick deadlines, as this relies on what region the
ticket is in. Instead, the chunk system tracks a map of
lock section -> (chunk coordinate -> expire ticket count) and every ticket
has been changed to have a removeDelay count that is decremented each tick.
Each region searches its own sections to find tickets to try to expire.
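For illustration, each region's expiry scan could look like this (a
sketch; the real structures differ):

    for (final long sectionKey : region.ownedTicketSections()) {
        final Map<Long, List<Ticket>> chunks = expirableTickets.get(sectionKey);
        if (chunks == null) {
            continue;
        }
        for (final List<Ticket> tickets : chunks.values()) {
            for (final Iterator<Ticket> iterator = tickets.iterator(); iterator.hasNext();) {
                final Ticket ticket = iterator.next();
                if (--ticket.removeDelay <= 0) {
                    iterator.remove(); // expired: schedules a ticket level update
                }
            }
        }
    }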
Chunk system unloading has been modified to track unloads by lock section.
The ordering is determined by which section a chunk resides in.
The unload process now removes from unload sections and processes
the full unload stages (1, 2, 3) before moving to the next section, if possible.
This allows the unload logic to only hold one lock section at a time for
each lock, which is a massive parallelism increase.
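Sketch of the unload loop (illustrative names):

    for (final long sectionKey : unloadSections) {
        final ReentrantAreaLock.Node node = schedulingLock.lock(
            sectionToChunkX(sectionKey), sectionToChunkZ(sectionKey), SECTION_RADIUS);
        try {
            // stages 1-3 for every chunk in this section, where possible
            processUnloadStages(sectionKey);
        } finally {
            schedulingLock.unlock(node); // only one section held per lock at a time
        }
    }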
In stress testing, these changes lowered the locking overhead to only 5%
from ~70%, which completely fixes the original problem as described.
== AT ==
public net.minecraft.server.level.ChunkHolder pos
public net.minecraft.server.level.ChunkMap overworldDataStorage
public-f net.minecraft.world.level.chunk.storage.RegionFileStorage
public net.minecraft.server.level.ChunkMap getPoiManager()Lnet/minecraft/world/entity/ai/village/poi/PoiManager;
diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
new file mode 100644
index 0000000000000000000000000000000000000000..4fd9a0cd8f1e6ae1a97e963dc7731a80bc6fac5b
--- /dev/null
+++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/ReentrantAreaLock.java
@@ -0,0 +1,395 @@
+package ca.spottedleaf.concurrentutil.lock;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import it.unimi.dsi.fastutil.HashCommon;
+import java.util.ArrayList;
+import java.util.List;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.locks.LockSupport;
+
+public final class ReentrantAreaLock {
+
+ public final int coordinateShift;
+
+ // aggressive load factor to reduce contention
+ private final ConcurrentHashMap<Coordinate, Node> nodes = new ConcurrentHashMap<>(128, 0.2f);
+
+ public ReentrantAreaLock(final int coordinateShift) {
+ this.coordinateShift = coordinateShift;
+ }
+
+ public boolean isHeldByCurrentThread(final int x, final int z) {
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int sectionX = x >> shift;
+ final int sectionZ = z >> shift;
+
+ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
+ final Node node = this.nodes.get(coordinate);
+
+ return node != null && node.thread == currThread;
+ }
+
+ public boolean isHeldByCurrentThread(final int centerX, final int centerZ, final int radius) {
+ return this.isHeldByCurrentThread(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
+ }
+
+ public boolean isHeldByCurrentThread(final int fromX, final int fromZ, final int toX, final int toZ) {
+ if (fromX > toX || fromZ > toZ) {
+ throw new IllegalArgumentException();
+ }
+
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int fromSectionX = fromX >> shift;
+ final int fromSectionZ = fromZ >> shift;
+ final int toSectionX = toX >> shift;
+ final int toSectionZ = toZ >> shift;
+
+ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
+ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
+ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
+
+ final Node node = this.nodes.get(coordinate);
+
+ if (node == null || node.thread != currThread) {
+ return false;
+ }
+ }
+ }
+
+ return true;
+ }
+
+ public Node tryLock(final int x, final int z) {
+ return this.tryLock(x, z, x, z);
+ }
+
+ public Node tryLock(final int centerX, final int centerZ, final int radius) {
+ return this.tryLock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
+ }
+
+ public Node tryLock(final int fromX, final int fromZ, final int toX, final int toZ) {
+ if (fromX > toX || fromZ > toZ) {
+ throw new IllegalArgumentException();
+ }
+
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int fromSectionX = fromX >> shift;
+ final int fromSectionZ = fromZ >> shift;
+ final int toSectionX = toX >> shift;
+ final int toSectionZ = toZ >> shift;
+
+ final List<Coordinate> areaAffected = new ArrayList<>();
+
+ final Node ret = new Node(this, areaAffected, currThread);
+
+ boolean failed = false;
+
+ // try to fast acquire area
+ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
+ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
+ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
+
+ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
+
+ if (prev == null) {
+ areaAffected.add(coordinate);
+ continue;
+ }
+
+ if (prev.thread != currThread) {
+ failed = true;
+ break;
+ }
+ }
+ }
+
+ if (!failed) {
+ return ret;
+ }
+
+ // failed, undo logic
+ if (!areaAffected.isEmpty()) {
+ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
+ final Coordinate key = areaAffected.get(i);
+
+ if (this.nodes.remove(key) != ret) {
+ throw new IllegalStateException();
+ }
+ }
+
+ areaAffected.clear();
+
+ // since we inserted, we need to drain waiters
+ Thread unpark;
+ while ((unpark = ret.pollOrBlockAdds()) != null) {
+ LockSupport.unpark(unpark);
+ }
+ }
+
+ return null;
+ }
+
+ public Node lock(final int x, final int z) {
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int sectionX = x >> shift;
+ final int sectionZ = z >> shift;
+
+ final List<Coordinate> areaAffected = new ArrayList<>(1);
+
+ final Node ret = new Node(this, areaAffected, currThread);
+ final Coordinate coordinate = new Coordinate(Coordinate.key(sectionX, sectionZ));
+
+ for (long failures = 0L;;) {
+ final Node park;
+
+ // try to fast acquire area
+ {
+ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
+
+ if (prev == null) {
+ areaAffected.add(coordinate);
+ return ret;
+ } else if (prev.thread != currThread) {
+ park = prev;
+ } else {
+ // only one node we would want to acquire, and it's owned by this thread already
+ return ret;
+ }
+ }
+
+ ++failures;
+
+ if (failures > 128L && park.add(currThread)) {
+ LockSupport.park();
+ } else {
+ // high contention, spin wait
+ if (failures < 128L) {
+ for (long i = 0; i < failures; ++i) {
+ Thread.onSpinWait();
+ }
+ failures = failures << 1;
+ } else if (failures < 1_200L) {
+ LockSupport.parkNanos(1_000L);
+ failures = failures + 1L;
+ } else { // scale 0.1ms (100us) per failure
+ Thread.yield();
+ LockSupport.parkNanos(100_000L * failures);
+ failures = failures + 1L;
+ }
+ }
+ }
+ }
+
+ public Node lock(final int centerX, final int centerZ, final int radius) {
+ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
+ }
+
+ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
+ if (fromX > toX || fromZ > toZ) {
+ throw new IllegalArgumentException();
+ }
+
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int fromSectionX = fromX >> shift;
+ final int fromSectionZ = fromZ >> shift;
+ final int toSectionX = toX >> shift;
+ final int toSectionZ = toZ >> shift;
+
+ if (((fromSectionX ^ toSectionX) | (fromSectionZ ^ toSectionZ)) == 0) {
+ return this.lock(fromX, fromZ);
+ }
+
+ final List<Coordinate> areaAffected = new ArrayList<>();
+
+ final Node ret = new Node(this, areaAffected, currThread);
+
+ for (long failures = 0L;;) {
+ Node park = null;
+ boolean addedToArea = false;
+ boolean alreadyOwned = false;
+ boolean allOwned = true;
+
+ // try to fast acquire area
+ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
+ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
+ final Coordinate coordinate = new Coordinate(Coordinate.key(currX, currZ));
+
+ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
+
+ if (prev == null) {
+ addedToArea = true;
+ allOwned = false;
+ areaAffected.add(coordinate);
+ continue;
+ }
+
+ if (prev.thread != currThread) {
+ park = prev;
+ alreadyOwned = true;
+ break;
+ }
+ }
+ }
+
+ if (park == null) {
+ if (alreadyOwned && !allOwned) {
+ throw new IllegalStateException("Improper lock usage: Should never acquire intersecting areas");
+ }
+ return ret;
+ }
+
+ // failed, undo logic
+ if (addedToArea) {
+ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
+ final Coordinate key = areaAffected.get(i);
+
+ if (this.nodes.remove(key) != ret) {
+ throw new IllegalStateException();
+ }
+ }
+
+ areaAffected.clear();
+
+ // since we inserted, we need to drain waiters
+ Thread unpark;
+ while ((unpark = ret.pollOrBlockAdds()) != null) {
+ LockSupport.unpark(unpark);
+ }
+ }
+
+ ++failures;
+
+ if (failures > 128L && park.add(currThread)) {
+ LockSupport.park(park);
+ } else {
+ // high contention, spin wait
+ if (failures < 128L) {
+ for (long i = 0; i < failures; ++i) {
+ Thread.onSpinWait();
+ }
+ failures = failures << 1;
+ } else if (failures < 1_200L) {
+ LockSupport.parkNanos(1_000L);
+ failures = failures + 1L;
+ } else { // scale 0.1ms (100us) per failure
+ Thread.yield();
+ LockSupport.parkNanos(100_000L * failures);
+ failures = failures + 1L;
+ }
+ }
+
+ if (addedToArea) {
+ // try again, so we need to allow adds so that other threads can properly block on us
+ ret.allowAdds();
+ }
+ }
+ }
+
+ public void unlock(final Node node) {
+ if (node.lock != this) {
+ throw new IllegalStateException("Unlock target lock mismatch");
+ }
+
+ final List<Coordinate> areaAffected = node.areaAffected;
+
+ if (areaAffected.isEmpty()) {
+ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
+ return;
+ }
+
+ // remove from node map; allowing other threads to lock
+ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
+ final Coordinate coordinate = areaAffected.get(i);
+ if (this.nodes.remove(coordinate) != node) {
+ throw new IllegalStateException();
+ }
+ }
+
+ Thread unpark;
+ while ((unpark = node.pollOrBlockAdds()) != null) {
+ LockSupport.unpark(unpark);
+ }
+ }
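+
+ /*
+ * Example usage (an illustrative sketch; these call sites are not part of this file):
+ *
+ * final ReentrantAreaLock.Node node = areaLock.lock(chunkX, chunkZ, radius);
+ * try {
+ * // exclusive access to every section intersecting the area
+ * } finally {
+ * areaLock.unlock(node);
+ * }
+ */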
+
+ public static final class Node extends MultiThreadedQueue<Thread> {
+
+ private final ReentrantAreaLock lock;
+ private final List<Coordinate> areaAffected;
+ private final Thread thread;
+ //private final Throwable WHO_CREATED_MY_ASS = new Throwable();
+
+ private Node(final ReentrantAreaLock lock, final List<Coordinate> areaAffected, final Thread thread) {
+ this.lock = lock;
+ this.areaAffected = areaAffected;
+ this.thread = thread;
+ }
+
+ @Override
+ public String toString() {
+ return "Node{" +
+ "areaAffected=" + this.areaAffected +
+ ", thread=" + this.thread +
+ '}';
+ }
+ }
+
+ private static final class Coordinate implements Comparable<Coordinate> {
+
+ public final long key;
+
+ public Coordinate(final long key) {
+ this.key = key;
+ }
+
+ public Coordinate(final int x, final int z) {
+ this.key = key(x, z);
+ }
+
+ public static long key(final int x, final int z) {
+ return ((long)z << 32) | (x & 0xFFFFFFFFL);
+ }
+
+ public static int x(final long key) {
+ return (int)key;
+ }
+
+ public static int z(final long key) {
+ return (int)(key >>> 32);
+ }
+
+ @Override
+ public int hashCode() {
+ return (int)HashCommon.mix(this.key);
+ }
+
+ @Override
+ public boolean equals(final Object obj) {
+ if (this == obj) {
+ return true;
+ }
+
+ if (!(obj instanceof Coordinate other)) {
+ return false;
+ }
+
+ return this.key == other.key;
+ }
+
+ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
+ // is too large. So we should implement compareTo to help.
+ @Override
+ public int compareTo(final Coordinate other) {
+ return Long.compare(this.key, other.key);
+ }
+
+ @Override
+ public String toString() {
+ return "[" + x(this.key) + "," + z(this.key) + "]";
+ }
+ }
+}
diff --git a/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
new file mode 100644
index 0000000000000000000000000000000000000000..64b5803d002b2968841a5ddee987f98b72964e87
--- /dev/null
+++ b/src/main/java/ca/spottedleaf/concurrentutil/lock/SyncReentrantAreaLock.java
@@ -0,0 +1,217 @@
+package ca.spottedleaf.concurrentutil.lock;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import it.unimi.dsi.fastutil.longs.Long2ReferenceOpenHashMap;
+import it.unimi.dsi.fastutil.longs.LongArrayList;
+import java.util.concurrent.locks.LockSupport;
+
+// not concurrent, unlike ReentrantAreaLock
+// no incorrect lock usage detection (acquiring intersecting areas)
+// this class is nothing more than a performance reference for ReentrantAreaLock
+public final class SyncReentrantAreaLock {
+
+ private final int coordinateShift;
+
+ // aggressive load factor to reduce contention
+ private final Long2ReferenceOpenHashMap<Node> nodes = new Long2ReferenceOpenHashMap<>(128, 0.2f);
+
+ public SyncReentrantAreaLock(final int coordinateShift) {
+ this.coordinateShift = coordinateShift;
+ }
+
+ private static long key(final int x, final int z) {
+ return ((long)z << 32) | (x & 0xFFFFFFFFL);
+ }
+
+ public Node lock(final int x, final int z) {
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int sectionX = x >> shift;
+ final int sectionZ = z >> shift;
+
+ final LongArrayList areaAffected = new LongArrayList();
+
+ final Node ret = new Node(this, areaAffected, currThread);
+
+ final long coordinate = key(sectionX, sectionZ);
+
+ for (long failures = 0L;;) {
+ final Node park;
+
+ synchronized (this) {
+ // try to fast acquire area
+ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
+
+ if (prev == null) {
+ areaAffected.add(coordinate);
+ return ret;
+ } else if (prev.thread != currThread) {
+ park = prev;
+ } else {
+ // only one node we would want to acquire, and it's owned by this thread already
+ return ret;
+ }
+ }
+
+ ++failures;
+
+ if (failures > 128L && park.add(currThread)) {
+ LockSupport.park();
+ } else {
+ // high contention, spin wait
+ if (failures < 128L) {
+ for (long i = 0; i < failures; ++i) {
+ Thread.onSpinWait();
+ }
+ failures = failures << 1;
+ } else if (failures < 1_200L) {
+ LockSupport.parkNanos(1_000L);
+ failures = failures + 1L;
+ } else { // scale 0.1ms (100us) per failure
+ Thread.yield();
+ LockSupport.parkNanos(100_000L * failures);
+ failures = failures + 1L;
+ }
+ }
+ }
+ }
+
+ public Node lock(final int centerX, final int centerZ, final int radius) {
+ return this.lock(centerX - radius, centerZ - radius, centerX + radius, centerZ + radius);
+ }
+
+ public Node lock(final int fromX, final int fromZ, final int toX, final int toZ) {
+ if (fromX > toX || fromZ > toZ) {
+ throw new IllegalArgumentException();
+ }
+
+ final Thread currThread = Thread.currentThread();
+ final int shift = this.coordinateShift;
+ final int fromSectionX = fromX >> shift;
+ final int fromSectionZ = fromZ >> shift;
+ final int toSectionX = toX >> shift;
+ final int toSectionZ = toZ >> shift;
+
+ final LongArrayList areaAffected = new LongArrayList();
+
+ final Node ret = new Node(this, areaAffected, currThread);
+
+ for (long failures = 0L;;) {
+ Node park = null;
+ boolean addedToArea = false;
+
+ synchronized (this) {
+ // try to fast acquire area
+ for (int currZ = fromSectionZ; currZ <= toSectionZ; ++currZ) {
+ for (int currX = fromSectionX; currX <= toSectionX; ++currX) {
+ final long coordinate = key(currX, currZ);
+
+ final Node prev = this.nodes.putIfAbsent(coordinate, ret);
+
+ if (prev == null) {
+ addedToArea = true;
+ areaAffected.add(coordinate);
+ continue;
+ }
+
+ if (prev.thread != currThread) {
+ park = prev;
+ break;
+ }
+ }
+ }
+
+ if (park == null) {
+ return ret;
+ }
+
+ // failed, undo logic
+ if (!areaAffected.isEmpty()) {
+ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
+ final long key = areaAffected.getLong(i);
+
+ if (!this.nodes.remove(key, ret)) {
+ throw new IllegalStateException();
+ }
+ }
+ }
+ }
+
+ if (addedToArea) {
+ areaAffected.clear();
+ // since we inserted, we need to drain waiters
+ Thread unpark;
+ while ((unpark = ret.pollOrBlockAdds()) != null) {
+ LockSupport.unpark(unpark);
+ }
+ }
+
+ ++failures;
+
+ if (failures > 128L && park.add(currThread)) {
+ LockSupport.park();
+ } else {
+ // high contention, spin wait
+ if (failures < 128L) {
+ for (long i = 0; i < failures; ++i) {
+ Thread.onSpinWait();
+ }
+ failures = failures << 1;
+ } else if (failures < 1_200L) {
+ LockSupport.parkNanos(1_000L);
+ failures = failures + 1L;
+ } else { // scale 0.1ms (100us) per failure
+ Thread.yield();
+ LockSupport.parkNanos(100_000L * failures);
+ failures = failures + 1L;
+ }
+ }
+
+ if (addedToArea) {
+ // try again, so we need to allow adds so that other threads can properly block on us
+ ret.allowAdds();
+ }
+ }
+ }
+
+ public void unlock(final Node node) {
+ if (node.lock != this) {
+ throw new IllegalStateException("Unlock target lock mismatch");
+ }
+
+ final LongArrayList areaAffected = node.areaAffected;
+
+ if (areaAffected.isEmpty()) {
+ // here we are not in the node map, and so do not need to remove from the node map or unblock any waiters
+ return;
+ }
+
+ // remove from node map; allowing other threads to lock
+ synchronized (this) {
+ for (int i = 0, len = areaAffected.size(); i < len; ++i) {
+ final long coordinate = areaAffected.getLong(i);
+ if (!this.nodes.remove(coordinate, node)) {
+ throw new IllegalStateException();
+ }
+ }
+ }
+
+ Thread unpark;
+ while ((unpark = node.pollOrBlockAdds()) != null) {
+ LockSupport.unpark(unpark);
+ }
+ }
+
+ public static final class Node extends MultiThreadedQueue<Thread> {
+
+ private final SyncReentrantAreaLock lock;
+ private final LongArrayList areaAffected;
+ private final Thread thread;
+
+ private Node(final SyncReentrantAreaLock lock, final LongArrayList areaAffected, final Thread thread) {
+ this.lock = lock;
+ this.areaAffected = areaAffected;
+ this.thread = thread;
+ }
+ }
+}
diff --git a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
index 499c069d64692872924963d3a7ac39664b20468d..ef8ea36b2acefb935afda01396d2699e2921f396 100644
--- a/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
+++ b/src/main/java/ca/spottedleaf/starlight/common/light/StarLightInterface.java
@@ -41,14 +41,14 @@ public final class StarLightInterface {
protected final ArrayDeque<SkyStarLightEngine> cachedSkyPropagators;
protected final ArrayDeque<BlockStarLightEngine> cachedBlockPropagators;
- protected final LightQueue lightQueue = new LightQueue(this);
+ public final io.papermc.paper.chunk.system.light.LightQueue lightQueue; // Paper - replace light queue
protected final LayerLightEventListener skyReader;
protected final LayerLightEventListener blockReader;
protected final boolean isClientSide;
- protected final int minSection;
- protected final int maxSection;
+ public final int minSection; // Paper - public
+ public final int maxSection; // Paper - public
protected final int minLightSection;
protected final int maxLightSection;
@@ -182,6 +182,7 @@ public final class StarLightInterface {
StarLightInterface.this.sectionChange(pos, notReady);
}
};
+ this.lightQueue = new io.papermc.paper.chunk.system.light.LightQueue(this); // Paper - replace light queue
}
public boolean hasSkyLight() {
@@ -333,7 +334,7 @@ public final class StarLightInterface {
return this.lightAccess;
}
- protected final SkyStarLightEngine getSkyLightEngine() {
+ public final SkyStarLightEngine getSkyLightEngine() { // Paper - public
if (this.cachedSkyPropagators == null) {
return null;
}
@@ -348,7 +349,7 @@ public final class StarLightInterface {
return ret;
}
- protected final void releaseSkyLightEngine(final SkyStarLightEngine engine) {
+ public final void releaseSkyLightEngine(final SkyStarLightEngine engine) { // Paper - public
if (this.cachedSkyPropagators == null) {
return;
}
@@ -357,7 +358,7 @@ public final class StarLightInterface {
}
}
- protected final BlockStarLightEngine getBlockLightEngine() {
+ public final BlockStarLightEngine getBlockLightEngine() { // Paper - public
if (this.cachedBlockPropagators == null) {
return null;
}
@@ -372,7 +373,7 @@ public final class StarLightInterface {
return ret;
}
- protected final void releaseBlockLightEngine(final BlockStarLightEngine engine) {
+ public final void releaseBlockLightEngine(final BlockStarLightEngine engine) { // Paper - public
if (this.cachedBlockPropagators == null) {
return;
}
@@ -381,7 +382,7 @@ public final class StarLightInterface {
}
}
- public LightQueue.ChunkTasks blockChange(final BlockPos pos) {
+ public io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks blockChange(final BlockPos pos) { // Paper - rewrite chunk system
if (this.world == null || pos.getY() < WorldUtil.getMinBlockY(this.world) || pos.getY() > WorldUtil.getMaxBlockY(this.world)) { // empty world
return null;
}
@@ -389,7 +390,7 @@ public final class StarLightInterface {
return this.lightQueue.queueBlockChange(pos);
}
- public LightQueue.ChunkTasks sectionChange(final SectionPos pos, final boolean newEmptyValue) {
+ public io.papermc.paper.chunk.system.light.LightQueue.ChunkTasks sectionChange(final SectionPos pos, final boolean newEmptyValue) { // Paper - rewrite chunk system
if (this.world == null) { // empty world
return null;
}
@@ -519,57 +520,15 @@ public final class StarLightInterface {
}
public void scheduleChunkLight(final ChunkPos pos, final Runnable run) {
- this.lightQueue.queueChunkLighting(pos, run);
+ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
}
public void removeChunkTasks(final ChunkPos pos) {
- this.lightQueue.removeChunk(pos);
+ throw new UnsupportedOperationException("No longer implemented, use the new lightQueue field to queue tasks"); // Paper - replace light queue
}
public void propagateChanges() {
- if (this.lightQueue.isEmpty()) {
- return;
- }
-
- final SkyStarLightEngine skyEngine = this.getSkyLightEngine();
- final BlockStarLightEngine blockEngine = this.getBlockLightEngine();
-
- try {
- LightQueue.ChunkTasks task;
- while ((task = this.lightQueue.removeFirstTask()) != null) {
- if (task.lightTasks != null) {
- for (final Runnable run : task.lightTasks) {
- run.run();
- }
- }
-
- final long coordinate = task.chunkCoordinate;
- final int chunkX = CoordinateUtils.getChunkX(coordinate);
- final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
-
- final Set<BlockPos> positions = task.changedPositions;
- final Boolean[] sectionChanges = task.changedSectionSet;
-
- if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
- skyEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
- }
- if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
- blockEngine.blocksChangedInChunk(this.lightAccess, chunkX, chunkZ, positions, sectionChanges);
- }
-
- if (skyEngine != null && task.queuedEdgeChecksSky != null) {
- skyEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksSky);
- }
- if (blockEngine != null && task.queuedEdgeChecksBlock != null) {
- blockEngine.checkChunkEdges(this.lightAccess, chunkX, chunkZ, task.queuedEdgeChecksBlock);
- }
-
- task.onComplete.complete(null);
- }
- } finally {
- this.releaseSkyLightEngine(skyEngine);
- this.releaseBlockLightEngine(blockEngine);
- }
+ throw new UnsupportedOperationException("No longer implemented, task draining is now performed by the light thread"); // Paper - replace light queue
}
public static final class LightQueue {
diff --git a/src/main/java/co/aikar/timings/WorldTimingsHandler.java b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
index 2f0d9b953802dee821cfde82d22b0567cce8ee91..22687667ec69a954261e55e59261286ac1b8b8cd 100644
--- a/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+++ b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
@@ -59,6 +59,16 @@ public class WorldTimingsHandler {
public final Timing miscMobSpawning;
+ public final Timing poiUnload;
+ public final Timing chunkUnload;
+ public final Timing poiSaveDataSerialization;
+ public final Timing chunkSave;
+ public final Timing chunkSaveDataSerialization;
+ public final Timing chunkSaveIOWait;
+ public final Timing chunkUnloadPrepareSave;
+ public final Timing chunkUnloadPOISerialization;
+ public final Timing chunkUnloadDataSave;
+
public WorldTimingsHandler(Level server) {
String name = ((PrimaryLevelData) server.getLevelData()).getLevelName() + " - ";
@@ -112,6 +122,16 @@ public class WorldTimingsHandler {
miscMobSpawning = Timings.ofSafe(name + "Mob spawning - Misc");
+
+ poiUnload = Timings.ofSafe(name + "Chunk unload - POI");
+ chunkUnload = Timings.ofSafe(name + "Chunk unload - Chunk");
+ poiSaveDataSerialization = Timings.ofSafe(name + "Chunk save - POI Data serialization");
+ chunkSave = Timings.ofSafe(name + "Chunk save - Chunk");
+ chunkSaveDataSerialization = Timings.ofSafe(name + "Chunk save - Chunk Data serialization");
+ chunkSaveIOWait = Timings.ofSafe(name + "Chunk save - Chunk IO Wait");
+ chunkUnloadPrepareSave = Timings.ofSafe(name + "Chunk unload - Async Save Prepare");
+ chunkUnloadPOISerialization = Timings.ofSafe(name + "Chunk unload - POI Data Serialization");
+ chunkUnloadDataSave = Timings.ofSafe(name + "Chunk unload - Data Serialization");
}
public static Timing getTickList(ServerLevel worldserver, String timingsType) {
diff --git a/src/main/java/com/destroystokyo/paper/io/IOUtil.java b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
new file mode 100644
index 0000000000000000000000000000000000000000..e064f96c90afd1a4890060baa055cfd0469b6a6f
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
@@ -0,0 +1,63 @@
+package com.destroystokyo.paper.io;
+
+import org.bukkit.Bukkit;
+
+@Deprecated(forRemoval = true)
+public final class IOUtil {
+
+ /* Copied from concrete or concurrentutil */
+
+ public static long getCoordinateKey(final int x, final int z) {
+ return ((long)z << 32) | (x & 0xFFFFFFFFL);
+ }
+
+ public static int getCoordinateX(final long key) {
+ return (int)key;
+ }
+
+ public static int getCoordinateZ(final long key) {
+ return (int)(key >>> 32);
+ }
+
+ public static int getRegionCoordinate(final int chunkCoordinate) {
+ return chunkCoordinate >> 5;
+ }
+
+ public static int getChunkInRegion(final int chunkCoordinate) {
+ return chunkCoordinate & 31;
+ }
+
+ public static String genericToString(final Object object) {
+ return object == null ? "null" : object.getClass().getName() + ":" + object.toString();
+ }
+
+ public static <T> T notNull(final T obj) {
+ if (obj == null) {
+ throw new NullPointerException();
+ }
+ return obj;
+ }
+
+ public static <T> T notNull(final T obj, final String msgIfNull) {
+ if (obj == null) {
+ throw new NullPointerException(msgIfNull);
+ }
+ return obj;
+ }
+
+ public static void arrayBounds(final int off, final int len, final int arrayLength, final String msgPrefix) {
+ if (off < 0 || len < 0 || (arrayLength - off) < len) {
+ throw new ArrayIndexOutOfBoundsException(msgPrefix + ": off: " + off + ", len: " + len + ", array length: " + arrayLength);
+ }
+ }
+
+ public static int getPriorityForCurrentThread() {
+ return Bukkit.isPrimaryThread() ? PrioritizedTaskQueue.HIGHEST_PRIORITY : PrioritizedTaskQueue.NORMAL_PRIORITY;
+ }
+
+ @SuppressWarnings("unchecked")
+ public static <T extends Throwable> void rethrow(final Throwable throwable) throws T {
+ throw (T)throwable;
+ }
+
+}
diff --git a/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
new file mode 100644
index 0000000000000000000000000000000000000000..f2c27e0ac65be4b75c1d86ef6fd45fdb538d96ac
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
@@ -0,0 +1,474 @@
+package com.destroystokyo.paper.io;
+
+import com.mojang.logging.LogUtils;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.storage.RegionFile;
+import org.slf4j.Logger;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+/**
+ * Prioritized singleton thread responsible for all chunk IO that occurs in a minecraft server.
+ * <p>
+ * Singleton access: {@link Holder#INSTANCE}
+ * </p>
+ * <p>
+ * All functions provided are MT-Safe, however certain ordering constraints are recommended (but not enforced):
+ * </p>
+ * <ul>
+ * <li>Chunk saves may not occur for unloaded chunks.</li>
+ * <li>Tasks must be scheduled on the main thread.</li>
+ * </ul>
+ *
+ * @see Holder#INSTANCE
+ * @see #scheduleSave(ServerLevel, int, int, CompoundTag, CompoundTag, int)
+ * @see #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)
+ * @deprecated
+ */
+@Deprecated(forRemoval = true)
+public final class PaperFileIOThread extends QueueExecutorThread<PaperFileIOThread.ChunkDataTask> {
+
+ public static final Logger LOGGER = LogUtils.getLogger();
+ public static final CompoundTag FAILURE_VALUE = new CompoundTag();
+
+ public static final class Holder {
+
+ public static final PaperFileIOThread INSTANCE = new PaperFileIOThread();
+
+ static {
+ // Paper - fail hard on usage
+ }
+ }
+
+ private final AtomicLong writeCounter = new AtomicLong();
+
+ private PaperFileIOThread() {
+ super(new PrioritizedTaskQueue<>(), (int)(1.0e6)); // 1.0ms spinwait time
+ this.setName("Paper RegionFile IO Thread");
+ this.setPriority(Thread.NORM_PRIORITY - 1); // we keep priority close to normal because threads can wait on us
+ this.setUncaughtExceptionHandler((final Thread unused, final Throwable thr) -> {
+ LOGGER.error("Uncaught exception thrown from IO thread, report this!", thr);
+ });
+ }
+
+ /* run() is implemented by superclass */
+
+ /*
+ *
+ * IO thread will perform reads before writes
+ *
+ * How reads/writes are scheduled:
+ *
+ * If read in progress while scheduling write, ignore read and schedule write
+ * If read in progress while scheduling read (no write in progress), chain the read task
+ *
+ *
+ * If write in progress while scheduling read, use the pending write data and ret immediately
+ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
+ *
+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
+ * it fails to properly propagate write failures. When writes fail the data is kept so future reads will actually
+ * read the failed write data. This should hopefully act as a way to prevent data loss for spurious fails for writing data.
+ *
+ */
+
+ /**
+ * Attempts to bump the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param priority Priority level to try to bump to
+ */
+ public void bumpPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ public CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final boolean poiData) {
+ // Paper start - rewrite chunk system
+ return io.papermc.paper.chunk.system.io.RegionFileIOThread.getPendingWrite(
+ world, chunkX, chunkZ, poiData ? io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA :
+ io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA
+ );
+ // Paper end - rewrite chunk system
+ }
+
+ /**
+ * Sets the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param priority Priority level to set to
+ */
+ public void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ /**
+ * Schedules the chunk data to be written asynchronously.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <ul>
+ * <li>This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
+ * saves must be scheduled before a chunk is unloaded.</li>
+ * <li>Writes may be called concurrently, although only the "later" write will go through.</li>
+ * </ul>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param poiData Chunk point of interest data. If {@code null}, then no poi data is saved.
+ * @param chunkData Chunk data. If {@code null}, then no chunk data is saved.
+ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
+ * @throws IllegalArgumentException If both {@code poiData} and {@code chunkData} are {@code null}.
+ * @throws IllegalStateException If the file io thread has shutdown.
+ */
+ public void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ,
+ final CompoundTag poiData, final CompoundTag chunkData,
+ final int priority) throws IllegalArgumentException {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ private void scheduleWrite(final ChunkDataController dataController, final ServerLevel world,
+ final int chunkX, final int chunkZ, final CompoundTag data, final int priority, final long writeCounter) {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ /**
+ * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
+ * a {@link CompletableFuture} which is potentially completed ASYNCHRONOUSLY ON THE FILE IO THREAD when the load task
+ * has completed.
+ * <p>
+ * Note that if the chunk fails to load the returned future is completed with {@code null}.
+ * </p>
+ */
+ public CompletableFuture<ChunkData> loadChunkDataAsyncFuture(final ServerLevel world, final int chunkX, final int chunkZ,
+ final int priority, final boolean readPoiData, final boolean readChunkData,
+ final boolean intendingToBlock) {
+ final CompletableFuture<ChunkData> future = new CompletableFuture<>();
+ this.loadChunkDataAsync(world, chunkX, chunkZ, priority, future::complete, readPoiData, readChunkData, intendingToBlock);
+ return future;
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <ul>
+ * <li>If a chunk fails to load, the {@code onComplete} parameter is completed with {@code null}.</li>
+ * <li>It is possible for the {@code onComplete} parameter to be given {@link ChunkData} containing data
+ * this call did not request.</li>
+ * <li>The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.</li>
+ * </ul>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
+ * @param onComplete Consumer to execute once this task has completed
+ * @param readPoiData Whether to read point of interest data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
+ * @param readChunkData Whether to read chunk data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
+ */
+ public void loadChunkDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
+ final int priority, final Consumer<ChunkData> onComplete,
+ final boolean readPoiData, final boolean readChunkData,
+ final boolean intendingToBlock) {
+ if (!PrioritizedTaskQueue.validPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority: " + priority);
+ }
+
+ if (!(readPoiData | readChunkData)) {
+ throw new IllegalArgumentException("Must read chunk data or poi data");
+ }
+
+ final ChunkData complete = new ChunkData();
+ // Paper start - rewrite chunk system
+ final java.util.List<io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType> types = new java.util.ArrayList<>();
+ if (readPoiData) {
+ types.add(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA);
+ }
+ if (readChunkData) {
+ types.add(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA);
+ }
+ final ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority newPriority;
+ switch (priority) {
+ case PrioritizedTaskQueue.HIGHEST_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.BLOCKING;
+ case PrioritizedTaskQueue.HIGHER_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.HIGHEST;
+ case PrioritizedTaskQueue.HIGH_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.HIGH;
+ case PrioritizedTaskQueue.NORMAL_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.NORMAL;
+ case PrioritizedTaskQueue.LOW_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.LOW;
+ case PrioritizedTaskQueue.LOWEST_PRIORITY -> newPriority = ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor.Priority.IDLE;
+ default -> throw new IllegalStateException("Legacy priority " + priority + " should be valid");
+ }
+ final Consumer<io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileData> transformComplete = (final io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileData data) -> {
+ if (readPoiData) {
+ if (data.getThrowable(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA) != null) {
+ complete.poiData = FAILURE_VALUE;
+ } else {
+ complete.poiData = data.getData(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.POI_DATA);
+ }
+ }
+
+ if (readChunkData) {
+ if (data.getThrowable(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA) != null) {
+ complete.chunkData = FAILURE_VALUE;
+ } else {
+ complete.chunkData = data.getData(io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType.CHUNK_DATA);
+ }
+ }
+
+ onComplete.accept(complete);
+ };
+ io.papermc.paper.chunk.system.io.RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, transformComplete, intendingToBlock, newPriority, types.toArray(new io.papermc.paper.chunk.system.io.RegionFileIOThread.RegionFileType[0]));
+ // Paper end - rewrite chunk system
+
+ }
+
+ // Note: the onComplete may be called asynchronously or synchronously here.
+ private void scheduleRead(final ChunkDataController dataController, final ServerLevel world,
+ final int chunkX, final int chunkZ, final Consumer<CompoundTag> onComplete, final int priority,
+ final boolean intendingToBlock) {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ /**
+ * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
+ * the {@link ChunkData} associated with the specified chunk when the task is complete.
+ * @return The chunk data, or {@code null} if the chunk failed to load.
+ */
+ public ChunkData loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
+ final boolean readPoiData, final boolean readChunkData) {
+ return this.loadChunkDataAsyncFuture(world, chunkX, chunkZ, priority, readPoiData, readChunkData, true).join();
+ }
+
+ /**
+ * Schedules the given task at the specified priority to be executed on the IO thread.
+ *
+ * Internal api. Do not use.
+ *
+ */
+ public void runTask(final int priority, final Runnable runnable) {
+ throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ }
+
+ static final class GeneralTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
+
+ private final Runnable run;
+
+ public GeneralTask(final int priority, final Runnable run) {
+ super(priority);
+ this.run = IOUtil.notNull(run, "Task may not be null");
+ }
+
+ @Override
+ public void run() {
+ try {
+ this.run.run();
+ } catch (final Throwable throwable) {
+ if (throwable instanceof ThreadDeath) {
+ throw (ThreadDeath)throwable;
+ }
+ LOGGER.error("Failed to execute general task on IO thread " + IOUtil.genericToString(this.run), throwable);
+ }
+ }
+ }
+
+ public static final class ChunkData {
+
+ public CompoundTag poiData;
+ public CompoundTag chunkData;
+
+ public ChunkData() {}
+
+ public ChunkData(final CompoundTag poiData, final CompoundTag chunkData) {
+ this.poiData = poiData;
+ this.chunkData = chunkData;
+ }
+ }
+
+ public static abstract class ChunkDataController {
+
+ // ConcurrentHashMap synchronizes per chain, so reduce the chance of task's hashes colliding.
+ public final ConcurrentHashMap<Long, ChunkDataTask> tasks = new ConcurrentHashMap<>(64, 0.5f);
+
+ public abstract void writeData(final int x, final int z, final CompoundTag compound) throws IOException;
+ public abstract CompoundTag readData(final int x, final int z) throws IOException;
+
+ public abstract <T> T computeForRegionFile(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
+ public abstract <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
+
+ public static final class InProgressWrite {
+ public long writeCounter;
+ public CompoundTag data;
+ }
+
+ public static final class InProgressRead {
+ public final CompletableFuture<CompoundTag> readFuture = new CompletableFuture<>();
+ }
+ }
+
+ public static final class ChunkDataTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
+
+ public ChunkDataController.InProgressWrite inProgressWrite;
+ public ChunkDataController.InProgressRead inProgressRead;
+
+ private final ServerLevel world;
+ private final int x;
+ private final int z;
+ private final ChunkDataController taskController;
+
+ public ChunkDataTask(final int priority, final ServerLevel world, final int x, final int z, final ChunkDataController taskController) {
+ super(priority);
+ this.world = world;
+ this.x = x;
+ this.z = z;
+ this.taskController = taskController;
+ }
+
+ @Override
+ public String toString() {
+ return "Task for world: '" + this.world.getWorld().getName() + "' at " + this.x + "," + this.z +
+ " poi: " + (this.taskController == null) + ", hash: " + this.hashCode(); // Paper - TODO rewrite chunk system
+ }
+
+ /*
+ *
+ * IO thread will perform reads before writes
+ *
+ * How reads/writes are scheduled:
+ *
+ * If read in progress while scheduling write, ignore read and schedule write
+ * If read in progress while scheduling read (no write in progress), chain the read task
+ *
+ *
+ * If write in progress while scheduling read, use the pending write data and ret immediately
+ * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
+ *
+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
+ * it fails to properly propagate write failures
+ *
+ */
+
+ void reschedule(final int priority) {
+ // priority is checked before this stage // TODO what
+ this.queue.lazySet(null);
+ this.priority.lazySet(priority);
+ PaperFileIOThread.Holder.INSTANCE.queueTask(this);
+ }
+
+ @Override
+ public void run() {
+ if (true) throw new IllegalStateException("Shouldn't get here, use RegionFileIOThread"); // Paper - rewrite chunk system, fail hard on usage
+ ChunkDataController.InProgressRead read = this.inProgressRead;
+ if (read != null) {
+ CompoundTag compound = PaperFileIOThread.FAILURE_VALUE;
+ try {
+ compound = this.taskController.readData(this.x, this.z);
+ } catch (final Throwable thr) {
+ if (thr instanceof ThreadDeath) {
+ throw (ThreadDeath)thr;
+ }
+ LOGGER.error("Failed to read chunk data for task: " + this.toString(), thr);
+ // fall through to complete with null data
+ }
+ read.readFuture.complete(compound);
+ }
+
+ final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(this.x, this.z));
+
+ ChunkDataController.InProgressWrite write = this.inProgressWrite;
+
+ if (write == null) {
+ // IntelliJ warns this is invalid, however it does not consider that writes to the task map & the inProgress field can occur concurrently.
+ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
+ if (valueInMap == null) {
+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+ }
+ if (valueInMap != ChunkDataTask.this) {
+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+ }
+ return valueInMap.inProgressWrite == null ? null : valueInMap;
+ });
+
+ if (inMap == null) {
+ return; // set the task value to null, indicating we're done
+ }
+
+ // not null, which means there was a concurrent write
+ write = this.inProgressWrite;
+ }
+
+ for (;;) {
+ final long writeCounter;
+ final CompoundTag data;
+
+ //noinspection SynchronizationOnLocalVariableOrMethodParameter
+ synchronized (write) {
+ writeCounter = write.writeCounter;
+ data = write.data;
+ }
+
+ boolean failedWrite = false;
+
+ try {
+ this.taskController.writeData(this.x, this.z, data);
+ } catch (final Throwable thr) {
+ if (thr instanceof ThreadDeath) {
+ throw (ThreadDeath)thr;
+ }
+ LOGGER.error("Failed to write chunk data for task: " + this.toString(), thr);
+ failedWrite = true;
+ }
+
+ boolean finalFailWrite = failedWrite;
+
+ ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
+ if (valueInMap == null) {
+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+ }
+ if (valueInMap != ChunkDataTask.this) {
+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+ }
+ if (valueInMap.inProgressWrite.writeCounter == writeCounter) {
+ if (finalFailWrite) {
+ valueInMap.inProgressWrite.writeCounter = -1L;
+ }
+
+ return null;
+ }
+ return valueInMap;
+ // Hack end
+ });
+
+ if (inMap == null) {
+ // write counter matched, so we wrote the most up-to-date pending data, we're done here
+ // or we failed to write and successfully set the write counter to -1
+ return; // we're done here
+ }
+
+ // fetch & write new data
+ continue;
+ }
+ }
+ }
+}
diff --git a/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
new file mode 100644
index 0000000000000000000000000000000000000000..7844a3515430472bd829ff246396bceb0797de1b
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
@@ -0,0 +1,299 @@
+package com.destroystokyo.paper.io;
+
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+@Deprecated(forRemoval = true)
+public class PrioritizedTaskQueue<T extends PrioritizedTaskQueue.PrioritizedTask> {
+
+ // lower numbers are a higher priority (except < 0)
+ // higher priorities are always executed before lower priorities
+
+ /**
+ * Priority value indicating the task has completed or is being completed.
+ */
+ public static final int COMPLETING_PRIORITY = -1;
+
+ /**
+ * Highest priority, should only be used for main thread tasks or tasks that are blocking the main thread.
+ */
+ public static final int HIGHEST_PRIORITY = 0;
+
+ /**
+ * Should be only used in an IO task so that chunk loads do not wait on other IO tasks.
+ * This only exists because IO tasks are scheduled before chunk load tasks to decrease IO waiting times.
+ */
+ public static final int HIGHER_PRIORITY = 1;
+
+ /**
+ * Should be used for scheduling chunk loads/generation that would increase response times to users.
+ */
+ public static final int HIGH_PRIORITY = 2;
+
+ /**
+ * Default priority.
+ */
+ public static final int NORMAL_PRIORITY = 3;
+
+ /**
+ * Use for tasks not at all critical and can potentially be delayed.
+ */
+ public static final int LOW_PRIORITY = 4;
+
+ /**
+ * Use for tasks that should "eventually" execute.
+ */
+ public static final int LOWEST_PRIORITY = 5;
+
+ private static final int TOTAL_PRIORITIES = 6;
+
+ final ConcurrentLinkedQueue<T>[] queues = (ConcurrentLinkedQueue<T>[])new ConcurrentLinkedQueue[TOTAL_PRIORITIES];
+
+ private final AtomicBoolean shutdown = new AtomicBoolean();
+
+ {
+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+ this.queues[i] = new ConcurrentLinkedQueue<>();
+ }
+ }
+
+ /**
+ * Returns whether the specified priority is valid
+ */
+ public static boolean validPriority(final int priority) {
+ return priority >= 0 && priority < TOTAL_PRIORITIES;
+ }
+
+ /**
+ * Queues a task.
+ * @throws IllegalStateException If the task has already been queued. Use {@link PrioritizedTask#raisePriority(int)} to
+ * raise a task's priority.
+ * This can also be thrown if the queue has shutdown.
+ */
+ public void add(final T task) throws IllegalStateException {
+ int priority = task.getPriority();
+ if (priority != COMPLETING_PRIORITY) {
+ task.setQueue(this);
+ this.queues[priority].add(task);
+ }
+ if (this.shutdown.get()) {
+ // note: we're not actually sure at this point if our task will go through
+ throw new IllegalStateException("Queue has shutdown, refusing to execute task " + IOUtil.genericToString(task));
+ }
+ }
+
+ /**
+ * Polls the highest priority task currently available. {@code null} if none.
+ */
+ public T poll() {
+ T task;
+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
+
+ while ((task = queue.poll()) != null) {
+ final int prevPriority = task.tryComplete(i);
+ if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
+ // if the prev priority was greater-than or equal to our current priority
+ return task;
+ }
+ }
+ }
+
+ return null;
+ }
+
+ /**
+ * Polls the highest priority task currently available whose priority is numerically at most {@code lowestPriority}, returning {@code null} if none.
+ */
+ public T poll(final int lowestPriority) {
+ T task;
+ final int max = Math.min(LOWEST_PRIORITY, lowestPriority);
+ for (int i = 0; i <= max; ++i) {
+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
+
+ while ((task = queue.poll()) != null) {
+ final int prevPriority = task.tryComplete(i);
+ if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
+ // if the prev priority was greater-than or equal to our current priority
+ return task;
+ }
+ }
+ }
+
+ return null;
+ }
+
+ /**
+ * Returns whether this queue may have tasks queued.
+ *
+ * This operation is not atomic, but is MT-Safe.
+ *
+ * @return {@code true} if tasks may be queued, {@code false} otherwise
+ */
+ public boolean hasTasks() {
+ for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+ final ConcurrentLinkedQueue<T> queue = this.queues[i];
+
+ if (queue.peek() != null) {
+ return true;
+ }
+ }
+ return false;
+ }
+
+ /**
+ * Prevents further additions to this queue. Attempts to add tasks after this call has completed (or potentially
+ * while it is in progress) will result in {@link IllegalStateException} being thrown.
+ *
+ * This operation is atomic with respect to other shutdown calls.
+ *
+ * After this call has completed, regardless of return value, this queue will be shut down.
+ *
+ * @return {@code true} if this call shut the queue down, {@code false} if it had already been shut down
+ */
+ public boolean shutdown() {
+ return !this.shutdown.getAndSet(true);
+ }
+
+ public abstract static class PrioritizedTask {
+
+ protected final AtomicReference<PrioritizedTaskQueue> queue = new AtomicReference<>();
+
+ protected final AtomicInteger priority;
+
+ protected PrioritizedTask() {
+ this(PrioritizedTaskQueue.NORMAL_PRIORITY);
+ }
+
+ protected PrioritizedTask(final int priority) {
+ if (!PrioritizedTaskQueue.validPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority " + priority);
+ }
+ this.priority = new AtomicInteger(priority);
+ }
+
+ /**
+ * Returns the current priority. Note that {@link PrioritizedTaskQueue#COMPLETING_PRIORITY} will be returned
+ * if this task is completing or has completed.
+ */
+ public final int getPriority() {
+ return this.priority.get();
+ }
+
+ /**
+ * Returns whether this task is scheduled to execute, or has already been executed.
+ */
+ public boolean isScheduled() {
+ return this.queue.get() != null;
+ }
+
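+ // tryComplete() is the poll-side filter: it atomically swaps the priority to
+ // COMPLETING_PRIORITY so that a task present in multiple queues (after
+ // raisePriority/updatePriority re-queued it) is returned exactly once. The
+ // curr == (curr = compareAndExchangePriorityVolatile(curr, ...)) idiom succeeds
+ // only when the CAS witness equals the expected value; otherwise curr is
+ // reloaded and the loop retries. A witness numerically above minPriority leaves
+ // the task to be returned from the queue matching its current (lowered) priority.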
+ final int tryComplete(final int minPriority) {
+ for (int curr = this.getPriorityVolatile();;) {
+ if (curr == COMPLETING_PRIORITY) {
+ return COMPLETING_PRIORITY;
+ }
+ if (curr > minPriority) {
+ // curr is lower priority
+ return curr;
+ }
+
+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, COMPLETING_PRIORITY))) {
+ return curr;
+ }
+ continue;
+ }
+ }
+
+ /**
+ * Forces this task to be completed.
+ * @return {@code true} if the task was cancelled, {@code false} if the task has already completed or is being completed.
+ */
+ public boolean cancel() {
+ return this.exchangePriorityVolatile(PrioritizedTaskQueue.COMPLETING_PRIORITY) != PrioritizedTaskQueue.COMPLETING_PRIORITY;
+ }
+
+ /**
+ * Attempts to raise the priority to the priority level specified.
+ * @param priority Priority specified
+ * @return {@code true} if successful, {@code false} otherwise.
+ */
+ public boolean raisePriority(final int priority) {
+ if (!PrioritizedTaskQueue.validPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority");
+ }
+
+ for (int curr = this.getPriorityVolatile();;) {
+ if (curr == COMPLETING_PRIORITY) {
+ return false;
+ }
+ if (priority >= curr) {
+ return true;
+ }
+
+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
+ PrioritizedTaskQueue queue = this.queue.get();
+ if (queue != null) {
+ //noinspection unchecked
+ queue.queues[priority].add(this); // silently fail on shutdown
+ }
+ return true;
+ }
+ continue;
+ }
+ }
+
+ /**
+ * Attempts to set this task's priority level to the level specified.
+ * @param priority Specified priority level.
+ * @return {@code true} if successful, {@code false} if this task is completing or has completed.
+ */
+ public boolean updatePriority(final int priority) {
+ if (!PrioritizedTaskQueue.validPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority");
+ }
+
+ for (int curr = this.getPriorityVolatile();;) {
+ if (curr == COMPLETING_PRIORITY) {
+ return false;
+ }
+ if (curr == priority) {
+ return true;
+ }
+
+ if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
+ PrioritizedTaskQueue queue = this.queue.get();
+ if (queue != null) {
+ //noinspection unchecked
+ queue.queues[priority].add(this); // silently fail on shutdown
+ }
+ return true;
+ }
+ continue;
+ }
+ }
+
+ void setQueue(final PrioritizedTaskQueue queue) {
+ this.queue.set(queue);
+ }
+
+ /* priority */
+
+ protected final int getPriorityVolatile() {
+ return this.priority.get();
+ }
+
+ protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
+ if (this.priority.compareAndSet(expect, update)) {
+ return expect;
+ }
+ return this.priority.get();
+ }
+
+ protected final int exchangePriorityVolatile(final int value) {
+ return this.priority.getAndSet(value);
+ }
+ }
+}
diff --git a/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
new file mode 100644
index 0000000000000000000000000000000000000000..99f49b5625cf51d6c97640553cf5c420bb6fdd36
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
@@ -0,0 +1,255 @@
+package com.destroystokyo.paper.io;
+
+import com.mojang.logging.LogUtils;
+import org.slf4j.Logger;
+
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.LockSupport;
+
+@Deprecated(forRemoval = true)
+public class QueueExecutorThread<T extends PrioritizedTaskQueue.PrioritizedTask & Runnable> extends Thread {
+
+ private static final Logger LOGGER = LogUtils.getLogger();
+
+ protected final PrioritizedTaskQueue<T> queue;
+ protected final long spinWaitTime;
+
+ protected volatile boolean closed;
+
+ protected final AtomicBoolean parked = new AtomicBoolean();
+
+ protected volatile ConcurrentLinkedQueue<Thread> flushQueue = new ConcurrentLinkedQueue<>();
+ protected volatile long flushCycles;
+
+ protected int lowestPriorityToPoll = PrioritizedTaskQueue.LOWEST_PRIORITY;
+
+ public int getLowestPriorityToPoll() {
+ return this.lowestPriorityToPoll;
+ }
+
+ public void setLowestPriorityToPoll(final int lowestPriorityToPoll) {
+ if (this.isAlive()) {
+ throw new IllegalStateException("Cannot set after starting");
+ }
+ this.lowestPriorityToPoll = lowestPriorityToPoll;
+ }
+
+ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue) {
+ this(queue, (int)(1.e6)); // 1.0ms
+ }
+
+ public QueueExecutorThread(final PrioritizedTaskQueue<T> queue, final long spinWaitTime) { // in ns
+ this.queue = queue;
+ this.spinWaitTime = spinWaitTime;
+ }
+
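+ // run() below uses a three-stage wait strategy: (1) drain the queue, (2) spinwait
+ // with 1us parkNanos for up to spinWaitTime to catch bursty task submission cheaply,
+ // (3) set the parked flag, re-check the queue to close the race with notifyTasks(),
+ // and finally park until unparked. notifyTasks() only unparks this thread when it
+ // wins the parked.getAndSet(false) exchange, so each sleep costs at most one unpark.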
+ @Override
+ public void run() {
+ final long spinWaitTime = this.spinWaitTime;
+ main_loop:
+ for (;;) {
+ this.pollTasks(true);
+
+ // spinwait
+
+ final long start = System.nanoTime();
+
+ for (;;) {
+ // If we are interrupted for any reason, park() will always return immediately. Clear so that we don't needlessly use cpu in such an event.
+ Thread.interrupted();
+ LockSupport.parkNanos("Spinwaiting on tasks", 1000L); // 1us
+
+ if (this.pollTasks(true)) {
+ // restart loop, found tasks
+ continue main_loop;
+ }
+
+ if (this.handleClose()) {
+ return; // we're done
+ }
+
+ if ((System.nanoTime() - start) >= spinWaitTime) {
+ break;
+ }
+ }
+
+ if (this.handleClose()) {
+ return;
+ }
+
+ this.parked.set(true);
+
+ // We need to poll here to avoid a race condition where a thread queues a task before we set parked to true
+ // (i.e. it will not notify us)
+ if (this.pollTasks(true)) {
+ this.parked.set(false);
+ continue;
+ }
+
+ if (this.handleClose()) {
+ return;
+ }
+
+ // we don't need to check parked before sleeping, but we do need to re-check parked in a do-while loop
+ // LockSupport.park() can return spuriously for any reason
+ do {
+ Thread.interrupted();
+ LockSupport.park("Waiting on tasks");
+ } while (this.parked.get());
+ }
+ }
+
+ protected boolean handleClose() {
+ if (this.closed) {
+ this.pollTasks(true); // this ensures we've emptied the queue
+ this.handleFlushThreads(true);
+ return true;
+ }
+ return false;
+ }
+
+ protected boolean pollTasks(boolean flushTasks) {
+ T task;
+ boolean ret = false;
+
+ while ((task = this.queue.poll(this.lowestPriorityToPoll)) != null) {
+ ret = true;
+ try {
+ task.run();
+ } catch (final Throwable throwable) {
+ if (throwable instanceof ThreadDeath) {
+ throw (ThreadDeath)throwable;
+ }
+ LOGGER.error("Exception thrown from prioritized runnable task in thread '" + this.getName() + "': " + IOUtil.genericToString(task), throwable);
+ }
+ }
+
+ if (flushTasks) {
+ this.handleFlushThreads(false);
+ }
+
+ return ret;
+ }
+
+ protected void handleFlushThreads(final boolean shutdown) {
+ Thread parking;
+ ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
+ do {
+ ++flushCycles; // may be plain read opaque write
+ while ((parking = flushQueue.poll()) != null) {
+ LockSupport.unpark(parking);
+ }
+ } while (this.pollTasks(false));
+
+ if (shutdown) {
+ this.flushQueue = null;
+
+ // defend against a race condition where a flush thread double-checks right before we set to null
+ while ((parking = flushQueue.poll()) != null) {
+ LockSupport.unpark(parking);
+ }
+ }
+ }
+
+ /**
+ * Notifies this thread that a task has been added to its queue.
+ * @return {@code true} if this thread was waiting for tasks, {@code false} if it is executing tasks
+ */
+ public boolean notifyTasks() {
+ if (this.parked.get() && this.parked.getAndSet(false)) {
+ LockSupport.unpark(this);
+ return true;
+ }
+ return false;
+ }
+
+ protected void queueTask(final T task) {
+ this.queue.add(task);
+ this.notifyTasks();
+ }
+
+ /**
+ * Waits until this thread's queue is empty.
+ *
+ * @throws IllegalStateException If the current thread is {@code this} thread.
+ */
+ public void flush() {
+ final Thread currentThread = Thread.currentThread();
+
+ if (currentThread == this) {
+ // avoid deadlock
+ throw new IllegalStateException("Cannot flush the queue executor thread while on the queue executor thread");
+ }
+
+ // order is important
+
+ int successes = 0;
+ long lastCycle = -1L;
+
+ do {
+ final ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
+ if (flushQueue == null) {
+ return;
+ }
+
+ flushQueue.add(currentThread);
+
+ // double check flush queue
+ if (this.flushQueue == null) {
+ return;
+ }
+
+ final long currentCycle = this.flushCycles; // may be opaque read
+
+ if (currentCycle == lastCycle) {
+ Thread.yield();
+ continue;
+ }
+
+ // force response
+ this.parked.set(false);
+ LockSupport.unpark(this);
+
+ LockSupport.park("flushing queue executor thread");
+
+ // hasTasks() returns whether there are tasks queued, not whether there are tasks executing
+ // this is why we cycle twice through flush (we know a pollTask call is made after a flush cycle)
+ // we really only need to guarantee that the tasks this thread has queued have gone through, and can leave
+ // tasks queued concurrently that are unsynchronized with this thread as undefined behavior
+ if (this.queue.hasTasks()) {
+ successes = 0;
+ } else {
+ ++successes;
+ }
+
+ } while (successes != 2);
+
+ }
+
+ /**
+ * Closes this queue executor's queue and optionally waits for it to empty.
+ *
+ * If wait is {@code true}, then the queue will be empty by the time this call completes.
+ *
+ * This function is MT-Safe.
+ *
+ * @param wait If this call is to wait until the queue is empty
+ * @param killQueue Whether to shutdown this thread's queue
+ * @return whether this thread shut down the queue
+ */
+ public boolean close(final boolean wait, final boolean killQueue) {
+ boolean ret = killQueue && this.queue.shutdown();
+ this.closed = true;
+
+ // force thread to respond to the shutdown
+ this.parked.set(false);
+ LockSupport.unpark(this);
+
+ if (wait) {
+ this.flush();
+ }
+ return ret;
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
index 05bddc0697faa8d9d9955d89d76930c84ef7df0d..cbeaadaecf816070b3a37938c8e683180939afc4 100644
--- a/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
+++ b/src/main/java/io/papermc/paper/chunk/system/ChunkSystem.java
@@ -32,192 +32,41 @@ public final class ChunkSystem {
}
public static void scheduleChunkTask(final ServerLevel level, final int chunkX, final int chunkZ, final Runnable run, final PrioritisedExecutor.Priority priority) {
- level.chunkSource.mainThreadProcessor.execute(run);
+ level.chunkTaskScheduler.scheduleChunkTask(chunkX, chunkZ, run, priority); // Paper - rewrite chunk system
}
public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final boolean gen,
final ChunkStatus toStatus, final boolean addTicket, final PrioritisedExecutor.Priority priority,
final Consumer<ChunkAccess> onComplete) {
- if (gen) {
- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
- return;
- }
- scheduleChunkLoad(level, chunkX, chunkZ, ChunkStatus.EMPTY, addTicket, priority, (final ChunkAccess chunk) -> {
- if (chunk == null) {
- onComplete.accept(null);
- } else {
- if (chunk.getStatus().isOrAfter(toStatus)) {
- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
- } else {
- onComplete.accept(null);
- }
- }
- });
+ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, gen, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
}
- static final TicketType<Long> CHUNK_LOAD = TicketType.create("chunk_load", Long::compareTo);
-
- private static long chunkLoadCounter = 0L;
+ // Paper - rewrite chunk system
public static void scheduleChunkLoad(final ServerLevel level, final int chunkX, final int chunkZ, final ChunkStatus toStatus,
final boolean addTicket, final PrioritisedExecutor.Priority priority, final Consumer<ChunkAccess> onComplete) {
- if (!Bukkit.isPrimaryThread()) {
- scheduleChunkTask(level, chunkX, chunkZ, () -> {
- scheduleChunkLoad(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
- }, priority);
- return;
- }
-
- final int minLevel = 33 + ChunkStatus.getDistance(toStatus);
- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
-
- if (addTicket) {
- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
- }
- level.chunkSource.runDistanceManagerUpdates();
-
- final Consumer<ChunkAccess> loadCallback = (final ChunkAccess chunk) -> {
- try {
- if (onComplete != null) {
- onComplete.accept(chunk);
- }
- } catch (final ThreadDeath death) {
- throw death;
- } catch (final Throwable thr) {
- LOGGER.error("Exception handling chunk load callback", thr);
- SneakyThrow.sneaky(thr);
- } finally {
- if (addTicket) {
- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
- }
- }
- };
-
- final ChunkHolder holder = level.chunkSource.chunkMap.updatingChunkMap.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-
- if (holder == null || holder.getTicketLevel() > minLevel) {
- loadCallback.accept(null);
- return;
- }
-
- final CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> loadFuture = holder.getOrScheduleFuture(toStatus, level.chunkSource.chunkMap);
-
- if (loadFuture.isDone()) {
- loadCallback.accept(loadFuture.join().left().orElse(null));
- return;
- }
-
- loadFuture.whenCompleteAsync((final Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
- if (thr != null) {
- loadCallback.accept(null);
- return;
- }
- loadCallback.accept(either.left().orElse(null));
- }, (final Runnable r) -> {
- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
- });
+ level.chunkTaskScheduler.scheduleChunkLoad(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
}
public static void scheduleTickingState(final ServerLevel level, final int chunkX, final int chunkZ,
final FullChunkStatus toStatus, final boolean addTicket,
final PrioritisedExecutor.Priority priority, final Consumer<LevelChunk> onComplete) {
- // This method goes unused until the chunk system rewrite
- if (toStatus == FullChunkStatus.INACCESSIBLE) {
- throw new IllegalArgumentException("Cannot wait for INACCESSIBLE status");
- }
-
- if (!Bukkit.isPrimaryThread()) {
- scheduleChunkTask(level, chunkX, chunkZ, () -> {
- scheduleTickingState(level, chunkX, chunkZ, toStatus, addTicket, priority, onComplete);
- }, priority);
- return;
- }
-
- final int minLevel = 33 - (toStatus.ordinal() - 1);
- final int radius = toStatus.ordinal() - 1;
- final Long chunkReference = addTicket ? Long.valueOf(++chunkLoadCounter) : null;
- final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
-
- if (addTicket) {
- level.chunkSource.addTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
- }
- level.chunkSource.runDistanceManagerUpdates();
-
- final Consumer<LevelChunk> loadCallback = (final LevelChunk chunk) -> {
- try {
- if (onComplete != null) {
- onComplete.accept(chunk);
- }
- } catch (final ThreadDeath death) {
- throw death;
- } catch (final Throwable thr) {
- LOGGER.error("Exception handling chunk load callback", thr);
- SneakyThrow.sneaky(thr);
- } finally {
- if (addTicket) {
- level.chunkSource.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, minLevel, chunkPos);
- level.chunkSource.removeTicketAtLevel(CHUNK_LOAD, chunkPos, minLevel, chunkReference);
- }
- }
- };
-
- final ChunkHolder holder = level.chunkSource.chunkMap.updatingChunkMap.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
-
- if (holder == null || holder.getTicketLevel() > minLevel) {
- loadCallback.accept(null);
- return;
- }
-
- final CompletableFuture<Either<LevelChunk, ChunkHolder.ChunkLoadingFailure>> tickingState;
- switch (toStatus) {
- case FULL: {
- tickingState = holder.getFullChunkFuture();
- break;
- }
- case BLOCK_TICKING: {
- tickingState = holder.getTickingChunkFuture();
- break;
- }
- case ENTITY_TICKING: {
- tickingState = holder.getEntityTickingChunkFuture();
- break;
- }
- default: {
- throw new IllegalStateException("Cannot reach here");
- }
- }
-
- if (tickingState.isDone()) {
- loadCallback.accept(tickingState.join().left().orElse(null));
- return;
- }
-
- tickingState.whenCompleteAsync((final Either<LevelChunk, ChunkHolder.ChunkLoadingFailure> either, final Throwable thr) -> {
- if (thr != null) {
- loadCallback.accept(null);
- return;
- }
- loadCallback.accept(either.left().orElse(null));
- }, (final Runnable r) -> {
- scheduleChunkTask(level, chunkX, chunkZ, r, PrioritisedExecutor.Priority.HIGHEST);
- });
+ level.chunkTaskScheduler.scheduleTickingState(chunkX, chunkZ, toStatus, addTicket, priority, onComplete); // Paper - rewrite chunk system
}
public static List<ChunkHolder> getVisibleChunkHolders(final ServerLevel level) {
- return new ArrayList<>(level.chunkSource.chunkMap.visibleChunkMap.values());
+ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
}
public static List<ChunkHolder> getUpdatingChunkHolders(final ServerLevel level) {
- return new ArrayList<>(level.chunkSource.chunkMap.updatingChunkMap.values());
+ return level.chunkTaskScheduler.chunkHolderManager.getOldChunkHolders(); // Paper - rewrite chunk system
}
public static int getVisibleChunkHolderCount(final ServerLevel level) {
- return level.chunkSource.chunkMap.visibleChunkMap.size();
+ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
}
public static int getUpdatingChunkHolderCount(final ServerLevel level) {
- return level.chunkSource.chunkMap.updatingChunkMap.size();
+ return level.chunkTaskScheduler.chunkHolderManager.size(); // Paper - rewrite chunk system
}
public static boolean hasAnyChunkHolders(final ServerLevel level) {
@@ -244,26 +93,31 @@ public final class ChunkSystem {
public static void onChunkBorder(final LevelChunk chunk, final ChunkHolder holder) {
chunk.playerChunk = holder;
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.FULL;
}
public static void onChunkNotBorder(final LevelChunk chunk, final ChunkHolder holder) {
-
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.INACCESSIBLE;
}
public static void onChunkTicking(final LevelChunk chunk, final ChunkHolder holder) {
chunk.level.getChunkSource().tickingChunks.add(chunk);
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.BLOCK_TICKING;
}
public static void onChunkNotTicking(final LevelChunk chunk, final ChunkHolder holder) {
chunk.level.getChunkSource().tickingChunks.remove(chunk);
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.FULL;
}
public static void onChunkEntityTicking(final LevelChunk chunk, final ChunkHolder holder) {
chunk.level.getChunkSource().entityTickingChunks.add(chunk);
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.ENTITY_TICKING;
}
public static void onChunkNotEntityTicking(final LevelChunk chunk, final ChunkHolder holder) {
chunk.level.getChunkSource().entityTickingChunks.remove(chunk);
+ chunk.chunkStatus = net.minecraft.server.level.FullChunkStatus.BLOCK_TICKING;
}
public static ChunkHolder getUnloadingChunkHolder(final ServerLevel level, final int chunkX, final int chunkZ) {
@@ -271,23 +125,15 @@ public final class ChunkSystem {
}
public static int getSendViewDistance(final ServerPlayer player) {
- return getLoadViewDistance(player);
+ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPISendViewDistance(player);
}
public static int getLoadViewDistance(final ServerPlayer player) {
- final ServerLevel level = player.serverLevel();
- if (level == null) {
- return Bukkit.getViewDistance();
- }
- return level.chunkSource.chunkMap.getPlayerViewDistance(player);
+ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getLoadViewDistance(player);
}
public static int getTickViewDistance(final ServerPlayer player) {
- final ServerLevel level = player.serverLevel();
- if (level == null) {
- return Bukkit.getSimulationDistance();
- }
- return level.chunkSource.chunkMap.distanceManager.simulationDistance;
+ return io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader.getAPITickViewDistance(player);
}
private ChunkSystem() {
diff --git a/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
new file mode 100644
index 0000000000000000000000000000000000000000..1b090f1e79b996e52097afc49c1cec85936653e6
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/RegionizedPlayerChunkLoader.java
@@ -0,0 +1,1208 @@
+package io.papermc.paper.chunk.system;
+
+import ca.spottedleaf.concurrentutil.collection.SRSWLinkedQueue;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import io.papermc.paper.chunk.system.scheduling.ChunkHolderManager;
+import io.papermc.paper.configuration.GlobalConfiguration;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
+import io.papermc.paper.util.player.SingleUserAreaMap;
+import it.unimi.dsi.fastutil.longs.Long2ByteOpenHashMap;
+import it.unimi.dsi.fastutil.longs.LongArrayFIFOQueue;
+import it.unimi.dsi.fastutil.longs.LongArrayList;
+import it.unimi.dsi.fastutil.longs.LongComparator;
+import it.unimi.dsi.fastutil.longs.LongHeapPriorityQueue;
+import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
+import net.minecraft.network.protocol.Packet;
+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheCenterPacket;
+import net.minecraft.network.protocol.game.ClientboundSetChunkCacheRadiusPacket;
+import net.minecraft.network.protocol.game.ClientboundSetSimulationDistancePacket;
+import net.minecraft.server.level.ChunkTrackingView;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.server.level.ServerPlayer;
+import net.minecraft.server.level.TicketType;
+import net.minecraft.server.network.PlayerChunkSender;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.GameRules;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import net.minecraft.world.level.chunk.LevelChunk;
+import net.minecraft.world.level.levelgen.BelowZeroRetrogen;
+import org.bukkit.craftbukkit.entity.CraftPlayer;
+import org.bukkit.entity.Player;
+import java.lang.invoke.VarHandle;
+import java.util.ArrayDeque;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicLong;
+
+public class RegionizedPlayerChunkLoader {
+
+ public static final TicketType<Long> REGION_PLAYER_TICKET = TicketType.create("region_player_ticket", Long::compareTo);
+
+ public static final int MIN_VIEW_DISTANCE = 2;
+ public static final int MAX_VIEW_DISTANCE = 32;
+
+ public static final int TICK_TICKET_LEVEL = 31;
+ public static final int GENERATED_TICKET_LEVEL = 33 + ChunkStatus.getDistance(ChunkStatus.FULL);
+ public static final int LOADED_TICKET_LEVEL = 33 + ChunkStatus.getDistance(ChunkStatus.EMPTY);
+
+ public static final record ViewDistances(
+ int tickViewDistance,
+ int loadViewDistance,
+ int sendViewDistance
+ ) {
+ public ViewDistances setTickViewDistance(final int distance) {
+ return new ViewDistances(distance, this.loadViewDistance, this.sendViewDistance);
+ }
+
+ public ViewDistances setLoadViewDistance(final int distance) {
+ return new ViewDistances(this.tickViewDistance, distance, this.sendViewDistance);
+ }
+
+ public ViewDistances setSendViewDistance(final int distance) {
+ return new ViewDistances(this.tickViewDistance, this.loadViewDistance, distance);
+ }
+ }
+
+ public static int getAPITickViewDistance(final Player player) {
+ return getAPITickViewDistance(((CraftPlayer)player).getHandle());
+ }
+
+ public static int getAPITickViewDistance(final ServerPlayer player) {
+ final ServerLevel level = (ServerLevel)player.level();
+ final PlayerChunkLoaderData data = player.chunkLoader;
+ if (data == null) {
+ return level.playerChunkLoader.getAPITickDistance();
+ }
+ return data.lastTickDistance;
+ }
+
+ public static int getAPIViewDistance(final Player player) {
+ return getAPIViewDistance(((CraftPlayer)player).getHandle());
+ }
+
+ public static int getAPIViewDistance(final ServerPlayer player) {
+ final ServerLevel level = (ServerLevel)player.level();
+ final PlayerChunkLoaderData data = player.chunkLoader;
+ if (data == null) {
+ return level.playerChunkLoader.getAPIViewDistance();
+ }
+ // load distance = view distance + 1
+ return data.lastLoadDistance - 1;
+ }
+
+ public static int getLoadViewDistance(final ServerPlayer player) {
+ final ServerLevel level = (ServerLevel)player.level();
+ final PlayerChunkLoaderData data = player.chunkLoader;
+ if (data == null) {
+ return level.playerChunkLoader.getAPIViewDistance();
+ }
+ // load distance = view distance + 1
+ return data.lastLoadDistance - 1;
+ }
+
+ public static int getAPISendViewDistance(final Player player) {
+ return getAPISendViewDistance(((CraftPlayer)player).getHandle());
+ }
+
+ public static int getAPISendViewDistance(final ServerPlayer player) {
+ final ServerLevel level = (ServerLevel)player.level();
+ final PlayerChunkLoaderData data = player.chunkLoader;
+ if (data == null) {
+ return level.playerChunkLoader.getAPISendViewDistance();
+ }
+ return data.lastSendDistance;
+ }
+
+ private final ServerLevel world;
+
+ public RegionizedPlayerChunkLoader(final ServerLevel world) {
+ this.world = world;
+ }
+
+ public void addPlayer(final ServerPlayer player) {
+ TickThread.ensureTickThread(player, "Cannot add player to player chunk loader async");
+ if (!player.isRealPlayer) {
+ return;
+ }
+
+ if (player.chunkLoader != null) {
+ throw new IllegalStateException("Player is already added to player chunk loader");
+ }
+
+ final PlayerChunkLoaderData loader = new PlayerChunkLoaderData(this.world, player);
+
+ player.chunkLoader = loader;
+ loader.add();
+ }
+
+ public void updatePlayer(final ServerPlayer player) {
+ final PlayerChunkLoaderData loader = player.chunkLoader;
+ if (loader != null) {
+ loader.update();
+ }
+ }
+
+ public void removePlayer(final ServerPlayer player) {
+ TickThread.ensureTickThread(player, "Cannot remove player from player chunk loader async");
+ if (!player.isRealPlayer) {
+ return;
+ }
+
+ final PlayerChunkLoaderData loader = player.chunkLoader;
+
+ if (loader == null) {
+ return;
+ }
+
+ loader.remove();
+ player.chunkLoader = null;
+ }
+
+ public void setSendDistance(final int distance) {
+ this.world.setSendViewDistance(distance);
+ }
+
+ public void setLoadDistance(final int distance) {
+ this.world.setLoadViewDistance(distance);
+ }
+
+ public void setTickDistance(final int distance) {
+ this.world.setTickViewDistance(distance);
+ }
+
+ // Note: follow the player chunk loader so everything stays consistent...
+ public int getAPITickDistance() {
+ final ViewDistances distances = this.world.getViewDistances();
+ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
+ return tickViewDistance;
+ }
+
+ public int getAPIViewDistance() {
+ final ViewDistances distances = this.world.getViewDistances();
+ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
+ final int loadDistance = PlayerChunkLoaderData.getLoadViewDistance(tickViewDistance, -1, distances.loadViewDistance);
+
+ // loadDistance = api view distance + 1
+ return loadDistance - 1;
+ }
+
+ public int getAPISendViewDistance() {
+ final ViewDistances distances = this.world.getViewDistances();
+ final int tickViewDistance = PlayerChunkLoaderData.getTickDistance(-1, distances.tickViewDistance);
+ final int loadDistance = PlayerChunkLoaderData.getLoadViewDistance(tickViewDistance, -1, distances.loadViewDistance);
+ final int sendViewDistance = PlayerChunkLoaderData.getSendViewDistance(
+ loadDistance, -1, -1, distances.sendViewDistance
+ );
+
+ return sendViewDistance;
+ }
+
+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ, final boolean borderOnly) {
+ return borderOnly ? this.isChunkSentBorderOnly(player, chunkX, chunkZ) : this.isChunkSent(player, chunkX, chunkZ);
+ }
+
+ public boolean isChunkSent(final ServerPlayer player, final int chunkX, final int chunkZ) {
+ final PlayerChunkLoaderData loader = player.chunkLoader;
+ if (loader == null) {
+ return false;
+ }
+
+ return loader.sentChunks.contains(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+
+ public boolean isChunkSentBorderOnly(final ServerPlayer player, final int chunkX, final int chunkZ) {
+ final PlayerChunkLoaderData loader = player.chunkLoader;
+ if (loader == null) {
+ return false;
+ }
+
+ for (int dz = -1; dz <= 1; ++dz) {
+ for (int dx = -1; dx <= 1; ++dx) {
+ if (!loader.sentChunks.contains(CoordinateUtils.getChunkKey(dx + chunkX, dz + chunkZ))) {
+ return true;
+ }
+ }
+ }
+
+ return false;
+ }
+
+ public void tick() {
+ TickThread.ensureTickThread("Cannot tick player chunk loader async");
+ long currTime = System.nanoTime();
+ for (final ServerPlayer player : new java.util.ArrayList<>(this.world.players())) {
+ final PlayerChunkLoaderData loader = player.chunkLoader;
+ if (loader == null || loader.world != this.world) {
+ // not our problem anymore
+ continue;
+ }
+ loader.update(); // can't invoke plugin logic
+ loader.updateQueues(currTime);
+ }
+ }
+
+ private static long[] generateBFSOrder(final int radius) {
+ final LongArrayList chunks = new LongArrayList();
+ final LongArrayFIFOQueue queue = new LongArrayFIFOQueue();
+ final LongOpenHashSet seen = new LongOpenHashSet();
+
+ seen.add(CoordinateUtils.getChunkKey(0, 0));
+ queue.enqueue(CoordinateUtils.getChunkKey(0, 0));
+ while (!queue.isEmpty()) {
+ final long chunk = queue.dequeueLong();
+ final int chunkX = CoordinateUtils.getChunkX(chunk);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
+
+ // important that the addition to the list is here, rather than when enqueueing neighbours
+ // ensures the order is actually kept
+ chunks.add(chunk);
+
+ // -x
+ final long n1 = CoordinateUtils.getChunkKey(chunkX - 1, chunkZ);
+ // -z
+ final long n2 = CoordinateUtils.getChunkKey(chunkX, chunkZ - 1);
+ // +x
+ final long n3 = CoordinateUtils.getChunkKey(chunkX + 1, chunkZ);
+ // +z
+ final long n4 = CoordinateUtils.getChunkKey(chunkX, chunkZ + 1);
+
+ final long[] list = new long[] {n1, n2, n3, n4};
+
+ for (final long neighbour : list) {
+ final int neighbourX = CoordinateUtils.getChunkX(neighbour);
+ final int neighbourZ = CoordinateUtils.getChunkZ(neighbour);
+ if (Math.max(Math.abs(neighbourX), Math.abs(neighbourZ)) > radius) {
+ // don't enqueue out of range
+ continue;
+ }
+ if (!seen.add(neighbour)) {
+ continue;
+ }
+ queue.enqueue(neighbour);
+ }
+ }
+
+ // to increase generation parallelism, we want to space the chunks out so that they are not nearby when generating
+ // this also means we are minimising locality
+ // but, we need to maintain sorted order by manhattan distance
+
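+ // illustrative example: the chunks at manhattan distance 2 are (+-2, 0), (0, +-2)
+ // and (+-1, +-1). Instead of emitting them in BFS ring order (which keeps
+ // neighbours adjacent), the greedy pass below repeatedly selects the chunk whose
+ // minimum Chebyshev distance to the already-selected set is largest - e.g. after
+ // (1, 1) it would prefer (-2, 0) or (0, -2) (distance 3) over (2, 0) (distance 1)
+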
+ // first, build a map of manhattan distance -> chunks
+ final java.util.List<LongArrayList> byDistance = new java.util.ArrayList<>();
+ for (final it.unimi.dsi.fastutil.longs.LongIterator iterator = chunks.iterator(); iterator.hasNext();) {
+ final long chunkKey = iterator.nextLong();
+
+ final int chunkX = CoordinateUtils.getChunkX(chunkKey);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunkKey);
+
+ final int dist = Math.abs(chunkX) + Math.abs(chunkZ);
+ if (dist == byDistance.size()) {
+ final LongArrayList list = new LongArrayList();
+ list.add(chunkKey);
+ byDistance.add(list);
+ continue;
+ }
+
+ byDistance.get(dist).add(chunkKey);
+ }
+
+ // per distance we transform the chunk list so that each element is maximally spaced out from each other
+ for (int i = 0, len = byDistance.size(); i < len; ++i) {
+ final LongArrayList notAdded = byDistance.get(i).clone();
+ final LongArrayList added = new LongArrayList();
+
+ while (!notAdded.isEmpty()) {
+ if (added.isEmpty()) {
+ added.add(notAdded.removeLong(notAdded.size() - 1));
+ continue;
+ }
+
+ long maxChunk = -1L;
+ int maxDist = 0;
+
+ // select the chunk from the not yet added set that has the largest minimum distance from
+ // the current set of added chunks
+
+ for (final it.unimi.dsi.fastutil.longs.LongIterator iterator = notAdded.iterator(); iterator.hasNext();) {
+ final long chunkKey = iterator.nextLong();
+ final int chunkX = CoordinateUtils.getChunkX(chunkKey);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunkKey);
+
+ int minDist = Integer.MAX_VALUE;
+
+ for (final it.unimi.dsi.fastutil.longs.LongIterator iterator2 = added.iterator(); iterator2.hasNext();) {
+ final long addedKey = iterator2.nextLong();
+ final int addedX = CoordinateUtils.getChunkX(addedKey);
+ final int addedZ = CoordinateUtils.getChunkZ(addedKey);
+
+ // here we use square distance because chunk generation uses neighbours in a square radius
+ final int dist = Math.max(Math.abs(addedX - chunkX), Math.abs(addedZ - chunkZ));
+
+ if (dist < minDist) {
+ minDist = dist;
+ }
+ }
+
+ if (minDist > maxDist) {
+ maxDist = minDist;
+ maxChunk = chunkKey;
+ }
+ }
+
+ // move the selected chunk from the not added set to the added set
+
+ if (!notAdded.rem(maxChunk)) {
+ throw new IllegalStateException();
+ }
+
+ added.add(maxChunk);
+ }
+
+ byDistance.set(i, added);
+ }
+
+ // now, rebuild the list so that it still maintains manhattan distance order
+ final LongArrayList ret = new LongArrayList(chunks.size());
+
+ for (final LongArrayList dist : byDistance) {
+ ret.addAll(dist);
+ }
+
+ return ret.toLongArray();
+ }
+
+ public static final class PlayerChunkLoaderData {
+
+ private static final AtomicLong ID_GENERATOR = new AtomicLong();
+ private final long id = ID_GENERATOR.incrementAndGet();
+ private final Long idBoxed = Long.valueOf(this.id);
+
+ // expected that this array contains, for a given radius index, the set of chunks
+ // ordered by manhattan distance
+ private static final long[][] SEARCH_RADIUS_ITERATION_LIST = new long[65][];
+ static {
+ for (int i = 0; i < SEARCH_RADIUS_ITERATION_LIST.length; ++i) {
+ // a BFS around -x, -z, +x, +z will give increasing manhattan distance
+ SEARCH_RADIUS_ITERATION_LIST[i] = generateBFSOrder(i);
+ }
+ }
+
+ private static final long MAX_RATE = 10_000L;
+
+ private final ServerPlayer player;
+ private final ServerLevel world;
+
+ private int lastChunkX = Integer.MIN_VALUE;
+ private int lastChunkZ = Integer.MIN_VALUE;
+
+ private int lastSendDistance = Integer.MIN_VALUE;
+ private int lastLoadDistance = Integer.MIN_VALUE;
+ private int lastTickDistance = Integer.MIN_VALUE;
+
+ private int lastSentChunkCenterX = Integer.MIN_VALUE;
+ private int lastSentChunkCenterZ = Integer.MIN_VALUE;
+
+ private int lastSentChunkRadius = Integer.MIN_VALUE;
+ private int lastSentSimulationDistance = Integer.MIN_VALUE;
+
+ private boolean canGenerateChunks = true;
+
+ private final ArrayDeque<ChunkHolderManager.TicketOperation<?, ?>> delayedTicketOps = new ArrayDeque<>();
+ private final LongOpenHashSet sentChunks = new LongOpenHashSet();
+
+ private static final byte CHUNK_TICKET_STAGE_NONE = 0;
+ private static final byte CHUNK_TICKET_STAGE_LOADING = 1;
+ private static final byte CHUNK_TICKET_STAGE_LOADED = 2;
+ private static final byte CHUNK_TICKET_STAGE_GENERATING = 3;
+ private static final byte CHUNK_TICKET_STAGE_GENERATED = 4;
+ private static final byte CHUNK_TICKET_STAGE_TICK = 5;
+ private static final int[] TICKET_STAGE_TO_LEVEL = new int[] {
+ ChunkHolderManager.MAX_TICKET_LEVEL + 1,
+ LOADED_TICKET_LEVEL,
+ LOADED_TICKET_LEVEL,
+ GENERATED_TICKET_LEVEL,
+ GENERATED_TICKET_LEVEL,
+ TICK_TICKET_LEVEL
+ };
+ private final Long2ByteOpenHashMap chunkTicketStage = new Long2ByteOpenHashMap();
+ {
+ this.chunkTicketStage.defaultReturnValue(CHUNK_TICKET_STAGE_NONE);
+ }
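+
+ // Ticket stage lifecycle (a summary of the transitions implemented below; forward
+ // transitions are driven by updateQueues(), backward transitions by the area map
+ // remove callbacks):
+ // NONE -> LOADING -> LOADED -> GENERATING -> GENERATED <-> TICK
+ // each stage maps to the ticket level in TICKET_STAGE_TO_LEVEL above, where NONE
+ // maps to MAX_TICKET_LEVEL + 1, i.e. no ticket at all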
+
+ // rate limiting
+ private final AllocatingRateLimiter chunkSendLimiter = new AllocatingRateLimiter();
+ private final AllocatingRateLimiter chunkLoadTicketLimiter = new AllocatingRateLimiter();
+ private final AllocatingRateLimiter chunkGenerateTicketLimiter = new AllocatingRateLimiter();
+
+ // queues
+ private final LongComparator CLOSEST_MANHATTAN_DIST = (final long c1, final long c2) -> {
+ final int c1x = CoordinateUtils.getChunkX(c1);
+ final int c1z = CoordinateUtils.getChunkZ(c1);
+
+ final int c2x = CoordinateUtils.getChunkX(c2);
+ final int c2z = CoordinateUtils.getChunkZ(c2);
+
+ final int centerX = PlayerChunkLoaderData.this.lastChunkX;
+ final int centerZ = PlayerChunkLoaderData.this.lastChunkZ;
+
+ return Integer.compare(
+ Math.abs(c1x - centerX) + Math.abs(c1z - centerZ),
+ Math.abs(c2x - centerX) + Math.abs(c2z - centerZ)
+ );
+ };
+ private final LongHeapPriorityQueue sendQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
+ private final LongHeapPriorityQueue tickingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
+ private final LongHeapPriorityQueue generatingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
+ private final LongHeapPriorityQueue genQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
+ private final LongHeapPriorityQueue loadingQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
+ private final LongHeapPriorityQueue loadQueue = new LongHeapPriorityQueue(CLOSEST_MANHATTAN_DIST);
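+
+ // The six queues above form a pipeline, each ordered by closest manhattan distance
+ // to the player: loadQueue -> loadingQueue (ticket added, waiting on load) ->
+ // genQueue -> generatingQueue (ticket upgraded, waiting on generation) ->
+ // sendQueue and/or tickingQueue. updateQueues() drains the in-progress queues by
+ // polling chunk state and refills them subject to the rate limiters above.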
+
+ private volatile boolean removed;
+
+ public PlayerChunkLoaderData(final ServerLevel world, final ServerPlayer player) {
+ this.world = world;
+ this.player = player;
+ }
+
+ private void flushDelayedTicketOps() {
+ if (this.delayedTicketOps.isEmpty()) {
+ return;
+ }
+ this.world.chunkTaskScheduler.chunkHolderManager.performTicketUpdates(this.delayedTicketOps);
+ this.delayedTicketOps.clear();
+ }
+
+ private void pushDelayedTicketOp(final ChunkHolderManager.TicketOperation<?, ?> op) {
+ this.delayedTicketOps.addLast(op);
+ }
+
+ private void sendChunk(final int chunkX, final int chunkZ) {
+ if (this.sentChunks.add(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
+ PlayerChunkSender.sendChunk(this.player.connection, this.world, this.world.getChunkIfLoaded(chunkX, chunkZ));
+ return;
+ }
+ throw new IllegalStateException();
+ }
+
+ private void sendUnloadChunk(final int chunkX, final int chunkZ) {
+ if (!this.sentChunks.remove(CoordinateUtils.getChunkKey(chunkX, chunkZ))) {
+ return;
+ }
+ this.sendUnloadChunkRaw(chunkX, chunkZ);
+ }
+
+ private void sendUnloadChunkRaw(final int chunkX, final int chunkZ) {
+ PlayerChunkSender.dropChunkStatic(this.player, new ChunkPos(chunkX, chunkZ));
+ }
+
+ private final SingleUserAreaMap<PlayerChunkLoaderData> broadcastMap = new SingleUserAreaMap<>(this) {
+ @Override
+ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ // do nothing, we only care about remove
+ }
+
+ @Override
+ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ parameter.sendUnloadChunk(chunkX, chunkZ);
+ }
+ };
+ private final SingleUserAreaMap<PlayerChunkLoaderData> loadTicketCleanup = new SingleUserAreaMap<>(this) {
+ @Override
+ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ // do nothing, we only care about remove
+ }
+
+ @Override
+ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+ final byte ticketStage = parameter.chunkTicketStage.remove(chunk);
+ final int level = TICKET_STAGE_TO_LEVEL[ticketStage];
+ if (level > ChunkHolderManager.MAX_TICKET_LEVEL) {
+ return;
+ }
+
+ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addAndRemove(
+ chunk,
+ TicketType.UNKNOWN, level, new ChunkPos(chunkX, chunkZ),
+ REGION_PLAYER_TICKET, level, parameter.idBoxed
+ ));
+ }
+ };
+ private final SingleUserAreaMap<PlayerChunkLoaderData> tickMap = new SingleUserAreaMap<>(this) {
+ @Override
+ protected void addCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ // do nothing, we will detect ticking chunks when we try to load them
+ }
+
+ @Override
+ protected void removeCallback(final PlayerChunkLoaderData parameter, final int chunkX, final int chunkZ) {
+ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+ // note: by the time this is called, the tick cleanup should have run - so, if the chunk is at
+ // the tick stage, it was deemed in range for loading. Thus, we need to move it to generated
+ if (!parameter.chunkTicketStage.replace(chunk, CHUNK_TICKET_STAGE_TICK, CHUNK_TICKET_STAGE_GENERATED)) {
+ return;
+ }
+
+ // Since we are possibly downgrading the ticket level, we add an unknown ticket so that
+ // the level is kept until tick().
+ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addAndRemove(
+ chunk,
+ TicketType.UNKNOWN, TICK_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
+ REGION_PLAYER_TICKET, TICK_TICKET_LEVEL, parameter.idBoxed
+ ));
+ // keep chunk at new generated level
+ parameter.pushDelayedTicketOp(ChunkHolderManager.TicketOperation.addOp(
+ chunk,
+ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, parameter.idBoxed
+ ));
+ }
+ };
+
+ private static boolean wantChunkLoaded(final int centerX, final int centerZ, final int chunkX, final int chunkZ,
+ final int sendRadius) {
+ // expect sendRadius to be = 1 + target viewable radius
+ return ChunkTrackingView.isWithinDistance(centerX, centerZ, sendRadius, chunkX, chunkZ, true);
+ }
+
+ private static int getClientViewDistance(final ServerPlayer player) {
+ final Integer vd = player.requestedViewDistance();
+ return vd == null ? -1 : Math.max(0, vd.intValue());
+ }
+
+ private static int getTickDistance(final int playerTickViewDistance, final int worldTickViewDistance) {
+ return playerTickViewDistance < 0 ? worldTickViewDistance : playerTickViewDistance;
+ }
+
+ private static int getLoadViewDistance(final int tickViewDistance, final int playerLoadViewDistance,
+ final int worldLoadViewDistance) {
+ return Math.max(tickViewDistance + 1, playerLoadViewDistance < 0 ? worldLoadViewDistance : playerLoadViewDistance);
+ }
+
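+ // Resolution order for the send distance (a readability note for the ternary below,
+ // which is behaviour-identical): a per-player override wins; otherwise, when
+ // send-distance auto-config is enabled and the client reported a view distance,
+ // client distance + 1 is used; otherwise the world default, falling back to
+ // loadViewDistance - 1 when the world default is unset. The result is always
+ // clamped to at most loadViewDistance - 1.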
+ private static int getSendViewDistance(final int loadViewDistance, final int clientViewDistance,
+ final int playerSendViewDistance, final int worldSendViewDistance) {
+ return Math.min(
+ loadViewDistance - 1,
+ playerSendViewDistance < 0 ? (!GlobalConfiguration.get().chunkLoadingAdvanced.autoConfigSendDistance || clientViewDistance < 0 ? (worldSendViewDistance < 0 ? (loadViewDistance - 1) : worldSendViewDistance) : clientViewDistance + 1) : playerSendViewDistance
+ );
+ }
+
+ private Packet<?> updateClientChunkRadius(final int radius) {
+ this.lastSentChunkRadius = radius;
+ return new ClientboundSetChunkCacheRadiusPacket(radius);
+ }
+
+ private Packet<?> updateClientSimulationDistance(final int distance) {
+ this.lastSentSimulationDistance = distance;
+ return new ClientboundSetSimulationDistancePacket(distance);
+ }
+
+ private Packet<?> updateClientChunkCenter(final int chunkX, final int chunkZ) {
+ this.lastSentChunkCenterX = chunkX;
+ this.lastSentChunkCenterZ = chunkZ;
+ return new ClientboundSetChunkCacheCenterPacket(chunkX, chunkZ);
+ }
+
+ private boolean canPlayerGenerateChunks() {
+ return !this.player.isSpectator() || this.world.getGameRules().getBoolean(GameRules.RULE_SPECTATORSGENERATECHUNKS);
+ }
+
+ private double getMaxChunkLoadRate() {
+ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkLoadRate;
+
+ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
+ }
+
+ private double getMaxChunkGenRate() {
+ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkGenerateRate;
+
+ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
+ }
+
+ private double getMaxChunkSendRate() {
+ final double configRate = GlobalConfiguration.get().chunkLoadingBasic.playerMaxChunkSendRate;
+
+ return configRate < 0.0 || configRate > (double)MAX_RATE ? (double)MAX_RATE : Math.max(1.0, configRate);
+ }
+
+ private long getMaxChunkLoads() {
+ final long radiusChunks = (2L * this.lastLoadDistance + 1L) * (2L * this.lastLoadDistance + 1L);
+ long configLimit = GlobalConfiguration.get().chunkLoadingAdvanced.playerMaxConcurrentChunkLoads;
+ if (configLimit == 0L) {
+ // by default, only allow 1/5th of the chunks in the view distance to be concurrently active
+ configLimit = Math.max(5L, radiusChunks / 5L);
+ } else if (configLimit < 0L) {
+ configLimit = Integer.MAX_VALUE;
+ } // else: use the value configured
+ configLimit = configLimit - this.loadingQueue.size();
+
+ return configLimit;
+ }
+
+ private long getMaxChunkGenerates() {
+ final long radiusChunks = (2L * this.lastLoadDistance + 1L) * (2L * this.lastLoadDistance + 1L);
+ long configLimit = GlobalConfiguration.get().chunkLoadingAdvanced.playerMaxConcurrentChunkGenerates;
+ if (configLimit == 0L) {
+ // by default, only allow 1/5th of the chunks in the view distance to be concurrently active
+ configLimit = Math.max(5L, radiusChunks / 5L);
+ } else if (configLimit < 0L) {
+ configLimit = Integer.MAX_VALUE;
+ } // else: use the value configured
+ configLimit = configLimit - this.generatingQueue.size();
+
+ return configLimit;
+ }
+
+ private boolean wantChunkSent(final int chunkX, final int chunkZ) {
+ final int dx = this.lastChunkX - chunkX;
+ final int dz = this.lastChunkZ - chunkZ;
+ return (Math.max(Math.abs(dx), Math.abs(dz)) <= (this.lastSendDistance + 1)) && wantChunkLoaded(
+ this.lastChunkX, this.lastChunkZ, chunkX, chunkZ, this.lastSendDistance
+ );
+ }
+
+ private boolean wantChunkTicked(final int chunkX, final int chunkZ) {
+ final int dx = this.lastChunkX - chunkX;
+ final int dz = this.lastChunkZ - chunkZ;
+ return Math.max(Math.abs(dx), Math.abs(dz)) <= this.lastTickDistance;
+ }
+
+ void updateQueues(final long time) {
+ TickThread.ensureTickThread(this.player, "Cannot tick player chunk loader async");
+ if (this.removed) {
+ throw new IllegalStateException("Ticking removed player chunk loader");
+ }
+ // update rate limits
+ final double loadRate = this.getMaxChunkLoadRate();
+ final double genRate = this.getMaxChunkGenRate();
+ final double sendRate = this.getMaxChunkSendRate();
+
+ this.chunkLoadTicketLimiter.tickAllocation(time, loadRate, loadRate);
+ this.chunkGenerateTicketLimiter.tickAllocation(time, genRate, genRate);
+ this.chunkSendLimiter.tickAllocation(time, sendRate, sendRate);
+
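+ // note: the limiters follow an allocate-then-take pattern (assuming AllocatingRateLimiter
+ // semantics): the tickAllocation() calls above accrue budget for the elapsed time at the
+ // configured rate, and the takeAllocation() calls below consume from that budget, so
+ // unused budget can carry over between ticks
+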
+ // try to progress chunk loads
+ while (!this.loadingQueue.isEmpty()) {
+ final long pendingLoadChunk = this.loadingQueue.firstLong();
+ final int pendingChunkX = CoordinateUtils.getChunkX(pendingLoadChunk);
+ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingLoadChunk);
+ final ChunkAccess pending = this.world.chunkSource.getChunkAtImmediately(pendingChunkX, pendingChunkZ);
+ if (pending == null) {
+ // nothing to do here
+ break;
+ }
+ // chunk has loaded, so we can take it out of the queue
+ this.loadingQueue.dequeueLong();
+
+ // try to move to generate queue
+ final byte prev = this.chunkTicketStage.put(pendingLoadChunk, CHUNK_TICKET_STAGE_LOADED);
+ if (prev != CHUNK_TICKET_STAGE_LOADING) {
+ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_LOADING + ", not " + prev);
+ }
+
+ if (this.canGenerateChunks || this.isLoadedChunkGeneratable(pending)) {
+ this.genQueue.enqueue(pendingLoadChunk);
+ } // else: don't want to generate, so just leave it loaded
+ }
+
+ // try to push more chunk loads
+ final long maxLoads = Math.max(0L, Math.min(MAX_RATE, Math.min(this.loadQueue.size(), this.getMaxChunkLoads())));
+ final int maxLoadsThisTick = (int)this.chunkLoadTicketLimiter.takeAllocation(time, loadRate, maxLoads);
+ if (maxLoadsThisTick > 0) {
+ final LongArrayList chunks = new LongArrayList(maxLoadsThisTick);
+ for (int i = 0; i < maxLoadsThisTick; ++i) {
+ final long chunk = this.loadQueue.dequeueLong();
+ final byte prev = this.chunkTicketStage.put(chunk, CHUNK_TICKET_STAGE_LOADING);
+ if (prev != CHUNK_TICKET_STAGE_NONE) {
+ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_NONE + ", not " + prev);
+ }
+ this.pushDelayedTicketOp(
+ ChunkHolderManager.TicketOperation.addOp(
+ chunk,
+ REGION_PLAYER_TICKET, LOADED_TICKET_LEVEL, this.idBoxed
+ )
+ );
+ chunks.add(chunk);
+ this.loadingQueue.enqueue(chunk);
+ }
+
+ // here we need to flush tickets, as scheduleChunkLoad requires tickets to be propagated with addTicket = false
+ this.flushDelayedTicketOps();
+ // we need to call scheduleChunkLoad here because the loaded ticket level alone does not start the
+ // chunk load - only generate-level tickets start work on their own, and those would start generation
+ // propagate levels
+ // Note: this CAN call plugin logic, so it is VITAL that our bookkeeping logic is completely done by the time this is invoked
+ this.world.chunkTaskScheduler.chunkHolderManager.processTicketUpdates();
+
+ if (this.removed) {
+ // process ticket updates may invoke plugin logic, which may remove this player
+ return;
+ }
+
+ for (int i = 0; i < maxLoadsThisTick; ++i) {
+ final long queuedLoadChunk = chunks.getLong(i);
+ final int queuedChunkX = CoordinateUtils.getChunkX(queuedLoadChunk);
+ final int queuedChunkZ = CoordinateUtils.getChunkZ(queuedLoadChunk);
+ this.world.chunkTaskScheduler.scheduleChunkLoad(
+ queuedChunkX, queuedChunkZ, ChunkStatus.EMPTY, false, PrioritisedExecutor.Priority.NORMAL, null
+ );
+ if (this.removed) {
+ return;
+ }
+ }
+ }
+
+ // try to progress chunk generations
+ while (!this.generatingQueue.isEmpty()) {
+ final long pendingGenChunk = this.generatingQueue.firstLong();
+ final int pendingChunkX = CoordinateUtils.getChunkX(pendingGenChunk);
+ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingGenChunk);
+ final LevelChunk pending = this.world.chunkSource.getChunkAtIfLoadedMainThreadNoCache(pendingChunkX, pendingChunkZ);
+ if (pending == null) {
+ // nothing to do here
+ break;
+ }
+
+ // chunk has generated, so we can take it out of the queue
+ this.generatingQueue.dequeueLong();
+
+ final byte prev = this.chunkTicketStage.put(pendingGenChunk, CHUNK_TICKET_STAGE_GENERATED);
+ if (prev != CHUNK_TICKET_STAGE_GENERATING) {
+ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_GENERATING + ", not " + prev);
+ }
+
+ // try to move to send queue
+ if (this.wantChunkSent(pendingChunkX, pendingChunkZ)) {
+ this.sendQueue.enqueue(pendingGenChunk);
+ }
+ // try to move to tick queue
+ if (this.wantChunkTicked(pendingChunkX, pendingChunkZ)) {
+ this.tickingQueue.enqueue(pendingGenChunk);
+ }
+ }
+
+ // try to push more chunk generations
+ final long maxGens = Math.max(0L, Math.min(MAX_RATE, Math.min(this.genQueue.size(), this.getMaxChunkGenerates())));
+ final int maxGensThisTick = (int)this.chunkGenerateTicketLimiter.takeAllocation(time, genRate, maxGens);
+ int ratedGensThisTick = 0;
+ while (!this.genQueue.isEmpty()) {
+ final long chunkKey = this.genQueue.firstLong();
+ final int chunkX = CoordinateUtils.getChunkX(chunkKey);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunkKey);
+ final ChunkAccess chunk = this.world.chunkSource.getChunkAtImmediately(chunkX, chunkZ);
+ if (chunk.getStatus() != ChunkStatus.FULL) {
+ // only rate limit actual generations
+ if ((ratedGensThisTick + 1) > maxGensThisTick) {
+ break;
+ }
+ ++ratedGensThisTick;
+ }
+
+ this.genQueue.dequeueLong();
+
+ final byte prev = this.chunkTicketStage.put(chunkKey, CHUNK_TICKET_STAGE_GENERATING);
+ if (prev != CHUNK_TICKET_STAGE_LOADED) {
+ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_LOADED + ", not " + prev);
+ }
+ this.pushDelayedTicketOp(
+ ChunkHolderManager.TicketOperation.addAndRemove(
+ chunkKey,
+ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, this.idBoxed,
+ REGION_PLAYER_TICKET, LOADED_TICKET_LEVEL, this.idBoxed
+ )
+ );
+ this.generatingQueue.enqueue(chunkKey);
+ }
+
+ // try to pull ticking chunks
+ tick_check_outer:
+ while (!this.tickingQueue.isEmpty()) {
+ final long pendingTicking = this.tickingQueue.firstLong();
+ final int pendingChunkX = CoordinateUtils.getChunkX(pendingTicking);
+ final int pendingChunkZ = CoordinateUtils.getChunkZ(pendingTicking);
+
+ final int tickingReq = 2;
+ for (int dz = -tickingReq; dz <= tickingReq; ++dz) {
+ for (int dx = -tickingReq; dx <= tickingReq; ++dx) {
+ if ((dx | dz) == 0) {
+ continue;
+ }
+ final long neighbour = CoordinateUtils.getChunkKey(dx + pendingChunkX, dz + pendingChunkZ);
+ final byte stage = this.chunkTicketStage.get(neighbour);
+ if (stage != CHUNK_TICKET_STAGE_GENERATED && stage != CHUNK_TICKET_STAGE_TICK) {
+ break tick_check_outer;
+ }
+ }
+ }
+ // only gets here if all neighbours were marked as generated or ticking themselves
+ this.tickingQueue.dequeueLong();
+ this.pushDelayedTicketOp(
+ ChunkHolderManager.TicketOperation.addAndRemove(
+ pendingTicking,
+ REGION_PLAYER_TICKET, TICK_TICKET_LEVEL, this.idBoxed,
+ REGION_PLAYER_TICKET, GENERATED_TICKET_LEVEL, this.idBoxed
+ )
+ );
+ // there is no queue to add after ticking
+ final byte prev = this.chunkTicketStage.put(pendingTicking, CHUNK_TICKET_STAGE_TICK);
+ if (prev != CHUNK_TICKET_STAGE_GENERATED) {
+ throw new IllegalStateException("Previous state should be " + CHUNK_TICKET_STAGE_GENERATED + ", not " + prev);
+ }
+ }
+
+ // try to pull sending chunks
+ final long maxSends = Math.max(0L, Math.min(MAX_RATE, Integer.MAX_VALUE)); // no logic to track concurrent sends
+ final int maxSendsThisTick = Math.min((int)this.chunkSendLimiter.takeAllocation(time, sendRate, maxSends), this.sendQueue.size());
+ // we do not return sends that we took from the allocation back because we want to limit the max send rate, not target it
+ for (int i = 0; i < maxSendsThisTick; ++i) {
+ final long pendingSend = this.sendQueue.firstLong();
+ final int pendingSendX = CoordinateUtils.getChunkX(pendingSend);
+ final int pendingSendZ = CoordinateUtils.getChunkZ(pendingSend);
+ final LevelChunk chunk = this.world.chunkSource.getChunkAtIfLoadedMainThreadNoCache(pendingSendX, pendingSendZ);
+ if (!chunk.areNeighboursLoaded(1) || !TickThread.isTickThreadFor(this.world, pendingSendX, pendingSendZ)) {
+ // nothing to do
+ // the target chunk may not be owned by this region, but this should be resolved in the future
+ break;
+ }
+ if (!chunk.isPostProcessingDone) {
+ // not yet post-processed, need to do this so that tile entities can properly be sent to clients
+ chunk.postProcessGeneration();
+ // check if there was any recursive action
+ if (this.removed || this.sendQueue.isEmpty() || this.sendQueue.firstLong() != pendingSend) {
+ return;
+ } // else: good to dequeue and send, fall through
+ }
+ this.sendQueue.dequeueLong();
+
+ this.sendChunk(pendingSendX, pendingSendZ);
+ if (this.removed) {
+ // sendChunk may invoke plugin logic
+ return;
+ }
+ }
+
+ this.flushDelayedTicketOps();
+ // we assume propagate ticket levels happens after this call
+ }
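+
+ // NOTE (illustrative commentary, not part of the original patch): the method above
+ // advances chunks through the ticket pipeline LOADED -> GENERATING -> GENERATED -> TICK
+ // using paired add+remove ticket operations, so each chunk holds exactly one
+ // player-ticket level at any time; generation and sending are additionally throttled
+ // by the allocating rate limiters.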
+
+ void add() {
+ TickThread.ensureTickThread(this.player, "Cannot add player asynchronously");
+ if (this.removed) {
+ throw new IllegalStateException("Adding removed player chunk loader");
+ }
+ final ViewDistances playerDistances = this.player.getViewDistances();
+ final ViewDistances worldDistances = this.world.getViewDistances();
+ final int chunkX = this.player.chunkPosition().x;
+ final int chunkZ = this.player.chunkPosition().z;
+
+ final int tickViewDistance = getTickDistance(playerDistances.tickViewDistance, worldDistances.tickViewDistance);
+ // load view cannot be less-than tick view + 1
+ final int loadViewDistance = getLoadViewDistance(tickViewDistance, playerDistances.loadViewDistance, worldDistances.loadViewDistance);
+ // send view cannot be greater-than load view
+ final int clientViewDistance = getClientViewDistance(this.player);
+ final int sendViewDistance = getSendViewDistance(loadViewDistance, clientViewDistance, playerDistances.sendViewDistance, worldDistances.sendViewDistance);
+
+ // send view distances
+ this.player.connection.send(this.updateClientChunkRadius(sendViewDistance));
+ this.player.connection.send(this.updateClientSimulationDistance(tickViewDistance));
+
+ // add to distance maps
+ this.broadcastMap.add(chunkX, chunkZ, sendViewDistance + 1);
+ this.loadTicketCleanup.add(chunkX, chunkZ, loadViewDistance + 1);
+ this.tickMap.add(chunkX, chunkZ, tickViewDistance);
+
+ // update chunk center
+ this.player.connection.send(this.updateClientChunkCenter(chunkX, chunkZ));
+
+ // now we can update
+ this.update();
+ }
+
+ private boolean isLoadedChunkGeneratable(final int chunkX, final int chunkZ) {
+ return this.isLoadedChunkGeneratable(this.world.chunkSource.getChunkAtImmediately(chunkX, chunkZ));
+ }
+
+ private boolean isLoadedChunkGeneratable(final ChunkAccess chunkAccess) {
+ final BelowZeroRetrogen belowZeroRetrogen;
+ // see PortalForcer#findPortalAround
+ return chunkAccess != null && (
+ chunkAccess.getStatus() == ChunkStatus.FULL ||
+ ((belowZeroRetrogen = chunkAccess.getBelowZeroRetrogen()) != null && belowZeroRetrogen.targetStatus().isOrAfter(ChunkStatus.SPAWN))
+ );
+ }
+
+ void update() {
+ TickThread.ensureTickThread(this.player, "Cannot update player asynchronously");
+ if (this.removed) {
+ throw new IllegalStateException("Updating removed player chunk loader");
+ }
+ final ViewDistances playerDistances = this.player.getViewDistances();
+ final ViewDistances worldDistances = this.world.getViewDistances();
+
+ final int tickViewDistance = getTickDistance(playerDistances.tickViewDistance, worldDistances.tickViewDistance);
+ // load view cannot be less-than tick view + 1
+ final int loadViewDistance = getLoadViewDistance(tickViewDistance, playerDistances.loadViewDistance, worldDistances.loadViewDistance);
+ // send view cannot be greater-than load view
+ final int clientViewDistance = getClientViewDistance(this.player);
+ final int sendViewDistance = getSendViewDistance(loadViewDistance, clientViewDistance, playerDistances.sendViewDistance, worldDistances.sendViewDistance);
+
+ final ChunkPos playerPos = this.player.chunkPosition();
+ final boolean canGenerateChunks = this.canPlayerGenerateChunks();
+ final int currentChunkX = playerPos.x;
+ final int currentChunkZ = playerPos.z;
+
+ final int prevChunkX = this.lastChunkX;
+ final int prevChunkZ = this.lastChunkZ;
+
+ if (
+ // has view distance stayed the same?
+ sendViewDistance == this.lastSendDistance
+ && loadViewDistance == this.lastLoadDistance
+ && tickViewDistance == this.lastTickDistance
+
+ // has our chunk stayed the same?
+ && prevChunkX == currentChunkX
+ && prevChunkZ == currentChunkZ
+
+ // can we still generate chunks?
+ && this.canGenerateChunks == canGenerateChunks
+ ) {
+ // nothing we care about changed, so we're not re-calculating
+ return;
+ }
+
+ // update distance maps
+ this.broadcastMap.update(currentChunkX, currentChunkZ, sendViewDistance + 1);
+ this.loadTicketCleanup.update(currentChunkX, currentChunkZ, loadViewDistance + 1);
+ this.tickMap.update(currentChunkX, currentChunkZ, tickViewDistance);
+ if (sendViewDistance > loadViewDistance || tickViewDistance > loadViewDistance) {
+ throw new IllegalStateException();
+ }
+
+ // update VDs for client
+ // this should be after the distance map updates, as they will send unload packets
+ if (this.lastSentChunkRadius != sendViewDistance) {
+ this.player.connection.send(this.updateClientChunkRadius(sendViewDistance));
+ }
+ if (this.lastSentSimulationDistance != tickViewDistance) {
+ this.player.connection.send(this.updateClientSimulationDistance(tickViewDistance));
+ }
+
+ this.sendQueue.clear();
+ this.tickingQueue.clear();
+ this.generatingQueue.clear();
+ this.genQueue.clear();
+ this.loadingQueue.clear();
+ this.loadQueue.clear();
+
+ this.lastChunkX = currentChunkX;
+ this.lastChunkZ = currentChunkZ;
+ this.lastSendDistance = sendViewDistance;
+ this.lastLoadDistance = loadViewDistance;
+ this.lastTickDistance = tickViewDistance;
+ this.canGenerateChunks = canGenerateChunks;
+
+ // +1 since we need to load chunks +1 around the load view distance...
+ final long[] toIterate = SEARCH_RADIUS_ITERATION_LIST[loadViewDistance + 1];
+ // the iteration order is by increasing manhattan distance - so, we do NOT need to
+ // sort anything in the queue!
+ for (final long deltaChunk : toIterate) {
+ final int dx = CoordinateUtils.getChunkX(deltaChunk);
+ final int dz = CoordinateUtils.getChunkZ(deltaChunk);
+ final int chunkX = dx + currentChunkX;
+ final int chunkZ = dz + currentChunkZ;
+ final long chunk = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+ final int squareDistance = Math.max(Math.abs(dx), Math.abs(dz));
+ final int manhattanDistance = Math.abs(dx) + Math.abs(dz);
+
+ // since chunk sending is not by radius alone, we need an extra check here to account for
+ // everything <= sendDistance
+ // Note: Vanilla may want to send chunks outside the send view distance, so we do need
+ // the dist <= view check
+ final boolean sendChunk = (squareDistance <= (sendViewDistance + 1))
+ && wantChunkLoaded(currentChunkX, currentChunkZ, chunkX, chunkZ, sendViewDistance);
+ final boolean sentChunk = sendChunk ? this.sentChunks.contains(chunk) : this.sentChunks.remove(chunk);
+
+ if (!sendChunk && sentChunk) {
+ // have sent the chunk, but don't want it anymore
+ // unload it now
+ this.sendUnloadChunkRaw(chunkX, chunkZ);
+ }
+
+ final byte stage = this.chunkTicketStage.get(chunk);
+ switch (stage) {
+ case CHUNK_TICKET_STAGE_NONE: {
+ // we want the chunk to be at least loaded
+ this.loadQueue.enqueue(chunk);
+ break;
+ }
+ case CHUNK_TICKET_STAGE_LOADING: {
+ this.loadingQueue.enqueue(chunk);
+ break;
+ }
+ case CHUNK_TICKET_STAGE_LOADED: {
+ if (canGenerateChunks || this.isLoadedChunkGeneratable(chunkX, chunkZ)) {
+ this.genQueue.enqueue(chunk);
+ }
+ break;
+ }
+ case CHUNK_TICKET_STAGE_GENERATING: {
+ this.generatingQueue.enqueue(chunk);
+ break;
+ }
+ case CHUNK_TICKET_STAGE_GENERATED: {
+ if (sendChunk && !sentChunk) {
+ this.sendQueue.enqueue(chunk);
+ }
+ if (squareDistance <= tickViewDistance) {
+ this.tickingQueue.enqueue(chunk);
+ }
+ break;
+ }
+ case CHUNK_TICKET_STAGE_TICK: {
+ if (sendChunk && !sentChunk) {
+ this.sendQueue.enqueue(chunk);
+ }
+ break;
+ }
+ default: {
+ throw new IllegalStateException("Unknown stage: " + stage);
+ }
+ }
+ }
+
+ // update the chunk center
+ // this must be done last so that the client does not ignore any of our unload chunk packets above
+ if (this.lastSentChunkCenterX != currentChunkX || this.lastSentChunkCenterZ != currentChunkZ) {
+ this.player.connection.send(this.updateClientChunkCenter(currentChunkX, currentChunkZ));
+ }
+
+ this.flushDelayedTicketOps();
+ }
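+
+ // NOTE (illustrative commentary, not part of the original patch): update() rebuilds the
+ // per-stage queues from scratch whenever the view distances, the player's chunk, or the
+ // generation permission change. Because SEARCH_RADIUS_ITERATION_LIST is ordered by
+ // increasing distance, enqueueing in iteration order keeps every queue closest-first
+ // without an explicit sort.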
+
+ void remove() {
+ TickThread.ensureTickThread(this.player, "Cannot remove player asynchronously");
+ if (this.removed) {
+ throw new IllegalStateException("Removing removed player chunk loader");
+ }
+ this.removed = true;
+ // sends the chunk unload packets
+ this.broadcastMap.remove();
+ // cleans up loading/generating tickets
+ this.loadTicketCleanup.remove();
+ // cleans up ticking tickets
+ this.tickMap.remove();
+
+ // purge queues
+ this.sendQueue.clear();
+ this.tickingQueue.clear();
+ this.generatingQueue.clear();
+ this.genQueue.clear();
+ this.loadingQueue.clear();
+ this.loadQueue.clear();
+
+ // flush ticket changes
+ this.flushDelayedTicketOps();
+
+ // now all tickets should be removed, which is all of our external state
+ }
+ }
+
+ // TODO rebase into util patch
+ private static final class AllocatingRateLimiter {
+
+ // max difference granularity in ns
+ private static final long MAX_GRANULARITY = TimeUnit.SECONDS.toNanos(1L);
+
+ private double allocation;
+ private long lastAllocationUpdate;
+ private double takeCarry;
+ private long lastTakeUpdate;
+
+ // rate in units/s, and time in ns
+ public void tickAllocation(final long time, final double rate, final double maxAllocation) {
+ final long diff = Math.min(MAX_GRANULARITY, time - this.lastAllocationUpdate);
+ this.lastAllocationUpdate = time;
+
+ this.allocation = Math.min(maxAllocation - this.takeCarry, this.allocation + rate * (diff*1.0E-9D));
+ }
+
+ // rate in units/s, and time in ns
+ public long takeAllocation(final long time, final double rate, final long maxTake) {
+ if (maxTake < 1L) {
+ return 0L;
+ }
+
+ double ret = this.takeCarry;
+ final long diff = Math.min(MAX_GRANULARITY, time - this.lastTakeUpdate);
+ this.lastTakeUpdate = time;
+
+ // note: abs(takeCarry) <= 1.0
+ final double take = Math.min(Math.min((double)maxTake - this.takeCarry, this.allocation), rate * (diff*1.0E-9));
+
+ ret += take;
+ this.allocation -= take;
+
+ final long retInteger = (long)Math.floor(ret);
+ this.takeCarry = ret - (double)retInteger;
+
+ return retInteger;
+ }
+ }
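+
+ // A minimal usage sketch (illustrative, not part of the original patch; the numbers are
+ // hypothetical): tickAllocation() refills the allocation from elapsed time, capped at
+ // maxAllocation, and takeAllocation() hands out whole units bounded by the stored
+ // allocation, by rate * elapsed time, and by maxTake, carrying the fractional remainder
+ // in takeCarry so that no partial unit is ever lost:
+ //
+ // final AllocatingRateLimiter limiter = new AllocatingRateLimiter();
+ // limiter.tickAllocation(System.nanoTime(), 50.0, 10.0); // refill at 50 units/s, cap at 10
+ // final long taken = limiter.takeAllocation(System.nanoTime(), 50.0, 8L); // 0 <= taken <= 8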
+
+ static final class CountedSRSWLinkedQueue<E> {
+
+ private final SRSWLinkedQueue<E> queue = new SRSWLinkedQueue<>();
+ private volatile long countAdded;
+ private volatile long countRemoved;
+
+ private static final VarHandle COUNT_ADDED_HANDLE = ConcurrentUtil.getVarHandle(CountedSRSWLinkedQueue.class, "countAdded", long.class);
+ private static final VarHandle COUNT_REMOVED_HANDLE = ConcurrentUtil.getVarHandle(CountedSRSWLinkedQueue.class, "countRemoved", long.class);
+
+ private long getCountAddedPlain() {
+ return (long)COUNT_ADDED_HANDLE.get(this);
+ }
+
+ private long getCountAddedAcquire() {
+ return (long)COUNT_ADDED_HANDLE.getAcquire(this);
+ }
+
+ private void setCountAddedRelease(final long to) {
+ COUNT_ADDED_HANDLE.setRelease(this, to);
+ }
+
+ private long getCountRemovedPlain() {
+ return (long)COUNT_REMOVED_HANDLE.get(this);
+ }
+
+ private long getCountRemovedAcquire() {
+ return (long)COUNT_REMOVED_HANDLE.getAcquire(this);
+ }
+
+ private void setCountRemovedRelease(final long to) {
+ COUNT_REMOVED_HANDLE.setRelease(this, to);
+ }
+
+ public void add(final E element) {
+ this.setCountAddedRelease(this.getCountAddedPlain() + 1L);
+ this.queue.addLast(element);
+ }
+
+ public E poll() {
+ final E ret = this.queue.poll();
+ if (ret != null) {
+ this.setCountRemovedRelease(this.getCountRemovedPlain() + 1L);
+ }
+
+ return ret;
+ }
+
+ public long size() {
+ final long removed = this.getCountRemovedAcquire();
+ final long added = this.getCountAddedAcquire();
+
+ return added - removed;
+ }
+ }
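+
+ // NOTE (illustrative commentary, not part of the original patch): size() reads
+ // countRemoved before countAdded deliberately. Only the writer bumps countAdded and
+ // only the reader bumps countRemoved, so sampling the remove counter first guarantees
+ // added - removed is never negative, at the cost of possibly over-estimating while an
+ // add() races. A hypothetical usage:
+ //
+ // final CountedSRSWLinkedQueue<Runnable> queue = new CountedSRSWLinkedQueue<>();
+ // queue.add(() -> {}); // single writer thread only
+ // final Runnable task = queue.poll(); // single reader thread only
+ // final long estimate = queue.size(); // an estimate, safe to read from any thread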
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
new file mode 100644
index 0000000000000000000000000000000000000000..15ee41452992714108efe53b708b5a4e1da7c1ff
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/entity/EntityLookup.java
@@ -0,0 +1,902 @@
+package io.papermc.paper.chunk.system.entity;
+
+import com.destroystokyo.paper.util.maplist.EntityList;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
+import io.papermc.paper.util.WorldUtil;
+import io.papermc.paper.world.ChunkEntitySlices;
+import it.unimi.dsi.fastutil.ints.Int2ReferenceOpenHashMap;
+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
+import it.unimi.dsi.fastutil.objects.Object2ReferenceOpenHashMap;
+import net.minecraft.core.BlockPos;
+import io.papermc.paper.chunk.system.ChunkSystem;
+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.util.AbortableIterationConsumer;
+import net.minecraft.util.Mth;
+import net.minecraft.world.entity.Entity;
+import net.minecraft.world.entity.EntityType;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.entity.EntityInLevelCallback;
+import net.minecraft.world.level.entity.EntityTypeTest;
+import net.minecraft.world.level.entity.LevelCallback;
+import net.minecraft.server.level.FullChunkStatus;
+import net.minecraft.world.level.entity.LevelEntityGetter;
+import net.minecraft.world.level.entity.Visibility;
+import net.minecraft.world.phys.AABB;
+import net.minecraft.world.phys.Vec3;
+import org.jetbrains.annotations.NotNull;
+import org.jetbrains.annotations.Nullable;
+import org.slf4j.Logger;
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.Iterator;
+import java.util.List;
+import java.util.NoSuchElementException;
+import java.util.UUID;
+import java.util.concurrent.locks.StampedLock;
+import java.util.function.Consumer;
+import java.util.function.Predicate;
+
+public final class EntityLookup implements LevelEntityGetter<Entity> {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ protected static final int REGION_SHIFT = 5;
+ protected static final int REGION_MASK = (1 << REGION_SHIFT) - 1;
+ protected static final int REGION_SIZE = 1 << REGION_SHIFT;
+
+ public final ServerLevel world;
+
+ private final StampedLock stateLock = new StampedLock();
+ protected final Long2ObjectOpenHashMap<ChunkSlicesRegion> regions = new Long2ObjectOpenHashMap<>(128, 0.5f);
+
+ private final int minSection; // inclusive
+ private final int maxSection; // inclusive
+ private final LevelCallback<Entity> worldCallback;
+
+ private final StampedLock entityByLock = new StampedLock();
+ private final Int2ReferenceOpenHashMap<Entity> entityById = new Int2ReferenceOpenHashMap<>();
+ private final Object2ReferenceOpenHashMap<UUID, Entity> entityByUUID = new Object2ReferenceOpenHashMap<>();
+ private final EntityList accessibleEntities = new EntityList();
+
+ public EntityLookup(final ServerLevel world, final LevelCallback<Entity> worldCallback) {
+ this.world = world;
+ this.minSection = WorldUtil.getMinSection(world);
+ this.maxSection = WorldUtil.getMaxSection(world);
+ this.worldCallback = worldCallback;
+ }
+
+ private static Entity maskNonAccessible(final Entity entity) {
+ if (entity == null) {
+ return null;
+ }
+ final Visibility visibility = EntityLookup.getEntityStatus(entity);
+ return visibility.isAccessible() ? entity : null;
+ }
+
+ @Nullable
+ @Override
+ public Entity get(final int id) {
+ final long attempt = this.entityByLock.tryOptimisticRead();
+ if (attempt != 0L) {
+ try {
+ final Entity ret = this.entityById.get(id);
+
+ if (this.entityByLock.validate(attempt)) {
+ return maskNonAccessible(ret);
+ }
+ } catch (final Error error) {
+ throw error;
+ } catch (final Throwable thr) {
+ // ignore
+ }
+ }
+
+ this.entityByLock.readLock();
+ try {
+ return maskNonAccessible(this.entityById.get(id));
+ } finally {
+ this.entityByLock.tryUnlockRead();
+ }
+ }
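+
+ // NOTE (illustrative commentary, not part of the original patch): the id/uuid lookups
+ // use the standard StampedLock optimistic-read pattern: attempt a lock-free read, and
+ // only fall back to a full read lock when validate() reports a racing writer. The
+ // Throwable catch guards against reading the hash map in a torn state mid-resize; any
+ // such failure simply takes the locked path instead.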
+
+ @Nullable
+ @Override
+ public Entity get(final UUID id) {
+ final long attempt = this.entityByLock.tryOptimisticRead();
+ if (attempt != 0L) {
+ try {
+ final Entity ret = this.entityByUUID.get(id);
+
+ if (this.entityByLock.validate(attempt)) {
+ return maskNonAccessible(ret);
+ }
+ } catch (final Error error) {
+ throw error;
+ } catch (final Throwable thr) {
+ // ignore
+ }
+ }
+
+ this.entityByLock.readLock();
+ try {
+ return maskNonAccessible(this.entityByUUID.get(id));
+ } finally {
+ this.entityByLock.tryUnlockRead();
+ }
+ }
+
+ public boolean hasEntity(final UUID uuid) {
+ return this.get(uuid) != null;
+ }
+
+ public String getDebugInfo() {
+ return "count_id:" + this.entityById.size() + ",count_uuid:" + this.entityByUUID.size() + ",region_count:" + this.regions.size();
+ }
+
+ static final class ArrayIterable<T> implements Iterable<T> {
+
+ private final T[] array;
+ private final int off;
+ private final int length;
+
+ public ArrayIterable(final T[] array, final int off, final int length) {
+ this.array = array;
+ this.off = off;
+ this.length = length;
+ if (length > array.length) {
+ throw new IllegalArgumentException("Length must be no greater-than the array length");
+ }
+ }
+
+ @NotNull
+ @Override
+ public Iterator<T> iterator() {
+ return new ArrayIterator<>(this.array, this.off, this.length);
+ }
+
+ static final class ArrayIterator<T> implements Iterator<T> {
+
+ private final T[] array;
+ private int off;
+ private final int length;
+
+ public ArrayIterator(final T[] array, final int off, final int length) {
+ this.array = array;
+ this.off = off;
+ this.length = length;
+ }
+
+ @Override
+ public boolean hasNext() {
+ return this.off < this.length;
+ }
+
+ @Override
+ public T next() {
+ if (this.off >= this.length) {
+ throw new NoSuchElementException();
+ }
+ return this.array[this.off++];
+ }
+
+ @Override
+ public void remove() {
+ throw new UnsupportedOperationException();
+ }
+ }
+ }
+
+ @Override
+ public Iterable<Entity> getAll() {
+ return new ArrayIterable<>(this.accessibleEntities.getRawData(), 0, this.accessibleEntities.size());
+ }
+
+ public Entity[] getAllCopy() {
+ return Arrays.copyOf(this.accessibleEntities.getRawData(), this.accessibleEntities.size(), Entity[].class);
+ }
+
+ @Override
+ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AbortableIterationConsumer<U> action) {
+ final Int2ReferenceOpenHashMap<Entity> entityCopy;
+
+ this.entityByLock.readLock();
+ try {
+ entityCopy = this.entityById.clone();
+ } finally {
+ this.entityByLock.tryUnlockRead();
+ }
+ for (final Entity entity : entityCopy.values()) {
+ final Visibility visibility = EntityLookup.getEntityStatus(entity);
+ if (!visibility.isAccessible()) {
+ continue;
+ }
+ final U casted = filter.tryCast(entity);
+ if (casted != null && action.accept(casted).shouldAbort()) {
+ break;
+ }
+ }
+ }
+
+ @Override
+ public void get(final AABB box, final Consumer<Entity> action) {
+ List<Entity> entities = new ArrayList<>();
+ this.getEntitiesWithoutDragonParts(null, box, entities, null);
+ for (int i = 0, len = entities.size(); i < len; ++i) {
+ action.accept(entities.get(i));
+ }
+ }
+
+ @Override
+ public <U extends Entity> void get(final EntityTypeTest<Entity, U> filter, final AABB box, final AbortableIterationConsumer<U> action) {
+ List<Entity> entities = new ArrayList<>();
+ this.getEntitiesWithoutDragonParts(null, box, entities, null);
+ for (int i = 0, len = entities.size(); i < len; ++i) {
+ final U casted = filter.tryCast(entities.get(i));
+ if (casted != null && action.accept(casted).shouldAbort()) {
+ break;
+ }
+ }
+ }
+
+ public void entityStatusChange(final Entity entity, final ChunkEntitySlices slices, final Visibility oldVisibility, final Visibility newVisibility, final boolean moved,
+ final boolean created, final boolean destroyed) {
+ TickThread.ensureTickThread(entity, "Entity status change must only happen on the main thread");
+
+ if (entity.updatingSectionStatus) {
+ // recursive status update
+ LOGGER.error("Cannot recursively update entity chunk status for entity " + entity, new Throwable());
+ return;
+ }
+
+ final boolean entityStatusUpdateBefore = slices == null ? false : slices.startPreventingStatusUpdates();
+
+ if (entityStatusUpdateBefore) {
+ LOGGER.error("Cannot update chunk status for entity " + entity + " since entity chunk (" + slices.chunkX + "," + slices.chunkZ + ") is receiving update", new Throwable());
+ return;
+ }
+
+ try {
+ final Boolean ticketBlockBefore = this.world.chunkTaskScheduler.chunkHolderManager.blockTicketUpdates();
+ try {
+ entity.updatingSectionStatus = true;
+ try {
+ if (created) {
+ EntityLookup.this.worldCallback.onCreated(entity);
+ }
+
+ if (oldVisibility == newVisibility) {
+ if (moved && newVisibility.isAccessible()) {
+ EntityLookup.this.worldCallback.onSectionChange(entity);
+ }
+ return;
+ }
+
+ if (newVisibility.ordinal() > oldVisibility.ordinal()) {
+ // status upgrade
+ if (!oldVisibility.isAccessible() && newVisibility.isAccessible()) {
+ this.accessibleEntities.add(entity);
+ EntityLookup.this.worldCallback.onTrackingStart(entity);
+ }
+
+ if (!oldVisibility.isTicking() && newVisibility.isTicking()) {
+ EntityLookup.this.worldCallback.onTickingStart(entity);
+ }
+ } else {
+ // status downgrade
+ if (oldVisibility.isTicking() && !newVisibility.isTicking()) {
+ EntityLookup.this.worldCallback.onTickingEnd(entity);
+ }
+
+ if (oldVisibility.isAccessible() && !newVisibility.isAccessible()) {
+ this.accessibleEntities.remove(entity);
+ EntityLookup.this.worldCallback.onTrackingEnd(entity);
+ }
+ }
+
+ if (moved && newVisibility.isAccessible()) {
+ EntityLookup.this.worldCallback.onSectionChange(entity);
+ }
+
+ if (destroyed) {
+ EntityLookup.this.worldCallback.onDestroyed(entity);
+ }
+ } finally {
+ entity.updatingSectionStatus = false;
+ }
+ } finally {
+ this.world.chunkTaskScheduler.chunkHolderManager.unblockTicketUpdates(ticketBlockBefore);
+ }
+ } finally {
+ if (slices != null) {
+ slices.stopPreventingStatusUpdates(false);
+ }
+ }
+ }
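+
+ // NOTE (illustrative commentary, not part of the original patch): the transitions above
+ // fire callbacks in a fixed order: upgrades widen (onTrackingStart before onTickingStart)
+ // and downgrades narrow (onTickingEnd before onTrackingEnd), so consumers always observe
+ // the set of ticking entities as a subset of the tracked ones.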
+
+ public void chunkStatusChange(final int x, final int z, final FullChunkStatus newStatus) {
+ this.getChunk(x, z).updateStatus(newStatus, this);
+ }
+
+ public void addLegacyChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
+ this.addEntityChunk(entities, forChunk, true);
+ }
+
+ public void addEntityChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
+ this.addEntityChunk(entities, forChunk, true);
+ }
+
+ public void addWorldGenChunkEntities(final List<Entity> entities, final ChunkPos forChunk) {
+ this.addEntityChunk(entities, forChunk, false);
+ }
+
+ private void addRecursivelySafe(final Entity root, final boolean fromDisk) {
+ if (!this.addEntity(root, fromDisk)) {
+ // possible we are a passenger, and so should dismount from any valid entity in the world
+ root.stopRiding(true);
+ return;
+ }
+ for (final Entity passenger : root.getPassengers()) {
+ this.addRecursivelySafe(passenger, fromDisk);
+ }
+ }
+
+ private void addEntityChunk(final List<Entity> entities, final ChunkPos forChunk, final boolean fromDisk) {
+ for (int i = 0, len = entities.size(); i < len; ++i) {
+ final Entity entity = entities.get(i);
+ if (entity.isPassenger()) {
+ continue;
+ }
+
+ if (!entity.chunkPosition().equals(forChunk)) {
+ LOGGER.warn("Root entity " + entity + " is outside of serialized chunk " + forChunk);
+ // can't set removed here, as we may not own the chunk position
+ // skip the entity
+ continue;
+ }
+
+ final Vec3 rootPosition = entity.position();
+
+ // always adjust positions before adding passengers in case plugins access the entity, and so that
+ // they are added to the right entity chunk
+ for (final Entity passenger : entity.getIndirectPassengers()) {
+ if (!passenger.chunkPosition().equals(forChunk)) {
+ passenger.setPosRaw(rootPosition.x, rootPosition.y, rootPosition.z, true);
+ }
+ }
+
+ this.addRecursivelySafe(entity, fromDisk);
+ }
+ }
+
+ public boolean addNewEntity(final Entity entity) {
+ return this.addEntity(entity, false);
+ }
+
+ public static Visibility getEntityStatus(final Entity entity) {
+ if (entity.isAlwaysTicking()) {
+ return Visibility.TICKING;
+ }
+ final FullChunkStatus entityStatus = entity.chunkStatus;
+ return Visibility.fromFullChunkStatus(entityStatus == null ? FullChunkStatus.INACCESSIBLE : entityStatus);
+ }
+
+ private boolean addEntity(final Entity entity, final boolean fromDisk) {
+ final BlockPos pos = entity.blockPosition();
+ final int sectionX = pos.getX() >> 4;
+ final int sectionY = Mth.clamp(pos.getY() >> 4, this.minSection, this.maxSection);
+ final int sectionZ = pos.getZ() >> 4;
+ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot add entity off-main thread");
+
+ if (entity.isRemoved()) {
+ LOGGER.warn("Refusing to add removed entity: " + entity);
+ return false;
+ }
+
+ if (entity.updatingSectionStatus) {
+ LOGGER.warn("Entity " + entity + " is currently prevented from being added/removed to world since it is processing section status updates", new Throwable());
+ return false;
+ }
+
+ if (fromDisk) {
+ ChunkSystem.onEntityPreAdd(this.world, entity);
+ if (entity.isRemoved()) {
+ // removed from checkDupeUUID call
+ return false;
+ }
+ }
+
+ this.entityByLock.writeLock();
+ try {
+ if (this.entityById.containsKey(entity.getId())) {
+ LOGGER.warn("Entity id already exists: " + entity.getId() + ", mapped to " + this.entityById.get(entity.getId()) + ", can't add " + entity);
+ return false;
+ }
+ if (this.entityByUUID.containsKey(entity.getUUID())) {
+ LOGGER.warn("Entity uuid already exists: " + entity.getUUID() + ", mapped to " + this.entityByUUID.get(entity.getUUID()) + ", can't add " + entity);
+ return false;
+ }
+ this.entityById.put(entity.getId(), entity);
+ this.entityByUUID.put(entity.getUUID(), entity);
+ } finally {
+ this.entityByLock.tryUnlockWrite();
+ }
+
+ entity.sectionX = sectionX;
+ entity.sectionY = sectionY;
+ entity.sectionZ = sectionZ;
+ final ChunkEntitySlices slices = this.getOrCreateChunk(sectionX, sectionZ);
+ if (!slices.addEntity(entity, sectionY)) {
+ LOGGER.warn("Entity " + entity + " added to world '" + this.world.getWorld().getName() + "', but was already contained in entity chunk (" + sectionX + "," + sectionZ + ")");
+ }
+
+ entity.setLevelCallback(new EntityCallback(entity));
+
+ this.entityStatusChange(entity, slices, Visibility.HIDDEN, getEntityStatus(entity), false, !fromDisk, false);
+
+ return true;
+ }
+
+ public boolean canRemoveEntity(final Entity entity) {
+ if (entity.updatingSectionStatus) {
+ return false;
+ }
+
+ final int sectionX = entity.sectionX;
+ final int sectionZ = entity.sectionZ;
+ final ChunkEntitySlices slices = this.getChunk(sectionX, sectionZ);
+ return slices == null || !slices.isPreventingStatusUpdates();
+ }
+
+ private void removeEntity(final Entity entity) {
+ final int sectionX = entity.sectionX;
+ final int sectionY = entity.sectionY;
+ final int sectionZ = entity.sectionZ;
+ TickThread.ensureTickThread(this.world, sectionX, sectionZ, "Cannot remove entity off-main");
+ if (!entity.isRemoved()) {
+ throw new IllegalStateException("Only call Entity#setRemoved to remove an entity");
+ }
+ final ChunkEntitySlices slices = this.getChunk(sectionX, sectionZ);
+ // all entities should be in a chunk
+ if (slices == null) {
+ LOGGER.warn("Cannot remove entity " + entity + " from null entity slices (" + sectionX + "," + sectionZ + ")");
+ } else {
+ if (slices.isPreventingStatusUpdates()) {
+ throw new IllegalStateException("Attempting to remove entity " + entity + " from entity slices (" + sectionX + "," + sectionZ + ") that is receiving status updates");
+ }
+ if (!slices.removeEntity(entity, sectionY)) {
+ LOGGER.warn("Failed to remove entity " + entity + " from entity slices (" + sectionX + "," + sectionZ + ")");
+ }
+ }
+ entity.sectionX = entity.sectionY = entity.sectionZ = Integer.MIN_VALUE;
+
+ this.entityByLock.writeLock();
+ try {
+ if (!this.entityById.remove(entity.getId(), entity)) {
+ LOGGER.warn("Failed to remove entity " + entity + " by id, current entity mapped: " + this.entityById.get(entity.getId()));
+ }
+ if (!this.entityByUUID.remove(entity.getUUID(), entity)) {
+ LOGGER.warn("Failed to remove entity " + entity + " by uuid, current entity mapped: " + this.entityByUUID.get(entity.getUUID()));
+ }
+ } finally {
+ this.entityByLock.tryUnlockWrite();
+ }
+ }
+
+ private ChunkEntitySlices moveEntity(final Entity entity) {
+ // ensure we own the entity
+ TickThread.ensureTickThread(entity, "Cannot move entity off-main");
+
+ final BlockPos newPos = entity.blockPosition();
+ final int newSectionX = newPos.getX() >> 4;
+ final int newSectionY = Mth.clamp(newPos.getY() >> 4, this.minSection, this.maxSection);
+ final int newSectionZ = newPos.getZ() >> 4;
+
+ if (newSectionX == entity.sectionX && newSectionY == entity.sectionY && newSectionZ == entity.sectionZ) {
+ return null;
+ }
+
+ // ensure the new section is owned by this tick thread
+ TickThread.ensureTickThread(this.world, newSectionX, newSectionZ, "Cannot move entity off-main");
+
+ // ensure the old section is owned by this tick thread
+ TickThread.ensureTickThread(this.world, entity.sectionX, entity.sectionZ, "Cannot move entity off-main");
+
+ final ChunkEntitySlices old = this.getChunk(entity.sectionX, entity.sectionZ);
+ final ChunkEntitySlices slices = this.getOrCreateChunk(newSectionX, newSectionZ);
+
+ if (!old.removeEntity(entity, entity.sectionY)) {
+ LOGGER.warn("Could not remove entity " + entity + " from its old chunk section (" + entity.sectionX + "," + entity.sectionY + "," + entity.sectionZ + ") since it was not contained in the section");
+ }
+
+ if (!slices.addEntity(entity, newSectionY)) {
+ LOGGER.warn("Could not add entity " + entity + " to its new chunk section (" + newSectionX + "," + newSectionY + "," + newSectionZ + ") as it is already contained in the section");
+ }
+
+ entity.sectionX = newSectionX;
+ entity.sectionY = newSectionY;
+ entity.sectionZ = newSectionZ;
+
+ return slices;
+ }
+
+ public void getEntitiesWithoutDragonParts(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
+
+ final int minRegionX = minChunkX >> REGION_SHIFT;
+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
+
+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
+
+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
+
+ if (region == null) {
+ continue;
+ }
+
+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
+
+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
+ for (int currX = minX; currX <= maxX; ++currX) {
+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
+ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
+ continue;
+ }
+
+ chunk.getEntitiesWithoutDragonParts(except, box, into, predicate);
+ }
+ }
+ }
+ }
+ }
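+
+ // NOTE (illustrative commentary, not part of the original patch): a worked example of
+ // the region clamping above, with REGION_SHIFT = 5: a box spanning chunks 30..33 on one
+ // axis touches regions 0 and 1; region 0 scans local chunks (30 & REGION_MASK)..REGION_MASK
+ // = 30..31, and region 1 scans 0..(33 & REGION_MASK) = 0..1. The +-2 block expansion of
+ // the box mirrors vanilla's entity search slop.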
+
+ public void getEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
+
+ final int minRegionX = minChunkX >> REGION_SHIFT;
+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
+
+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
+
+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
+
+ if (region == null) {
+ continue;
+ }
+
+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
+
+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
+ for (int currX = minX; currX <= maxX; ++currX) {
+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
+ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
+ continue;
+ }
+
+ chunk.getEntities(except, box, into, predicate);
+ }
+ }
+ }
+ }
+ }
+
+ public void getHardCollidingEntities(final Entity except, final AABB box, final List<Entity> into, final Predicate<? super Entity> predicate) {
+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
+
+ final int minRegionX = minChunkX >> REGION_SHIFT;
+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
+
+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
+
+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
+
+ if (region == null) {
+ continue;
+ }
+
+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
+
+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
+ for (int currX = minX; currX <= maxX; ++currX) {
+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
+ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
+ continue;
+ }
+
+ chunk.getHardCollidingEntities(except, box, into, predicate);
+ }
+ }
+ }
+ }
+ }
+
+ public <T extends Entity> void getEntities(final EntityType<?> type, final AABB box, final List<? super T> into,
+ final Predicate<? super T> predicate) {
+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
+
+ final int minRegionX = minChunkX >> REGION_SHIFT;
+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
+
+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
+
+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
+
+ if (region == null) {
+ continue;
+ }
+
+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
+
+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
+ for (int currX = minX; currX <= maxX; ++currX) {
+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
+ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
+ continue;
+ }
+
+ chunk.getEntities(type, box, (List)into, (Predicate)predicate);
+ }
+ }
+ }
+ }
+ }
+
+ public <T extends Entity> void getEntities(final Class<? extends T> clazz, final Entity except, final AABB box, final List<? super T> into,
+ final Predicate<? super T> predicate) {
+ final int minChunkX = (Mth.floor(box.minX) - 2) >> 4;
+ final int minChunkZ = (Mth.floor(box.minZ) - 2) >> 4;
+ final int maxChunkX = (Mth.floor(box.maxX) + 2) >> 4;
+ final int maxChunkZ = (Mth.floor(box.maxZ) + 2) >> 4;
+
+ final int minRegionX = minChunkX >> REGION_SHIFT;
+ final int minRegionZ = minChunkZ >> REGION_SHIFT;
+ final int maxRegionX = maxChunkX >> REGION_SHIFT;
+ final int maxRegionZ = maxChunkZ >> REGION_SHIFT;
+
+ for (int currRegionZ = minRegionZ; currRegionZ <= maxRegionZ; ++currRegionZ) {
+ final int minZ = currRegionZ == minRegionZ ? minChunkZ & REGION_MASK : 0;
+ final int maxZ = currRegionZ == maxRegionZ ? maxChunkZ & REGION_MASK : REGION_MASK;
+
+ for (int currRegionX = minRegionX; currRegionX <= maxRegionX; ++currRegionX) {
+ final ChunkSlicesRegion region = this.getRegion(currRegionX, currRegionZ);
+
+ if (region == null) {
+ continue;
+ }
+
+ final int minX = currRegionX == minRegionX ? minChunkX & REGION_MASK : 0;
+ final int maxX = currRegionX == maxRegionX ? maxChunkX & REGION_MASK : REGION_MASK;
+
+ for (int currZ = minZ; currZ <= maxZ; ++currZ) {
+ for (int currX = minX; currX <= maxX; ++currX) {
+ final ChunkEntitySlices chunk = region.get(currX | (currZ << REGION_SHIFT));
+ if (chunk == null || !chunk.status.isOrAfter(FullChunkStatus.FULL)) {
+ continue;
+ }
+
+ chunk.getEntities(clazz, except, box, into, predicate);
+ }
+ }
+ }
+ }
+ }
+
+ public void entitySectionLoad(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot load in entity section off-main");
+ synchronized (this) {
+ final ChunkEntitySlices curr = this.getChunk(chunkX, chunkZ);
+ if (curr != null) {
+ this.removeChunk(chunkX, chunkZ);
+
+ curr.mergeInto(slices);
+
+ this.addChunk(chunkX, chunkZ, slices);
+ } else {
+ this.addChunk(chunkX, chunkZ, slices);
+ }
+ }
+ }
+
+ public void entitySectionUnload(final int chunkX, final int chunkZ) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot unload entity section off-main");
+ this.removeChunk(chunkX, chunkZ);
+ }
+
+ public ChunkEntitySlices getChunk(final int chunkX, final int chunkZ) {
+ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
+ if (region == null) {
+ return null;
+ }
+
+ return region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT));
+ }
+
+ public ChunkEntitySlices getOrCreateChunk(final int chunkX, final int chunkZ) {
+ final ChunkSlicesRegion region = this.getRegion(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
+ ChunkEntitySlices ret;
+ if (region == null || (ret = region.get((chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT))) == null) {
+ // loadInEntityChunk will call addChunk for us
+ return this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(chunkX, chunkZ, true);
+ }
+
+ return ret;
+ }
+
+ public ChunkSlicesRegion getRegion(final int regionX, final int regionZ) {
+ final long key = CoordinateUtils.getChunkKey(regionX, regionZ);
+ final long attempt = this.stateLock.tryOptimisticRead();
+ if (attempt != 0L) {
+ try {
+ final ChunkSlicesRegion ret = this.regions.get(key);
+
+ if (this.stateLock.validate(attempt)) {
+ return ret;
+ }
+ } catch (final Error error) {
+ throw error;
+ } catch (final Throwable thr) {
+ // ignore
+ }
+ }
+
+ this.stateLock.readLock();
+ try {
+ return this.regions.get(key);
+ } finally {
+ this.stateLock.tryUnlockRead();
+ }
+ }
+
+ private synchronized void removeChunk(final int chunkX, final int chunkZ) {
+ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
+ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
+
+ final ChunkSlicesRegion region = this.regions.get(key);
+ final int remaining = region.remove(relIndex);
+
+ if (remaining == 0) {
+ this.stateLock.writeLock();
+ try {
+ this.regions.remove(key);
+ } finally {
+ this.stateLock.tryUnlockWrite();
+ }
+ }
+ }
+
+ public synchronized void addChunk(final int chunkX, final int chunkZ, final ChunkEntitySlices slices) {
+ final long key = CoordinateUtils.getChunkKey(chunkX >> REGION_SHIFT, chunkZ >> REGION_SHIFT);
+ final int relIndex = (chunkX & REGION_MASK) | ((chunkZ & REGION_MASK) << REGION_SHIFT);
+
+ ChunkSlicesRegion region = this.regions.get(key);
+ if (region != null) {
+ region.add(relIndex, slices);
+ } else {
+ region = new ChunkSlicesRegion();
+ region.add(relIndex, slices);
+ this.stateLock.writeLock();
+ try {
+ this.regions.put(key, region);
+ } finally {
+ this.stateLock.tryUnlockWrite();
+ }
+ }
+ }
+
+ public static final class ChunkSlicesRegion {
+
+ protected final ChunkEntitySlices[] slices = new ChunkEntitySlices[REGION_SIZE * REGION_SIZE];
+ protected int sliceCount;
+
+ public ChunkEntitySlices get(final int index) {
+ return this.slices[index];
+ }
+
+ public int remove(final int index) {
+ final ChunkEntitySlices slices = this.slices[index];
+ if (slices == null) {
+ throw new IllegalStateException();
+ }
+
+ this.slices[index] = null;
+
+ return --this.sliceCount;
+ }
+
+ public void add(final int index, final ChunkEntitySlices slices) {
+ final ChunkEntitySlices curr = this.slices[index];
+ if (curr != null) {
+ throw new IllegalStateException();
+ }
+
+ this.slices[index] = slices;
+
+ ++this.sliceCount;
+ }
+ }
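+
+ // NOTE (illustrative commentary, not part of the original patch): slices are indexed by
+ // packing in-region chunk coordinates as (x | (z << REGION_SHIFT)), i.e. a 32x32
+ // row-major grid; for example, the chunk at local (3, 2) maps to index 3 | (2 << 5) = 67.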
+
+ private final class EntityCallback implements EntityInLevelCallback {
+
+ public final Entity entity;
+
+ public EntityCallback(final Entity entity) {
+ this.entity = entity;
+ }
+
+ @Override
+ public void onMove() {
+ final Entity entity = this.entity;
+ final Visibility oldVisibility = getEntityStatus(entity);
+ final ChunkEntitySlices newSlices = EntityLookup.this.moveEntity(this.entity);
+ if (newSlices == null) {
+ // no new section, so didn't change sections
+ return;
+ }
+ final Visibility newVisibility = getEntityStatus(entity);
+
+ EntityLookup.this.entityStatusChange(entity, newSlices, oldVisibility, newVisibility, true, false, false);
+ }
+
+ @Override
+ public void onRemove(final Entity.RemovalReason reason) {
+ final Entity entity = this.entity;
+ TickThread.ensureTickThread(entity, "Cannot remove entity off-main"); // Paper - rewrite chunk system
+ final Visibility tickingState = EntityLookup.getEntityStatus(entity);
+
+ EntityLookup.this.removeEntity(entity);
+
+ EntityLookup.this.entityStatusChange(entity, null, tickingState, Visibility.HIDDEN, false, false, reason.shouldDestroy());
+
+ this.entity.setLevelCallback(NoOpCallback.INSTANCE);
+ }
+ }
+
+ private static final class NoOpCallback implements EntityInLevelCallback {
+
+ public static final NoOpCallback INSTANCE = new NoOpCallback();
+
+ @Override
+ public void onMove() {}
+
+ @Override
+ public void onRemove(final Entity.RemovalReason reason) {}
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
new file mode 100644
index 0000000000000000000000000000000000000000..8a11e10b01fa012b2f98b1c193c53251e848f909
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/io/RegionFileIOThread.java
@@ -0,0 +1,1338 @@
+package io.papermc.paper.chunk.system.io;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import ca.spottedleaf.concurrentutil.executor.Cancellable;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedQueueExecutorThread;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedThreadedTaskQueue;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
+import it.unimi.dsi.fastutil.HashCommon;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.storage.RegionFile;
+import net.minecraft.world.level.chunk.storage.RegionFileStorage;
+import org.slf4j.Logger;
+import java.io.IOException;
+import java.lang.invoke.VarHandle;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.BiConsumer;
+import java.util.function.BiFunction;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+/**
+ * Prioritised RegionFile I/O executor, responsible for all RegionFile access.
+ * <p>
+ * All functions provided are MT-Safe, however certain ordering constraints are recommended:
+ * <ul>
+ * <li>Chunk saves may not occur for unloaded chunks.</li>
+ * <li>Tasks must be scheduled on the chunk scheduler thread.</li>
+ * </ul>
+ * By following these constraints, no chunk data loss should occur with the exception of underlying I/O problems.
+ * </p>
+ */
+public final class RegionFileIOThread extends PrioritisedQueueExecutorThread {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ /**
+ * The kinds of region files controlled by the region file thread. Add more when needed, and ensure
+ * getControllerFor is updated.
+ */
+ public static enum RegionFileType {
+ CHUNK_DATA,
+ POI_DATA,
+ ENTITY_DATA;
+ }
+
+ protected static final RegionFileType[] CACHED_REGIONFILE_TYPES = RegionFileType.values();
+
+ private ChunkDataController getControllerFor(final ServerLevel world, final RegionFileType type) {
+ switch (type) {
+ case CHUNK_DATA:
+ return world.chunkDataControllerNew;
+ case POI_DATA:
+ return world.poiDataControllerNew;
+ case ENTITY_DATA:
+ return world.entityDataControllerNew;
+ default:
+ throw new IllegalStateException("Unknown controller type " + type);
+ }
+ }
+
+ /**
+ * Collects regionfile data for a certain chunk.
+ */
+ public static final class RegionFileData {
+
+ private final boolean[] hasResult = new boolean[CACHED_REGIONFILE_TYPES.length];
+ private final CompoundTag[] data = new CompoundTag[CACHED_REGIONFILE_TYPES.length];
+ private final Throwable[] throwables = new Throwable[CACHED_REGIONFILE_TYPES.length];
+
+ /**
+ * Sets the result associated with the specified regionfile type. Note that
+ * results can only be set once per regionfile type.
+ *
+ * @param type The regionfile type.
+ * @param data The result to set.
+ */
+ public void setData(final RegionFileType type, final CompoundTag data) {
+ final int index = type.ordinal();
+
+ if (this.hasResult[index]) {
+ throw new IllegalArgumentException("Result already exists for type " + type);
+ }
+ this.hasResult[index] = true;
+ this.data[index] = data;
+ }
+
+ /**
+ * Sets the result associated with the specified regionfile type. Note that
+ * results can only be set once per regionfile type.
+ *
+ * @param type The regionfile type.
+ * @param throwable The result to set.
+ */
+ public void setThrowable(final RegionFileType type, final Throwable throwable) {
+ final int index = type.ordinal();
+
+ if (this.hasResult[index]) {
+ throw new IllegalArgumentException("Result already exists for type " + type);
+ }
+ this.hasResult[index] = true;
+ this.throwables[index] = throwable;
+ }
+
+ /**
+ * Returns whether there is a result for the specified regionfile type.
+ *
+ * @param type Specified regionfile type.
+ *
+ * @return Whether a result exists for {@code type}.
+ */
+ public boolean hasResult(final RegionFileType type) {
+ return this.hasResult[type.ordinal()];
+ }
+
+ /**
+ * Returns the data result for the regionfile type.
+ *
+ * @param type Specified regionfile type.
+ *
+ * @throws IllegalArgumentException If the result has not been set for {@code type}.
+ * @return The data result for the specified type. If the result is a {@code Throwable},
+ * then returns {@code null}.
+ */
+ public CompoundTag getData(final RegionFileType type) {
+ final int index = type.ordinal();
+
+ if (!this.hasResult[index]) {
+ throw new IllegalArgumentException("Result does not exist for type " + type);
+ }
+
+ return this.data[index];
+ }
+
+ /**
+ * Returns the throwable result for the regionfile type.
+ *
+ * @param type Specified regionfile type.
+ *
+ * @throws IllegalArgumentException If the result has not been set for {@code type}.
+ * @return The throwable result for the specified type. If the result is a {@code CompoundTag},
+ * then returns {@code null}.
+ */
+ public Throwable getThrowable(final RegionFileType type) {
+ final int index = type.ordinal();
+
+ if (!this.hasResult[index]) {
+ throw new IllegalArgumentException("Result does not exist for type " + type);
+ }
+
+ return this.throwables[index];
+ }
+ }
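+
+ // A minimal consumption sketch for RegionFileData (illustrative, not part of the
+ // original patch; `data` is a hypothetical populated instance):
+ //
+ // for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
+ //     if (!data.hasResult(type)) {
+ //         continue;
+ //     }
+ //     final Throwable thr = data.getThrowable(type);
+ //     if (thr != null) {
+ //         // the read failed; getData(type) returns null in this case
+ //     } else {
+ //         final CompoundTag tag = data.getData(type); // may be null if nothing is stored
+ //     }
+ // }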
+
+ private static final Object INIT_LOCK = new Object();
+
+ static RegionFileIOThread[] threads;
+
+ /* needs to be consistent given a set of parameters */
+ static RegionFileIOThread selectThread(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
+ if (threads == null) {
+ throw new IllegalStateException("Threads not initialised");
+ }
+
+ final int regionX = chunkX >> 5;
+ final int regionZ = chunkZ >> 5;
+ final int typeOffset = type.ordinal();
+
+ return threads[(System.identityHashCode(world) + regionX + regionZ + typeOffset) % threads.length];
+ }
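+
+ // NOTE (illustrative commentary, not part of the original patch): selection must be a
+ // pure function of (world, region, type) so that all I/O for a given region file is
+ // ordered on one thread. For example, chunks (0, 0) and (5, 7) of the same world both
+ // map to region (0, 0) for the same type, and therefore always land on the same thread.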
+
+ /**
+ * Shuts down the I/O executor(s). Waits for all tasks to complete if specified.
+ * Tasks queued during this call might not be accepted, and tasks queued after will not be accepted.
+ *
+ * @param wait Whether to wait until all tasks have completed.
+ */
+ public static void close(final boolean wait) {
+ for (int i = 0, len = threads.length; i < len; ++i) {
+ threads[i].close(false, true);
+ }
+ if (wait) {
+ RegionFileIOThread.flush();
+ }
+ }
+
+ public static long[] getExecutedTasks() {
+ final long[] ret = new long[threads.length];
+ for (int i = 0, len = threads.length; i < len; ++i) {
+ ret[i] = threads[i].getTotalTasksExecuted();
+ }
+
+ return ret;
+ }
+
+ public static long[] getTasksScheduled() {
+ final long[] ret = new long[threads.length];
+ for (int i = 0, len = threads.length; i < len; ++i) {
+ ret[i] = threads[i].getTotalTasksScheduled();
+ }
+ return ret;
+ }
+
+ public static void flush() {
+ for (int i = 0, len = threads.length; i < len; ++i) {
+ threads[i].waitUntilAllExecuted();
+ }
+ }
+
+ public static void partialFlush(final int totalTasksRemaining) {
+ long failures = 1L; // start out at 0.25ms
+
+ for (;;) {
+ final long[] executed = getExecutedTasks();
+ final long[] scheduled = getTasksScheduled();
+
+ long sum = 0;
+ for (int i = 0; i < executed.length; ++i) {
+ sum += scheduled[i] - executed[i];
+ }
+
+ if (sum <= totalTasksRemaining) {
+ break;
+ }
+
+ failures = ConcurrentUtil.linearLongBackoff(failures, 250_000L, 5_000_000L); // 500us, 5ms
+ }
+ }
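+
+ // NOTE (illustrative commentary, not part of the original patch): partialFlush() is a
+ // spin-with-backoff wait, not a hard flush. It returns once the total number of
+ // scheduled-but-not-executed tasks across all threads drops to the requested watermark,
+ // polling the counters with a linearly increasing sleep between checks.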
+
+ /**
+ * Inits the executor with the specified number of threads.
+ *
+ * @param threads Specified number of threads.
+ */
+ public static void init(final int threads) {
+ synchronized (INIT_LOCK) {
+ if (RegionFileIOThread.threads != null) {
+ throw new IllegalStateException("Already initialised threads");
+ }
+
+ RegionFileIOThread.threads = new RegionFileIOThread[threads];
+
+ for (int i = 0; i < threads; ++i) {
+ RegionFileIOThread.threads[i] = new RegionFileIOThread(i);
+ RegionFileIOThread.threads[i].start();
+ }
+ }
+ }
+
+ private RegionFileIOThread(final int threadNumber) {
+ super(new PrioritisedThreadedTaskQueue(), (int)(1.0e6)); // 1.0ms spinwait time
+ this.setName("RegionFile I/O Thread #" + threadNumber);
+ this.setPriority(Thread.NORM_PRIORITY - 2); // we keep priority close to normal because threads can wait on us
+ this.setUncaughtExceptionHandler((final Thread thread, final Throwable thr) -> {
+ LOGGER.error("Uncaught exception thrown from I/O thread, report this! Thread: " + thread.getName(), thr);
+ });
+ }
+
+ /**
+ * Returns whether the current thread is a regionfile I/O executor.
+ * @return Whether the current thread is a regionfile I/O executor.
+ */
+ public static boolean isRegionFileThread() {
+ return Thread.currentThread() instanceof RegionFileIOThread;
+ }
+
+ /**
+ * Returns the priority associated with blocking I/O based on the current thread. The goal is to prevent
+ * dumb plugins from taking priority away from threads we consider crucial.
+ * @return The priority to use with blocking I/O on the current thread.
+ */
+ public static PrioritisedExecutor.Priority getIOBlockingPriorityForCurrentThread() {
+ if (TickThread.isTickThread()) {
+ return PrioritisedExecutor.Priority.BLOCKING;
+ }
+ return PrioritisedExecutor.Priority.HIGHEST;
+ }
+
+ /**
+ * Returns the current {@code CompoundTag} pending for write for the specified chunk & regionfile type.
+ * Note that this does not copy the result, so do not modify the result returned.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param type Specified regionfile type.
+ *
+ * @return The compound tag associated with the specified chunk. {@code null} if no write is pending, or if the pending write is {@code null}.
+ */
+ public static CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ return thread.getPendingWriteInternal(world, chunkX, chunkZ, type);
+ }
+
+ CompoundTag getPendingWriteInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+ final ChunkDataTask task = taskController.tasks.get(Long.valueOf(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
+
+ if (task == null) {
+ return null;
+ }
+
+ final CompoundTag ret = task.inProgressWrite;
+
+ return ret == ChunkDataTask.NOTHING_TO_WRITE ? null : ret;
+ }
+
+ /**
+ * Returns the priority for the specified regionfile type for the specified chunk.
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param type Specified regionfile type.
+ * @return The priority for the chunk
+ */
+ public static PrioritisedExecutor.Priority getPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ return thread.getPriorityInternal(world, chunkX, chunkZ, type);
+ }
+
+ PrioritisedExecutor.Priority getPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+ final ChunkDataTask task = taskController.tasks.get(Long.valueOf(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
+
+ if (task == null) {
+ return PrioritisedExecutor.Priority.COMPLETING;
+ }
+
+ return task.prioritisedTask.getPriority();
+ }
+
+ /**
+ * Sets the priority for all regionfile types for the specified chunk. Note that great care should
+ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
+ * priorities.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param priority New priority.
+ *
+ * @see #raisePriority(ServerLevel, int, int, Priority)
+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ,
+ final PrioritisedExecutor.Priority priority) {
+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
+ RegionFileIOThread.setPriority(world, chunkX, chunkZ, type, priority);
+ }
+ }
+
+ /**
+ * Sets the priority for the specified regionfile type for the specified chunk. Note that great care should
+ * be taken using this method, as there can be multiple tasks tied to the same chunk that want different
+ * priorities.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param type Specified regionfile type.
+ * @param priority New priority.
+ *
+ * @see #raisePriority(ServerLevel, int, int, Priority)
+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ thread.setPriorityInternal(world, chunkX, chunkZ, type, priority);
+ }
+
+ void setPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
+
+ if (task != null) {
+ task.prioritisedTask.setPriority(priority);
+ }
+ }
+
+ /**
+ * Raises the priority for all regionfile types for the specified chunk.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param priority New priority.
+ *
+ * @see #setPriority(ServerLevel, int, int, Priority)
+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ,
+ final PrioritisedExecutor.Priority priority) {
+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
+ RegionFileIOThread.raisePriority(world, chunkX, chunkZ, type, priority);
+ }
+ }
+
+ /**
+ * Raises the priority for the specified regionfile type for the specified chunk.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param type Specified regionfile type.
+ * @param priority New priority.
+ *
+ * @see #setPriority(ServerLevel, int, int, Priority)
+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, Priority)
+ * @see #lowerPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void raisePriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ thread.raisePriorityInternal(world, chunkX, chunkZ, type, priority);
+ }
+
+ void raisePriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
+
+ if (task != null) {
+ task.prioritisedTask.raisePriority(priority);
+ }
+ }
+
+ /**
+ * Lowers the priority for all regionfile types for the specified chunk.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param priority New priority.
+ *
+ * @see #raisePriority(ServerLevel, int, int, Priority)
+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #setPriority(ServerLevel, int, int, Priority)
+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ,
+ final PrioritisedExecutor.Priority priority) {
+ for (final RegionFileType type : CACHED_REGIONFILE_TYPES) {
+ RegionFileIOThread.lowerPriority(world, chunkX, chunkZ, type, priority);
+ }
+ }
+
+ /**
+ * Lowers the priority for the specified regionfile type for the specified chunk.
+ *
+ * @param world Specified world.
+ * @param chunkX Specified chunk x.
+ * @param chunkZ Specified chunk z.
+ * @param type Specified regionfile type.
+ * @param priority New priority.
+ *
+ * @see #raisePriority(ServerLevel, int, int, Priority)
+ * @see #raisePriority(ServerLevel, int, int, RegionFileType, Priority)
+ * @see #setPriority(ServerLevel, int, int, Priority)
+ * @see #setPriority(ServerLevel, int, int, RegionFileType, Priority)
+ */
+ public static void lowerPriority(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ thread.lowerPriorityInternal(world, chunkX, chunkZ, type, priority);
+ }
+
+ void lowerPriorityInternal(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+ final ChunkDataTask task = taskController.tasks.get(new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ)));
+
+ if (task != null) {
+ task.prioritisedTask.lowerPriority(priority);
+ }
+ }
+
+ /**
+ * Schedules the chunk data to be written asynchronously.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
+ * saves must be scheduled before a chunk is unloaded.
+ * </li>
+ * <li>
+ * Writes may be called concurrently, although only the "later" write will go through.
+ * </li>
+ *
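+ * A minimal usage sketch ({@code serializedChunk} is illustrative, and the {@code CHUNK_DATA} constant is assumed to exist on {@code RegionFileType}):
+ * <pre>{@code
+ * // persist the latest serialized chunk data; a later scheduleSave for the
+ * // same chunk and type simply replaces the still-pending data
+ * RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, serializedChunk, RegionFileType.CHUNK_DATA);
+ * }</pre>
+ *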
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param data Chunk's data
+ * @param type The regionfile type to write to.
+ *
+ * @throws IllegalStateException If the file io thread has shutdown.
+ */
+ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
+ final RegionFileType type) {
+ RegionFileIOThread.scheduleSave(world, chunkX, chunkZ, data, type, PrioritisedExecutor.Priority.NORMAL);
+ }
+
+ /**
+ * Schedules the chunk data to be written asynchronously.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
+ * saves must be scheduled before a chunk is unloaded.
+ * </li>
+ * <li>
+ * Writes may be called concurrently, although only the "later" write will go through.
+ * </li>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param data Chunk's data
+ * @param type The regionfile type to write to.
+ * @param priority The minimum priority to schedule at.
+ *
+ * @throws IllegalStateException If the file io thread has shutdown.
+ */
+ public static void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
+ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ thread.scheduleSaveInternal(world, chunkX, chunkZ, data, type, priority);
+ }
+
+ void scheduleSaveInternal(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data,
+ final RegionFileType type, final PrioritisedExecutor.Priority priority) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+
+ final boolean[] created = new boolean[1];
+ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ final ChunkDataTask task = taskController.tasks.compute(key, (final ChunkCoordinate keyInMap, final ChunkDataTask taskRunning) -> {
+ if (taskRunning == null || taskRunning.failedWrite) {
+ // no task is scheduled or the previous write failed - meaning we need to overwrite it
+
+ // create task
+ final ChunkDataTask newTask = new ChunkDataTask(world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority);
+ newTask.inProgressWrite = data;
+ created[0] = true;
+
+ return newTask;
+ }
+
+ taskRunning.inProgressWrite = data;
+
+ return taskRunning;
+ });
+
+ if (created[0]) {
+ task.prioritisedTask.queue();
+ } else {
+ task.prioritisedTask.raisePriority(priority);
+ }
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
+ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
+ * for single load.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
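+ * A hedged usage sketch (the {@code getData} accessor on {@code RegionFileData} and the {@code CHUNK_DATA} constant are assumed here):
+ * <pre>{@code
+ * RegionFileIOThread.loadAllChunkData(world, chunkX, chunkZ, (final RegionFileData result) -> {
+ *     // may run synchronously or on the I/O thread - do not interact with the I/O thread here
+ *     final CompoundTag chunkData = result.getData(RegionFileType.CHUNK_DATA);
+ * }, false);
+ * }</pre>
+ *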
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
+ */
+ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock) {
+ return RegionFileIOThread.loadAllChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load all regionfile types, and then call
+ * {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
+ * for single load.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ * @param priority The minimum priority to load the data at.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
+ */
+ public static Cancellable loadAllChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
+ final PrioritisedExecutor.Priority priority) {
+ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, priority, CACHED_REGIONFILE_TYPES);
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
+ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)}
+ * for single load.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
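+ * For example (illustrative; the {@code CHUNK_DATA} and {@code POI_DATA} constants are assumed to exist on {@code RegionFileType}):
+ * <pre>{@code
+ * RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, (final RegionFileData result) -> {
+ *     // only the requested types are populated in the result
+ * }, false, RegionFileType.CHUNK_DATA, RegionFileType.POI_DATA);
+ * }</pre>
+ *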
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ * @param types The regionfile type(s) to load.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
+ */
+ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
+ final RegionFileType... types) {
+ return RegionFileIOThread.loadChunkData(world, chunkX, chunkZ, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL, types);
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load data for the specified regionfile type(s), and
+ * then call {@code onComplete}. This is a bulk load operation, see {@link #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)}
+ * for single load.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ * @param types The regionfile type(s) to load.
+ * @param priority The minimum priority to load the data at.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean)
+ * @see #loadDataAsync(ServerLevel, int, int, RegionFileType, BiConsumer, boolean, Priority)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
+ */
+ public static Cancellable loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ,
+ final Consumer<RegionFileData> onComplete, final boolean intendingToBlock,
+ final PrioritisedExecutor.Priority priority, final RegionFileType... types) {
+ if (types == null) {
+ throw new NullPointerException("Types cannot be null");
+ }
+ if (types.length == 0) {
+ throw new IllegalArgumentException("Types cannot be empty");
+ }
+
+ final RegionFileData ret = new RegionFileData();
+
+ final Cancellable[] reads = new CancellableRead[types.length];
+ final AtomicInteger completions = new AtomicInteger();
+ final int expectedCompletions = types.length;
+
+ for (int i = 0; i < expectedCompletions; ++i) {
+ final RegionFileType type = types[i];
+ reads[i] = RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type,
+ (final CompoundTag data, final Throwable throwable) -> {
+ if (throwable != null) {
+ ret.setThrowable(type, throwable);
+ } else {
+ ret.setData(type, data);
+ }
+
+ if (completions.incrementAndGet() == expectedCompletions) {
+ onComplete.accept(ret);
+ }
+ }, intendingToBlock, priority);
+ }
+
+ return new CancellableReads(reads);
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
+ * {@code onComplete}.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
+ */
+ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
+ final boolean intendingToBlock) {
+ return RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, onComplete, intendingToBlock, PrioritisedExecutor.Priority.NORMAL);
+ }
+
+ /**
+ * Schedules a load to be executed asynchronously. This task will load the specified regionfile type, and then call
+ * {@code onComplete}.
+ * <p>
+ * Impl notes:
+ * </p>
+ * <li>
+ * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+ * be completed asynchronously on this file io thread. Interacting with the file IO thread in the completion of
+ * data is undefined behaviour, and can cause deadlock.
+ * </li>
+ *
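+ * A usage sketch showing cancellation (illustrative; the {@code CHUNK_DATA} constant is assumed):
+ * <pre>{@code
+ * final Cancellable load = RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ,
+ *     RegionFileType.CHUNK_DATA, (final CompoundTag data, final Throwable thr) -> {
+ *         // data == null with thr == null means the chunk is not on disk
+ *     }, false, PrioritisedExecutor.Priority.NORMAL);
+ * // cancel() only detaches this callback; other loads for the same chunk are unaffected
+ * load.cancel();
+ * }</pre>
+ *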
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param onComplete Consumer to execute once this task has completed
+ * @param intendingToBlock Whether the caller is intending to block on completion. This only affects the cost
+ * of this call.
+ * @param priority Minimum priority to load the data at.
+ *
+ * @return The {@link Cancellable} for this chunk load. Cancelling it will not affect other loads for the same chunk data.
+ *
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, RegionFileType...)
+ * @see #loadChunkData(ServerLevel, int, int, Consumer, boolean, Priority, RegionFileType...)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean)
+ * @see #loadAllChunkData(ServerLevel, int, int, Consumer, boolean, Priority)
+ */
+ public static Cancellable loadDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
+ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
+ final RegionFileIOThread thread = RegionFileIOThread.selectThread(world, chunkX, chunkZ, type);
+ return thread.loadDataAsyncInternal(world, chunkX, chunkZ, type, onComplete, intendingToBlock, priority);
+ }
+
+ private static Boolean doesRegionFileExist(final int chunkX, final int chunkZ, final boolean intendingToBlock,
+ final ChunkDataController taskController) {
+ final ChunkPos chunkPos = new ChunkPos(chunkX, chunkZ);
+ if (intendingToBlock) {
+ return taskController.computeForRegionFile(chunkX, chunkZ, true, (final RegionFile file) -> {
+ if (file == null) { // null if no regionfile exists
+ return Boolean.FALSE;
+ }
+
+ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
+ });
+ } else {
+ // first check if the region file for sure does not exist
+ if (taskController.doesRegionFileNotExist(chunkX, chunkZ)) {
+ return Boolean.FALSE;
+ } // else: it either exists or is not known, fall back to checking the loaded region file
+
+ return taskController.computeForRegionFileIfLoaded(chunkX, chunkZ, (final RegionFile file) -> {
+ if (file == null) { // null if not loaded
+ // not sure at this point, let the I/O thread figure it out
+ return Boolean.TRUE;
+ }
+
+ return file.hasChunk(chunkPos) ? Boolean.TRUE : Boolean.FALSE;
+ });
+ }
+ }
+
+ Cancellable loadDataAsyncInternal(final ServerLevel world, final int chunkX, final int chunkZ,
+ final RegionFileType type, final BiConsumer<CompoundTag, Throwable> onComplete,
+ final boolean intendingToBlock, final PrioritisedExecutor.Priority priority) {
+ final ChunkDataController taskController = this.getControllerFor(world, type);
+
+ final ImmediateCallbackCompletion callbackInfo = new ImmediateCallbackCompletion();
+
+ final ChunkCoordinate key = new ChunkCoordinate(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ final BiFunction<ChunkCoordinate, ChunkDataTask, ChunkDataTask> compute = (final ChunkCoordinate keyInMap, final ChunkDataTask running) -> {
+ if (running == null) {
+ // not scheduled
+
+ if (callbackInfo.regionFileCalculation == null) {
+ // caller will compute this outside of compute(), to avoid holding the bin lock
+ callbackInfo.needsRegionFileTest = true;
+ return null;
+ }
+
+ if (callbackInfo.regionFileCalculation == Boolean.FALSE) {
+ // not on disk
+ callbackInfo.data = null;
+ callbackInfo.throwable = null;
+ callbackInfo.completeNow = true;
+ return null;
+ }
+
+ // set up task
+ final ChunkDataTask newTask = new ChunkDataTask(
+ world, chunkX, chunkZ, taskController, RegionFileIOThread.this, priority
+ );
+ newTask.inProgressRead = new RegionFileIOThread.InProgressRead();
+ newTask.inProgressRead.waiters.add(onComplete);
+
+ callbackInfo.tasksNeedsScheduling = true;
+ return newTask;
+ }
+
+ final CompoundTag pendingWrite = running.inProgressWrite;
+
+ if (pendingWrite == ChunkDataTask.NOTHING_TO_WRITE) {
+ // need to add to waiters here, because the regionfile thread will use compute() to lock and check for cancellations
+ if (!running.inProgressRead.addToWaiters(onComplete)) {
+ callbackInfo.data = running.inProgressRead.value;
+ callbackInfo.throwable = running.inProgressRead.throwable;
+ callbackInfo.completeNow = true;
+ }
+ return running;
+ }
+ // using the result sync here - don't bump priority
+
+ // at this stage we have to use the in progress write's data to avoid an order issue
+ callbackInfo.data = pendingWrite;
+ callbackInfo.throwable = null;
+ callbackInfo.completeNow = true;
+ return running;
+ };
+
+ ChunkDataTask curr = taskController.tasks.get(key);
+ if (curr == null) {
+ callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
+ }
+ ChunkDataTask ret = taskController.tasks.compute(key, compute);
+ if (callbackInfo.needsRegionFileTest) {
+ // curr was non-null before the compute() call, but the task completed in the meantime - so the regionfile test was skipped inside compute()
+ callbackInfo.regionFileCalculation = doesRegionFileExist(chunkX, chunkZ, intendingToBlock, taskController);
+ // now that regionFileCalculation is set, compute() can make progress
+ ret = taskController.tasks.compute(key, compute);
+ }
+
+ // needs to be scheduled
+ if (callbackInfo.tasksNeedsScheduling) {
+ ret.prioritisedTask.queue();
+ } else if (callbackInfo.completeNow) {
+ try {
+ onComplete.accept(callbackInfo.data, callbackInfo.throwable);
+ } catch (final ThreadDeath thr) {
+ throw thr;
+ } catch (final Throwable thr) {
+ LOGGER.error("Callback " + ConcurrentUtil.genericToString(onComplete) + " synchronously failed to handle chunk data for task " + ret.toString(), thr);
+ }
+ } else {
+ // we're waiting on a task we didn't schedule, so raise its priority to what we want
+ ret.prioritisedTask.raisePriority(priority);
+ }
+
+ return new CancellableRead(onComplete, ret);
+ }
+
+ /**
+ * Schedules a load task to be executed asynchronously, and blocks on that task.
+ *
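+ * For example (illustrative; the {@code CHUNK_DATA} constant is assumed):
+ * <pre>{@code
+ * final CompoundTag data = RegionFileIOThread.loadData(world, chunkX, chunkZ,
+ *     RegionFileType.CHUNK_DATA, PrioritisedExecutor.Priority.NORMAL);
+ * if (data == null) {
+ *     // chunk is not present on disk
+ * }
+ * }</pre>
+ *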
+ * @param world Chunk's world
+ * @param chunkX Chunk's x coordinate
+ * @param chunkZ Chunk's z coordinate
+ * @param type Regionfile type
+ * @param priority Minimum priority to load the data at.
+ *
+ * @return The chunk data for the chunk. Note that a {@code null} result means the chunk or regionfile does not exist on disk.
+ *
+ * @throws IOException If the load fails for any reason
+ */
+ public static CompoundTag loadData(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileType type,
+ final PrioritisedExecutor.Priority priority) throws IOException {
+ final CompletableFuture<CompoundTag> ret = new CompletableFuture<>();
+
+ RegionFileIOThread.loadDataAsync(world, chunkX, chunkZ, type, (final CompoundTag compound, final Throwable thr) -> {
+ if (thr != null) {
+ ret.completeExceptionally(thr);
+ } else {
+ ret.complete(compound);
+ }
+ }, true, priority);
+
+ try {
+ return ret.join();
+ } catch (final CompletionException ex) {
+ throw new IOException(ex);
+ }
+ }
+
+ private static final class ImmediateCallbackCompletion {
+
+ public CompoundTag data;
+ public Throwable throwable;
+ public boolean completeNow;
+ public boolean tasksNeedsScheduling;
+ public boolean needsRegionFileTest;
+ public Boolean regionFileCalculation;
+
+ }
+
+ static final class CancellableRead implements Cancellable {
+
+ private BiConsumer<CompoundTag, Throwable> callback;
+ private RegionFileIOThread.ChunkDataTask task;
+
+ CancellableRead(final BiConsumer<CompoundTag, Throwable> callback, final RegionFileIOThread.ChunkDataTask task) {
+ this.callback = callback;
+ this.task = task;
+ }
+
+ @Override
+ public boolean cancel() {
+ final BiConsumer<CompoundTag, Throwable> callback = this.callback;
+ final RegionFileIOThread.ChunkDataTask task = this.task;
+
+ if (callback == null || task == null) {
+ return false;
+ }
+
+ this.callback = null;
+ this.task = null;
+
+ final RegionFileIOThread.InProgressRead read = task.inProgressRead;
+
+ // read can be null if no read was scheduled (i.e. no regionfile existed, or the chunk wasn't in the regionfile)
+ return (read != null && read.waiters.remove(callback));
+ }
+ }
+
+ static final class CancellableReads implements Cancellable {
+
+ private Cancellable[] reads;
+
+ protected static final VarHandle READS_HANDLE = ConcurrentUtil.getVarHandle(CancellableReads.class, "reads", Cancellable[].class);
+
+ CancellableReads(final Cancellable[] reads) {
+ this.reads = reads;
+ }
+
+ @Override
+ public boolean cancel() {
+ final Cancellable[] reads = (Cancellable[])READS_HANDLE.getAndSet((CancellableReads)this, (Cancellable[])null);
+
+ if (reads == null) {
+ return false;
+ }
+
+ boolean ret = false;
+
+ for (final Cancellable read : reads) {
+ ret |= read.cancel();
+ }
+
+ return ret;
+ }
+ }
+
+ static final class InProgressRead {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ CompoundTag value;
+ Throwable throwable;
+ final MultiThreadedQueue<BiConsumer<CompoundTag, Throwable>> waiters = new MultiThreadedQueue<>();
+
+ // returns false if already completed (callback not invoked), true if callback was added
+ boolean addToWaiters(final BiConsumer<CompoundTag, Throwable> callback) {
+ return this.waiters.add(callback);
+ }
+
+ void complete(final RegionFileIOThread.ChunkDataTask task, final CompoundTag value, final Throwable throwable) {
+ this.value = value;
+ this.throwable = throwable;
+
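+ // note: pollOrBlockAdds() is expected to prevent further additions once the queue drains,
+ // which is why a late addToWaiters() returns false and the caller completes synchronously
+ // (see the completeNow path in loadDataAsyncInternal)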
+ BiConsumer<CompoundTag, Throwable> consumer;
+ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
+ try {
+ consumer.accept(value, throwable);
+ } catch (final ThreadDeath thr) {
+ throw thr;
+ } catch (final Throwable thr) {
+ LOGGER.error("Callback " + ConcurrentUtil.genericToString(consumer) + " failed to handle chunk data for task " + task.toString(), thr);
+ }
+ }
+ }
+ }
+
+ /**
+ * Class exists to replace {@link Long} usages as keys inside non-fastutil hashtables. The hash for some Long {@code x}
+ * is defined as {@code (x >>> 32) ^ x}. Chunk keys as long values are defined as {@code ((chunkX & 0xFFFFFFFFL) | (chunkZ << 32))},
+ * which means the hashcode as a Long value will be {@code chunkX ^ chunkZ}. Given that most chunks are created within a radius around players,
+ * this will lead to many hash collisions. So, this class uses a better hashing algorithm so that usage of
+ * non-fastutil collections is not degraded.
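+ * For example, the keys for chunks (0, 1) and (1, 0) both hash to 1 as Longs, as do (1, 2) and
+ * (2, 1) - any swapped coordinate pair collides, which {@code HashCommon.mix} avoids.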
+ */
+ public static final class ChunkCoordinate implements Comparable<ChunkCoordinate> {
+
+ public final long key;
+
+ public ChunkCoordinate(final long key) {
+ this.key = key;
+ }
+
+ @Override
+ public int hashCode() {
+ return (int)HashCommon.mix(this.key);
+ }
+
+ @Override
+ public boolean equals(final Object obj) {
+ if (this == obj) {
+ return true;
+ }
+
+ if (!(obj instanceof ChunkCoordinate)) {
+ return false;
+ }
+
+ final ChunkCoordinate other = (ChunkCoordinate)obj;
+
+ return this.key == other.key;
+ }
+
+ // This class is intended for HashMap/ConcurrentHashMap usage, which do treeify bin nodes if the chain
+ // is too large. So we should implement compareTo to help.
+ @Override
+ public int compareTo(final RegionFileIOThread.ChunkCoordinate other) {
+ return Long.compare(this.key, other.key);
+ }
+
+ @Override
+ public String toString() {
+ return new ChunkPos(this.key).toString();
+ }
+ }
+
+ public static abstract class ChunkDataController {
+
+ // ConcurrentHashMap synchronizes per chain, so reduce the chance of task's hashes colliding.
+ protected final ConcurrentHashMap<ChunkCoordinate, ChunkDataTask> tasks = new ConcurrentHashMap<>(8192, 0.10f);
+
+ public final RegionFileType type;
+
+ public ChunkDataController(final RegionFileType type) {
+ this.type = type;
+ }
+
+ public abstract RegionFileStorage getCache();
+
+ public abstract void writeData(final int chunkX, final int chunkZ, final CompoundTag compound) throws IOException;
+
+ public abstract CompoundTag readData(final int chunkX, final int chunkZ) throws IOException;
+
+ public boolean hasTasks() {
+ return !this.tasks.isEmpty();
+ }
+
+ public boolean doesRegionFileNotExist(final int chunkX, final int chunkZ) {
+ return this.getCache().doesRegionFileNotExistNoIO(new ChunkPos(chunkX, chunkZ));
+ }
+
+ public <T> T computeForRegionFile(final int chunkX, final int chunkZ, final boolean existingOnly, final Function<RegionFile, T> function) {
+ final RegionFileStorage cache = this.getCache();
+ final RegionFile regionFile;
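+ // the final argument to getRegionFile is presumed to acquire the regionfile's fileLock
+ // (mirroring computeForRegionFileIfLoaded below), hence the unlock in the finally block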
+ synchronized (cache) {
+ try {
+ regionFile = cache.getRegionFile(new ChunkPos(chunkX, chunkZ), existingOnly, true);
+ } catch (final IOException ex) {
+ throw new RuntimeException(ex);
+ }
+ }
+
+ try {
+ return function.apply(regionFile);
+ } finally {
+ if (regionFile != null) {
+ regionFile.fileLock.unlock();
+ }
+ }
+ }
+
+ public <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function) {
+ final RegionFileStorage cache = this.getCache();
+ final RegionFile regionFile;
+
+ synchronized (cache) {
+ regionFile = cache.getRegionFileIfLoaded(new ChunkPos(chunkX, chunkZ));
+ if (regionFile != null) {
+ regionFile.fileLock.lock();
+ }
+ }
+
+ try {
+ return function.apply(regionFile);
+ } finally {
+ if (regionFile != null) {
+ regionFile.fileLock.unlock();
+ }
+ }
+ }
+ }
+
+ static final class ChunkDataTask implements Runnable {
+
+ protected static final CompoundTag NOTHING_TO_WRITE = new CompoundTag();
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ RegionFileIOThread.InProgressRead inProgressRead;
+ volatile CompoundTag inProgressWrite = NOTHING_TO_WRITE; // only needs to be acquire/release
+
+ boolean failedWrite;
+
+ final ServerLevel world;
+ final int chunkX;
+ final int chunkZ;
+ final RegionFileIOThread.ChunkDataController taskController;
+
+ final PrioritisedExecutor.PrioritisedTask prioritisedTask;
+
+ /*
+ * IO thread will perform reads before writes for a given chunk x and z
+ *
+ * How reads/writes are scheduled:
+ *
+ * If read is scheduled while scheduling write, take no special action and just schedule write
+ * If read is scheduled while scheduling read and no write is scheduled, chain the read task
+ *
+ *
+ * If write is scheduled while scheduling read, use the pending write data and return immediately (so no read is scheduled)
+ * If write is scheduled while scheduling write (ignore read in progress), overwrite the write in progress data
+ *
+ * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
+ * it fails to properly propagate write failures thanks to writes overwriting each other
+ */
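+
+ /*
+ * An illustrative interleaving under the rules above (not from the original source):
+ * 1. scheduleSave(data1) creates this task with inProgressWrite = data1
+ * 2. a read scheduled now completes immediately with data1 - no disk read occurs
+ * 3. scheduleSave(data2) before this task runs replaces inProgressWrite, so only data2 reaches disk
+ */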
+
+ public ChunkDataTask(final ServerLevel world, final int chunkX, final int chunkZ, final RegionFileIOThread.ChunkDataController taskController,
+ final PrioritisedExecutor executor, final PrioritisedExecutor.Priority priority) {
+ this.world = world;
+ this.chunkX = chunkX;
+ this.chunkZ = chunkZ;
+ this.taskController = taskController;
+ this.prioritisedTask = executor.createTask(this, priority);
+ }
+
+ @Override
+ public String toString() {
+ return "Task for world: '" + this.world.getWorld().getName() + "' at (" + this.chunkX + "," + this.chunkZ +
+ ") type: " + this.taskController.type.name() + ", hash: " + this.hashCode();
+ }
+
+ @Override
+ public void run() {
+ final RegionFileIOThread.InProgressRead read = this.inProgressRead;
+ final ChunkCoordinate chunkKey = new ChunkCoordinate(CoordinateUtils.getChunkKey(this.chunkX, this.chunkZ));
+
+ if (read != null) {
+ final boolean[] canRead = new boolean[] { true };
+
+ if (read.waiters.isEmpty()) {
+ // cancelled read? go to task controller to confirm
+ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
+ if (valueInMap == null) {
+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+ }
+ if (valueInMap != ChunkDataTask.this) {
+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+ }
+
+ if (!read.waiters.isEmpty()) { // as per usual IntelliJ is unable to figure out that there are concurrent accesses.
+ return valueInMap;
+ } else {
+ canRead[0] = false;
+ }
+
+ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
+ });
+
+ if (inMap == null) {
+ // read is cancelled - and no write pending, so we're done
+ return;
+ }
+ // if there is a write in progress, we don't actually have to worry about waiters gaining new entries -
+ // the readers will just use the in progress write, so the value in canRead is good to use without
+ // further synchronisation.
+ }
+
+ if (canRead[0]) {
+ CompoundTag compound = null;
+ Throwable throwable = null;
+
+ try {
+ compound = this.taskController.readData(this.chunkX, this.chunkZ);
+ } catch (final ThreadDeath thr) {
+ throw thr;
+ } catch (final Throwable thr) {
+ throwable = thr;
+ LOGGER.error("Failed to read chunk data for task: " + this.toString(), thr);
+ }
+ read.complete(this, compound, throwable);
+ }
+ }
+
+ CompoundTag write = this.inProgressWrite;
+
+ if (write == NOTHING_TO_WRITE) {
+ final ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
+ if (valueInMap == null) {
+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+ }
+ if (valueInMap != ChunkDataTask.this) {
+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+ }
+ return valueInMap.inProgressWrite == NOTHING_TO_WRITE ? null : valueInMap;
+ });
+
+ if (inMap == null) {
+ return; // set the task value to null, indicating we're done
+ } // else: inProgressWrite changed, so now we have something to write
+ }
+
+ for (;;) {
+ write = this.inProgressWrite;
+ final CompoundTag dataWritten = write;
+
+ boolean failedWrite = false;
+
+ try {
+ this.taskController.writeData(this.chunkX, this.chunkZ, write);
+ } catch (final ThreadDeath thr) {
+ throw thr;
+ } catch (final Throwable thr) {
+ if (thr instanceof RegionFileStorage.RegionFileSizeException) {
+ final int maxSize = RegionFile.MAX_CHUNK_SIZE / (1024 * 1024);
+ LOGGER.error("Chunk at (" + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "' exceeds max size of " + maxSize + "MiB, it has been deleted from disk.");
+ } else {
+ failedWrite = thr instanceof IOException;
+ LOGGER.error("Failed to write chunk data for task: " + this.toString(), thr);
+ }
+ }
+
+ final boolean finalFailWrite = failedWrite;
+ final boolean[] done = new boolean[] { false };
+
+ this.taskController.tasks.compute(chunkKey, (final ChunkCoordinate keyInMap, final ChunkDataTask valueInMap) -> {
+ if (valueInMap == null) {
+ throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+ }
+ if (valueInMap != ChunkDataTask.this) {
+ throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+ }
+ if (valueInMap.inProgressWrite == dataWritten) {
+ valueInMap.failedWrite = finalFailWrite;
+ done[0] = true;
+ // keep the data in map if we failed the write so we can try to prevent data loss
+ return finalFailWrite ? valueInMap : null;
+ }
+ // different data than expected, means we need to retry write
+ return valueInMap;
+ });
+
+ if (done[0]) {
+ return;
+ }
+
+ // fetch & write new data
+ continue;
+ }
+ }
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
new file mode 100644
index 0000000000000000000000000000000000000000..de28d6ee71990da74d9deb360fac8bde5adbc918
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/light/LightQueue.java
@@ -0,0 +1,283 @@
+package io.papermc.paper.chunk.system.light;
+
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.starlight.common.light.BlockStarLightEngine;
+import ca.spottedleaf.starlight.common.light.SkyStarLightEngine;
+import ca.spottedleaf.starlight.common.light.StarLightInterface;
+import io.papermc.paper.util.CoordinateUtils;
+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
+import it.unimi.dsi.fastutil.shorts.ShortCollection;
+import it.unimi.dsi.fastutil.shorts.ShortOpenHashSet;
+import net.minecraft.core.BlockPos;
+import net.minecraft.core.SectionPos;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import java.util.ArrayList;
+import java.util.HashSet;
+import java.util.List;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.function.BooleanSupplier;
+
+public final class LightQueue {
+
+ protected final Long2ObjectOpenHashMap<ChunkTasks> chunkTasks = new Long2ObjectOpenHashMap<>();
+ protected final StarLightInterface manager;
+ protected final ServerLevel world;
+
+ public LightQueue(final StarLightInterface manager) {
+ this.manager = manager;
+ this.world = ((ServerLevel)manager.getWorld());
+ }
+
+ public void lowerPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
+ final ChunkTasks task;
+ synchronized (this) {
+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+ if (task != null) {
+ task.lowerPriority(priority);
+ }
+ }
+
+ public void setPriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
+ final ChunkTasks task;
+ synchronized (this) {
+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+ if (task != null) {
+ task.setPriority(priority);
+ }
+ }
+
+ public void raisePriority(final int chunkX, final int chunkZ, final PrioritisedExecutor.Priority priority) {
+ final ChunkTasks task;
+ synchronized (this) {
+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+ if (task != null) {
+ task.raisePriority(priority);
+ }
+ }
+
+ public PrioritisedExecutor.Priority getPriority(final int chunkX, final int chunkZ) {
+ final ChunkTasks task;
+ synchronized (this) {
+ task = this.chunkTasks.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+ if (task != null) {
+ return task.getPriority();
+ }
+
+ return PrioritisedExecutor.Priority.COMPLETING;
+ }
+
+ public boolean isEmpty() {
+ synchronized (this) {
+ return this.chunkTasks.isEmpty();
+ }
+ }
+
+ public ChunkTasks queueBlockChange(final BlockPos pos) {
+ final ChunkTasks tasks;
+ synchronized (this) {
+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
+ });
+ tasks.changedPositions.add(pos.immutable());
+ }
+
+ tasks.schedule();
+
+ return tasks;
+ }
+
+ public ChunkTasks queueSectionChange(final SectionPos pos, final boolean newEmptyValue) {
+ final ChunkTasks tasks;
+ synchronized (this) {
+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
+ });
+
+ if (tasks.changedSectionSet == null) {
+ tasks.changedSectionSet = new Boolean[this.manager.maxSection - this.manager.minSection + 1];
+ }
+ tasks.changedSectionSet[pos.getY() - this.manager.minSection] = Boolean.valueOf(newEmptyValue);
+ }
+
+ tasks.schedule();
+
+ return tasks;
+ }
+
+ public ChunkTasks queueChunkLightTask(final ChunkPos pos, final BooleanSupplier lightTask, final PrioritisedExecutor.Priority priority) {
+ final ChunkTasks tasks;
+ synchronized (this) {
+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this, priority);
+ });
+ if (tasks.lightTasks == null) {
+ tasks.lightTasks = new ArrayList<>();
+ }
+ tasks.lightTasks.add(lightTask);
+ }
+
+ tasks.schedule();
+
+ return tasks;
+ }
+
+ public ChunkTasks queueChunkSkylightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
+ final ChunkTasks tasks;
+ synchronized (this) {
+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
+ });
+
+ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksSky;
+ if (queuedEdges == null) {
+ queuedEdges = tasks.queuedEdgeChecksSky = new ShortOpenHashSet();
+ }
+ queuedEdges.addAll(sections);
+ }
+
+ tasks.schedule();
+
+ return tasks;
+ }
+
+ public ChunkTasks queueChunkBlocklightEdgeCheck(final SectionPos pos, final ShortCollection sections) {
+ final ChunkTasks tasks;
+
+ synchronized (this) {
+ tasks = this.chunkTasks.computeIfAbsent(CoordinateUtils.getChunkKey(pos), (final long keyInMap) -> {
+ return new ChunkTasks(keyInMap, LightQueue.this.manager, LightQueue.this);
+ });
+
+ ShortOpenHashSet queuedEdges = tasks.queuedEdgeChecksBlock;
+ if (queuedEdges == null) {
+ queuedEdges = tasks.queuedEdgeChecksBlock = new ShortOpenHashSet();
+ }
+ queuedEdges.addAll(sections);
+ }
+
+ tasks.schedule();
+
+ return tasks;
+ }
+
+ public void removeChunk(final ChunkPos pos) {
+ final ChunkTasks tasks;
+ synchronized (this) {
+ tasks = this.chunkTasks.remove(CoordinateUtils.getChunkKey(pos));
+ }
+ if (tasks != null && tasks.cancel()) {
+ tasks.onComplete.complete(null);
+ }
+ }
+
+ public static final class ChunkTasks implements Runnable {
+
+ public final CompletableFuture<Void> onComplete = new CompletableFuture<>();
+ public boolean isTicketAdded;
+ public final long chunkCoordinate;
+
+ private final StarLightInterface lightEngine;
+ private final LightQueue queue;
+ private final PrioritisedExecutor.PrioritisedTask task;
+ private final Set<BlockPos> changedPositions = new HashSet<>();
+ private Boolean[] changedSectionSet;
+ private ShortOpenHashSet queuedEdgeChecksSky;
+ private ShortOpenHashSet queuedEdgeChecksBlock;
+ private List<BooleanSupplier> lightTasks;
+
+ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue) {
+ this(chunkCoordinate, lightEngine, queue, PrioritisedExecutor.Priority.NORMAL);
+ }
+
+ public ChunkTasks(final long chunkCoordinate, final StarLightInterface lightEngine, final LightQueue queue,
+ final PrioritisedExecutor.Priority priority) {
+ this.chunkCoordinate = chunkCoordinate;
+ this.lightEngine = lightEngine;
+ this.queue = queue;
+ this.task = queue.world.chunkTaskScheduler.radiusAwareScheduler.createTask(
+ CoordinateUtils.getChunkX(chunkCoordinate), CoordinateUtils.getChunkZ(chunkCoordinate),
+ ChunkStatus.LIGHT.writeRadius, this, priority
+ );
+ }
+
+ public void schedule() {
+ this.task.queue();
+ }
+
+ public boolean cancel() {
+ return this.task.cancel();
+ }
+
+ public PrioritisedExecutor.Priority getPriority() {
+ return this.task.getPriority();
+ }
+
+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
+ this.task.lowerPriority(priority);
+ }
+
+ public void setPriority(final PrioritisedExecutor.Priority priority) {
+ this.task.setPriority(priority);
+ }
+
+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
+ this.task.raisePriority(priority);
+ }
+
+ @Override
+ public void run() {
+ synchronized (this.queue) {
+ this.queue.chunkTasks.remove(this.chunkCoordinate);
+ }
+
+ boolean litChunk = false;
+ if (this.lightTasks != null) {
+ for (final BooleanSupplier run : this.lightTasks) {
+ if (run.getAsBoolean()) {
+ litChunk = true;
+ break;
+ }
+ }
+ }
+
+ final SkyStarLightEngine skyEngine = this.lightEngine.getSkyLightEngine();
+ final BlockStarLightEngine blockEngine = this.lightEngine.getBlockLightEngine();
+ try {
+ final long coordinate = this.chunkCoordinate;
+ final int chunkX = CoordinateUtils.getChunkX(coordinate);
+ final int chunkZ = CoordinateUtils.getChunkZ(coordinate);
+
+ final Set<BlockPos> positions = this.changedPositions;
+ final Boolean[] sectionChanges = this.changedSectionSet;
+
+ if (!litChunk) {
+ if (skyEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
+ skyEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
+ }
+ if (blockEngine != null && (!positions.isEmpty() || sectionChanges != null)) {
+ blockEngine.blocksChangedInChunk(this.lightEngine.getLightAccess(), chunkX, chunkZ, positions, sectionChanges);
+ }
+
+ if (skyEngine != null && this.queuedEdgeChecksSky != null) {
+ skyEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksSky);
+ }
+ if (blockEngine != null && this.queuedEdgeChecksBlock != null) {
+ blockEngine.checkChunkEdges(this.lightEngine.getLightAccess(), chunkX, chunkZ, this.queuedEdgeChecksBlock);
+ }
+ }
+
+ this.onComplete.complete(null);
+ } finally {
+ this.lightEngine.releaseSkyLightEngine(skyEngine);
+ this.lightEngine.releaseBlockLightEngine(blockEngine);
+ }
+ }
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
new file mode 100644
index 0000000000000000000000000000000000000000..d72041aa814ff179e6e29a45dcd359a91d426d47
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/poi/PoiChunk.java
@@ -0,0 +1,213 @@
+package io.papermc.paper.chunk.system.poi;
+
+import com.mojang.logging.LogUtils;
+import com.mojang.serialization.Codec;
+import com.mojang.serialization.DataResult;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
+import io.papermc.paper.util.WorldUtil;
+import net.minecraft.SharedConstants;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.nbt.NbtOps;
+import net.minecraft.nbt.Tag;
+import net.minecraft.resources.RegistryOps;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.entity.ai.village.poi.PoiManager;
+import net.minecraft.world.entity.ai.village.poi.PoiSection;
+import org.slf4j.Logger;
+
+import java.util.Optional;
+
+public final class PoiChunk {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ public final ServerLevel world;
+ public final int chunkX;
+ public final int chunkZ;
+ public final int minSection;
+ public final int maxSection;
+
+ protected final PoiSection[] sections;
+
+ private boolean isDirty;
+ private boolean loaded;
+
+ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection) {
+ this(world, chunkX, chunkZ, minSection, maxSection, new PoiSection[maxSection - minSection + 1]);
+ }
+
+ public PoiChunk(final ServerLevel world, final int chunkX, final int chunkZ, final int minSection, final int maxSection, final PoiSection[] sections) {
+ this.world = world;
+ this.chunkX = chunkX;
+ this.chunkZ = chunkZ;
+ this.minSection = minSection;
+ this.maxSection = maxSection;
+ this.sections = sections;
+ if (this.sections.length != (maxSection - minSection + 1)) {
+ throw new IllegalStateException("Incorrect length used, expected " + (maxSection - minSection + 1) + ", got " + this.sections.length);
+ }
+ }
+
+ public void load() {
+ TickThread.ensureTickThread(this.world, this.chunkX, this.chunkZ, "Loading in poi chunk off-main");
+ if (this.loaded) {
+ return;
+ }
+ this.loaded = true;
+ this.world.chunkSource.getPoiManager().loadInPoiChunk(this);
+ }
+
+ public boolean isLoaded() {
+ return this.loaded;
+ }
+
+ public boolean isEmpty() {
+ for (final PoiSection section : this.sections) {
+ if (section != null && !section.isEmpty()) {
+ return false;
+ }
+ }
+
+ return true;
+ }
+
+ public PoiSection getOrCreateSection(final int chunkY) {
+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
+ final int idx = chunkY - this.minSection;
+ final PoiSection ret = this.sections[idx];
+ if (ret != null) {
+ return ret;
+ }
+
+ final PoiManager poiManager = this.world.getPoiManager();
+ final long key = CoordinateUtils.getChunkSectionKey(this.chunkX, chunkY, this.chunkZ);
+
+ return this.sections[idx] = new PoiSection(() -> {
+ poiManager.setDirty(key);
+ });
+ }
+ throw new IllegalArgumentException("chunkY is out of bounds, chunkY: " + chunkY + " outside [" + this.minSection + "," + this.maxSection + "]");
+ }
+
+ public PoiSection getSection(final int chunkY) {
+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
+ return this.sections[chunkY - this.minSection];
+ }
+ return null;
+ }
+
+ public Optional<PoiSection> getSectionForVanilla(final int chunkY) {
+ if (chunkY >= this.minSection && chunkY <= this.maxSection) {
+ final PoiSection ret = this.sections[chunkY - this.minSection];
+ return ret == null ? Optional.empty() : ret.noAllocateOptional;
+ }
+ return Optional.empty();
+ }
+
+ public boolean isDirty() {
+ return this.isDirty;
+ }
+
+ public void setDirty(final boolean dirty) {
+ this.isDirty = dirty;
+ }
+
+ // returns null if empty
+ public CompoundTag save() {
+ final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
+
+ final CompoundTag ret = new CompoundTag();
+ final CompoundTag sections = new CompoundTag();
+ ret.put("Sections", sections);
+
+ ret.putInt("DataVersion", SharedConstants.getCurrentVersion().getDataVersion().getVersion());
+
+ final ServerLevel world = this.world;
+ final PoiManager poiManager = world.getPoiManager();
+ final int chunkX = this.chunkX;
+ final int chunkZ = this.chunkZ;
+
+ for (int sectionY = this.minSection; sectionY <= this.maxSection; ++sectionY) {
+ final PoiSection chunk = this.sections[sectionY - this.minSection];
+ if (chunk == null || chunk.isEmpty()) {
+ continue;
+ }
+
+ final long key = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
+ // codecs are honestly such a fucking disaster. What the fuck is this trash?
+ final Codec<PoiSection> codec = PoiSection.codec(() -> {
+ poiManager.setDirty(key);
+ });
+
+ final DataResult<Tag> serializedResult = codec.encodeStart(registryOps, chunk);
+ final int finalSectionY = sectionY;
+ final Tag serialized = serializedResult.resultOrPartial((final String description) -> {
+ LOGGER.error("Failed to serialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
+ }).orElse(null);
+ if (serialized == null) {
+ // failed, should be logged from the resultOrPartial
+ continue;
+ }
+
+ sections.put(Integer.toString(sectionY), serialized);
+ }
+
+ return sections.isEmpty() ? null : ret;
+ }
+
+ public static PoiChunk empty(final ServerLevel world, final int chunkX, final int chunkZ) {
+ final PoiChunk ret = new PoiChunk(world, chunkX, chunkZ, WorldUtil.getMinSection(world), WorldUtil.getMaxSection(world));
+ ret.loaded = true;
+ return ret;
+ }
+
+ public static PoiChunk parse(final ServerLevel world, final int chunkX, final int chunkZ, final CompoundTag data) {
+ final PoiChunk ret = empty(world, chunkX, chunkZ);
+
+ final RegistryOps<Tag> registryOps = RegistryOps.create(NbtOps.INSTANCE, world.getPoiManager().registryAccess);
+
+ final CompoundTag sections = data.getCompound("Sections");
+
+ if (sections.isEmpty()) {
+ // nothing to parse
+ return ret;
+ }
+
+ final PoiManager poiManager = world.getPoiManager();
+
+ boolean readAnything = false;
+
+ for (int sectionY = ret.minSection; sectionY <= ret.maxSection; ++sectionY) {
+ final String key = Integer.toString(sectionY);
+ if (!sections.contains(key)) {
+ continue;
+ }
+
+ final long coordinateKey = CoordinateUtils.getChunkSectionKey(chunkX, sectionY, chunkZ);
+ // codecs are honestly such a fucking disaster. What the fuck is this trash?
+ final Codec<PoiSection> codec = PoiSection.codec(() -> {
+ poiManager.setDirty(coordinateKey);
+ });
+
+ final CompoundTag section = sections.getCompound(key);
+ final DataResult<PoiSection> deserializeResult = codec.parse(registryOps, section);
+ final int finalSectionY = sectionY;
+ final PoiSection deserialized = deserializeResult.resultOrPartial((final String description) -> {
+ LOGGER.error("Failed to deserialize poi chunk for world: " + world.getWorld().getName() + ", chunk: (" + chunkX + "," + finalSectionY + "," + chunkZ + "); description: " + description);
+ }).orElse(null);
+
+ if (deserialized == null || deserialized.isEmpty()) {
+ // completely empty, no point in storing this
+ continue;
+ }
+
+ readAnything = true;
+ ret.sections[sectionY - ret.minSection] = deserialized;
+ }
+
+ ret.loaded = !readAnything; // Set loaded to false if we read anything to ensure proper callbacks to PoiManager are made on #load
+
+ return ret;
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..679ed4d53269e1113035b462cf74ab16a231e22e
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkFullTask.java
@@ -0,0 +1,135 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.chunk.system.poi.PoiChunk;
+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import net.minecraft.world.level.chunk.ImposterProtoChunk;
+import net.minecraft.world.level.chunk.LevelChunk;
+import net.minecraft.world.level.chunk.ProtoChunk;
+import org.slf4j.Logger;
+import java.lang.invoke.VarHandle;
+
+public final class ChunkFullTask extends ChunkProgressionTask implements Runnable {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ protected final NewChunkHolder chunkHolder;
+ protected final ChunkAccess fromChunk;
+ protected final PrioritisedExecutor.PrioritisedTask convertToFullTask;
+
+ public ChunkFullTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
+ final NewChunkHolder chunkHolder, final ChunkAccess fromChunk, final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ);
+ this.chunkHolder = chunkHolder;
+ this.fromChunk = fromChunk;
+ this.convertToFullTask = scheduler.createChunkTask(chunkX, chunkZ, this, priority);
+ }
+
+ @Override
+ public ChunkStatus getTargetStatus() {
+ return ChunkStatus.FULL;
+ }
+
+ @Override
+ public void run() {
+ // See Vanilla protoChunkToFullChunk for what this function should be doing
+ final LevelChunk chunk;
+ try {
+ // moved from the load from nbt stage into here
+ final PoiChunk poiChunk = this.chunkHolder.getPoiChunk();
+ if (poiChunk == null) {
+ LOGGER.error("Expected poi chunk to be loaded with chunk for task " + this.toString());
+ } else {
+ poiChunk.load();
+ this.world.getPoiManager().checkConsistency(this.fromChunk);
+ }
+
+ if (this.fromChunk instanceof ImposterProtoChunk wrappedFull) {
+ chunk = wrappedFull.getWrapped();
+ } else {
+ final ServerLevel world = this.world;
+ final ProtoChunk protoChunk = (ProtoChunk)this.fromChunk;
+ chunk = new LevelChunk(this.world, protoChunk, (final LevelChunk unused) -> {
+ ChunkMap.postLoadProtoChunk(world, protoChunk.getEntities(), protoChunk.getPos()); // Paper - rewrite chunk system
+ });
+ }
+
+ chunk.setChunkHolder(this.scheduler.chunkHolderManager.getChunkHolder(this.chunkX, this.chunkZ)); // replaces setFullStatus
+ chunk.runPostLoad();
+ // Unlike Vanilla, we load the entity chunk here, as we load the NBT in empty status (unlike Vanilla)
+ // This brings entity addition back in line with older versions of the game
+ // Since we load the NBT in the empty status, this will never block for I/O
+ this.world.chunkTaskScheduler.chunkHolderManager.getOrCreateEntityChunk(this.chunkX, this.chunkZ, false);
+
+ // we don't need the entitiesInLevel trash, this system doesn't double run callbacks
+ chunk.setLoaded(true);
+ chunk.registerAllBlockEntitiesAfterLevelLoad();
+ chunk.registerTickContainerInLevel(this.world);
+ } catch (final Throwable throwable) {
+ this.complete(null, throwable);
+
+ if (throwable instanceof ThreadDeath) {
+ throw (ThreadDeath)throwable;
+ }
+ return;
+ }
+ this.complete(chunk, null);
+ }
+
+ protected volatile boolean scheduled;
+ protected static final VarHandle SCHEDULED_HANDLE = ConcurrentUtil.getVarHandle(ChunkFullTask.class, "scheduled", boolean.class);
+
+ @Override
+ public boolean isScheduled() {
+ return this.scheduled;
+ }
+
+ @Override
+ public void schedule() {
+ if ((boolean)SCHEDULED_HANDLE.getAndSet((ChunkFullTask)this, true)) {
+ throw new IllegalStateException("Cannot double call schedule()");
+ }
+ this.convertToFullTask.queue();
+ }
+
+ @Override
+ public void cancel() {
+ if (this.convertToFullTask.cancel()) {
+ this.complete(null, null);
+ }
+ }
+
+ @Override
+ public PrioritisedExecutor.Priority getPriority() {
+ return this.convertToFullTask.getPriority();
+ }
+
+ @Override
+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority " + priority);
+ }
+ this.convertToFullTask.lowerPriority(priority);
+ }
+
+ @Override
+ public void setPriority(final PrioritisedExecutor.Priority priority) {
+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority " + priority);
+ }
+ this.convertToFullTask.setPriority(priority);
+ }
+
+ @Override
+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority " + priority);
+ }
+ this.convertToFullTask.raisePriority(priority);
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
new file mode 100644
index 0000000000000000000000000000000000000000..5b446e6ac151f99f64f0c442d0b40b5e251bc4c4
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkHolderManager.java
@@ -0,0 +1,1500 @@
+package io.papermc.paper.chunk.system.scheduling;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.map.SWMRLong2ObjectHashTable;
+import com.google.common.collect.ImmutableList;
+import com.google.gson.JsonArray;
+import com.google.gson.JsonObject;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.chunk.system.RegionizedPlayerChunkLoader;
+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
+import io.papermc.paper.chunk.system.poi.PoiChunk;
+import io.papermc.paper.threadedregions.TickRegions;
+import io.papermc.paper.util.CoordinateUtils;
+import io.papermc.paper.util.TickThread;
+import io.papermc.paper.world.ChunkEntitySlices;
+import it.unimi.dsi.fastutil.longs.Long2ByteLinkedOpenHashMap;
+import it.unimi.dsi.fastutil.longs.Long2ByteMap;
+import it.unimi.dsi.fastutil.longs.Long2IntLinkedOpenHashMap;
+import it.unimi.dsi.fastutil.longs.Long2IntMap;
+import it.unimi.dsi.fastutil.longs.Long2IntOpenHashMap;
+import it.unimi.dsi.fastutil.longs.Long2ObjectMap;
+import it.unimi.dsi.fastutil.longs.Long2ObjectOpenHashMap;
+import it.unimi.dsi.fastutil.longs.LongArrayList;
+import it.unimi.dsi.fastutil.longs.LongIterator;
+import it.unimi.dsi.fastutil.objects.ObjectRBTreeSet;
+import net.minecraft.nbt.CompoundTag;
+import io.papermc.paper.chunk.system.ChunkSystem;
+import net.minecraft.server.MinecraftServer;
+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ChunkLevel;
+import net.minecraft.server.level.FullChunkStatus;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.server.level.Ticket;
+import net.minecraft.server.level.TicketType;
+import net.minecraft.util.SortedArraySet;
+import net.minecraft.util.Unit;
+import net.minecraft.world.level.ChunkPos;
+import org.bukkit.plugin.Plugin;
+import org.slf4j.Logger;
+import java.io.IOException;
+import java.text.DecimalFormat;
+import java.util.ArrayDeque;
+import java.util.ArrayList;
+import java.util.Collection;
+import java.util.Collections;
+import java.util.Iterator;
+import java.util.List;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.TimeUnit;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.concurrent.atomic.AtomicReference;
+import java.util.concurrent.locks.LockSupport;
+import java.util.function.Predicate;
+
+public final class ChunkHolderManager {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ public static final int FULL_LOADED_TICKET_LEVEL = 33;
+ public static final int BLOCK_TICKING_TICKET_LEVEL = 32;
+ public static final int ENTITY_TICKING_TICKET_LEVEL = 31;
+ public static final int MAX_TICKET_LEVEL = ChunkLevel.MAX_LEVEL; // inclusive
+
+ private static final long NO_TIMEOUT_MARKER = Long.MIN_VALUE;
+ private static final long PROBE_MARKER = Long.MIN_VALUE + 1;
+ public final ReentrantAreaLock ticketLockArea;
+
+ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, SortedArraySet<Ticket<?>>> tickets = new java.util.concurrent.ConcurrentHashMap<>();
+ private final ConcurrentHashMap<RegionFileIOThread.ChunkCoordinate, Long2IntOpenHashMap> sectionToChunkToExpireCount = new java.util.concurrent.ConcurrentHashMap<>();
+ final ChunkQueue unloadQueue;
+
+ public boolean processTicketUpdates(final int posX, final int posZ) {
+ final int ticketShift = ThreadedTicketLevelPropagator.SECTION_SHIFT;
+ final int ticketMask = (1 << ticketShift) - 1;
+ final List<ChunkProgressionTask> scheduledTasks = new ArrayList<>();
+ final List<NewChunkHolder> changedFullStatus = new ArrayList<>();
+ final boolean ret;
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
+ ((posX >> ticketShift) - 1) << ticketShift,
+ ((posZ >> ticketShift) - 1) << ticketShift,
+ (((posX >> ticketShift) + 1) << ticketShift) | ticketMask,
+ (((posZ >> ticketShift) + 1) << ticketShift) | ticketMask
+ );
+ try {
+ ret = this.processTicketUpdatesNoLock(posX >> ticketShift, posZ >> ticketShift, scheduledTasks, changedFullStatus);
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ this.addChangedStatuses(changedFullStatus);
+
+ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
+ scheduledTasks.get(i).schedule();
+ }
+
+ return ret;
+ }
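+
+ // Illustrative arithmetic, not part of the original patch: assuming a section shift of 4
+ // (16x16-chunk lock sections), posX = posZ = 53 gives section 53 >> 4 = 3, so the bounds
+ // above lock chunks (3-1)<<4 = 32 through ((3+1)<<4)|15 = 79 on each axis - the full
+ // 3x3 neighbourhood of sections the propagator update may touch.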
+
+ private boolean processTicketUpdatesNoLock(final int sectionX, final int sectionZ, final List<ChunkProgressionTask> scheduledTasks,
+ final List<NewChunkHolder> changedFullStatus) {
+ return this.ticketLevelPropagator.performUpdate(
+ sectionX, sectionZ, this.taskScheduler.schedulingLockArea, scheduledTasks, changedFullStatus
+ );
+ }
+
+ private final SWMRLong2ObjectHashTable<NewChunkHolder> chunkHolders = new SWMRLong2ObjectHashTable<>(16384, 0.25f);
+ // what a disaster of a name
+ // this is a map of removal tick to a map of chunks and the number of tickets a chunk has that are to expire that tick
+ private final Long2ObjectOpenHashMap<Long2IntOpenHashMap> removeTickToChunkExpireTicketCount = new Long2ObjectOpenHashMap<>();
+ private final ServerLevel world;
+ private final ChunkTaskScheduler taskScheduler;
+ private long currentTick;
+
+ private final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = new ArrayDeque<>();
+ private final ObjectRBTreeSet<NewChunkHolder> autoSaveQueue = new ObjectRBTreeSet<>((final NewChunkHolder c1, final NewChunkHolder c2) -> {
+ if (c1 == c2) {
+ return 0;
+ }
+
+ final int saveTickCompare = Long.compare(c1.lastAutoSave, c2.lastAutoSave);
+
+ if (saveTickCompare != 0) {
+ return saveTickCompare;
+ }
+
+ final long coord1 = CoordinateUtils.getChunkKey(c1.chunkX, c1.chunkZ);
+ final long coord2 = CoordinateUtils.getChunkKey(c2.chunkX, c2.chunkZ);
+
+ if (coord1 == coord2) {
+ throw new IllegalStateException("Duplicate chunkholder in auto save queue");
+ }
+
+ return Long.compare(coord1, coord2);
+ });
+
+ public ChunkHolderManager(final ServerLevel world, final ChunkTaskScheduler taskScheduler) {
+ this.world = world;
+ this.taskScheduler = taskScheduler;
+ this.ticketLockArea = new ReentrantAreaLock(taskScheduler.getChunkSystemLockShift());
+ this.unloadQueue = new ChunkQueue(world.getRegionChunkShift());
+ }
+
+ private final AtomicLong statusUpgradeId = new AtomicLong();
+
+ long getNextStatusUpgradeId() {
+ return this.statusUpgradeId.incrementAndGet();
+ }
+
+ public List<ChunkHolder> getOldChunkHolders() {
+ final List<NewChunkHolder> holders = this.getChunkHolders();
+ final List<ChunkHolder> ret = new ArrayList<>(holders.size());
+ for (final NewChunkHolder holder : holders) {
+ ret.add(holder.vanillaChunkHolder);
+ }
+ return ret;
+ }
+
+ public List<NewChunkHolder> getChunkHolders() {
+ final List<NewChunkHolder> ret = new ArrayList<>(this.chunkHolders.size());
+ this.chunkHolders.forEachValue(ret::add);
+ return ret;
+ }
+
+ public int size() {
+ return this.chunkHolders.size();
+ }
+
+ public void close(final boolean save, final boolean halt) {
+ TickThread.ensureTickThread("Closing world off-main");
+ if (halt) {
+ LOGGER.info("Waiting 60s for chunk system to halt for world '" + this.world.getWorld().getName() + "'");
+ if (!this.taskScheduler.halt(true, TimeUnit.SECONDS.toNanos(60L))) {
+ LOGGER.warn("Failed to halt world generation/loading tasks for world '" + this.world.getWorld().getName() + "'");
+ } else {
+ LOGGER.info("Halted chunk system for world '" + this.world.getWorld().getName() + "'");
+ }
+ }
+
+ if (save) {
+ this.saveAllChunks(true, true, true);
+ }
+
+ if (this.world.chunkDataControllerNew.hasTasks() || this.world.entityDataControllerNew.hasTasks() || this.world.poiDataControllerNew.hasTasks()) {
+ RegionFileIOThread.flush();
+ }
+
+ // kill regionfile cache
+ try {
+ this.world.chunkDataControllerNew.getCache().close();
+ } catch (final IOException ex) {
+ LOGGER.error("Failed to close chunk regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
+ }
+ try {
+ this.world.entityDataControllerNew.getCache().close();
+ } catch (final IOException ex) {
+ LOGGER.error("Failed to close entity regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
+ }
+ try {
+ this.world.poiDataControllerNew.getCache().close();
+ } catch (final IOException ex) {
+ LOGGER.error("Failed to close poi regionfile cache for world '" + this.world.getWorld().getName() + "'", ex);
+ }
+ }
+
+ void ensureInAutosave(final NewChunkHolder holder) {
+ if (!this.autoSaveQueue.contains(holder)) {
+ holder.lastAutoSave = MinecraftServer.currentTick;
+ this.autoSaveQueue.add(holder);
+ }
+ }
+
+ public void autoSave() {
+ final List<NewChunkHolder> reschedule = new ArrayList<>();
+ final long currentTick = MinecraftServer.currentTickLong;
+ final long maxSaveTime = currentTick - this.world.paperConfig().chunks.autoSaveInterval.value();
+ for (int autoSaved = 0; autoSaved < this.world.paperConfig().chunks.maxAutoSaveChunksPerTick && !this.autoSaveQueue.isEmpty();) {
+ final NewChunkHolder holder = this.autoSaveQueue.first();
+
+ if (holder.lastAutoSave > maxSaveTime) {
+ break;
+ }
+
+ this.autoSaveQueue.remove(holder);
+
+ holder.lastAutoSave = currentTick;
+ if (holder.save(false, false) != null) {
+ ++autoSaved;
+ }
+
+ if (holder.getChunkStatus().isOrAfter(FullChunkStatus.FULL)) {
+ reschedule.add(holder);
+ }
+ }
+
+ for (final NewChunkHolder holder : reschedule) {
+ if (holder.getChunkStatus().isOrAfter(FullChunkStatus.FULL)) {
+ this.autoSaveQueue.add(holder);
+ }
+ }
+ }
+
+ public void saveAllChunks(final boolean flush, final boolean shutdown, final boolean logProgress) {
+ final List<NewChunkHolder> holders = this.getChunkHolders();
+
+ if (logProgress) {
+ LOGGER.info("Saving all chunkholders for world '" + this.world.getWorld().getName() + "'");
+ }
+
+ final DecimalFormat format = new DecimalFormat("#0.00");
+
+ int saved = 0;
+
+ long start = System.nanoTime();
+ long lastLog = start;
+ boolean needsFlush = false;
+ final int flushInterval = 50;
+
+ int savedChunk = 0;
+ int savedEntity = 0;
+ int savedPoi = 0;
+
+ for (int i = 0, len = holders.size(); i < len; ++i) {
+ final NewChunkHolder holder = holders.get(i);
+ try {
+ final NewChunkHolder.SaveStat saveStat = holder.save(shutdown, false);
+ if (saveStat != null) {
+ ++saved;
+ needsFlush = flush;
+ if (saveStat.savedChunk()) {
+ ++savedChunk;
+ }
+ if (saveStat.savedEntityChunk()) {
+ ++savedEntity;
+ }
+ if (saveStat.savedPoiChunk()) {
+ ++savedPoi;
+ }
+ }
+ } catch (final ThreadDeath thr) {
+ throw thr;
+ } catch (final Throwable thr) {
+ LOGGER.error("Failed to save chunk (" + holder.chunkX + "," + holder.chunkZ + ") in world '" + this.world.getWorld().getName() + "'", thr);
+ }
+ if (needsFlush && (saved % flushInterval) == 0) {
+ needsFlush = false;
+ RegionFileIOThread.partialFlush(flushInterval / 2);
+ }
+ if (logProgress) {
+ final long currTime = System.nanoTime();
+ if ((currTime - lastLog) > TimeUnit.SECONDS.toNanos(10L)) {
+ lastLog = currTime;
+ LOGGER.info("Saved " + saved + " chunks (" + format.format((double)(i+1)/(double)len * 100.0) + "%) in world '" + this.world.getWorld().getName() + "'");
+ }
+ }
+ }
+ if (flush) {
+ RegionFileIOThread.flush();
+ if (this.world.paperConfig().chunks.flushRegionsOnSave) {
+ try {
+ this.world.chunkSource.chunkMap.regionFileCache.flush();
+ } catch (IOException ex) {
+ LOGGER.error("Exception when flushing regions in world {}", this.world.getWorld().getName(), ex);
+ }
+ }
+ }
+ if (logProgress) {
+ LOGGER.info("Saved " + savedChunk + " block chunks, " + savedEntity + " entity chunks, " + savedPoi + " poi chunks in world '" + this.world.getWorld().getName() + "' in " + format.format(1.0E-9 * (System.nanoTime() - start)) + "s");
+ }
+ }
+
+ protected final ThreadedTicketLevelPropagator ticketLevelPropagator = new ThreadedTicketLevelPropagator() {
+ @Override
+ protected void processLevelUpdates(final Long2ByteLinkedOpenHashMap updates) {
+ // first the necessary chunkholders must be created, so just update the ticket levels
+ for (final Iterator<Long2ByteMap.Entry> iterator = updates.long2ByteEntrySet().fastIterator(); iterator.hasNext();) {
+ final Long2ByteMap.Entry entry = iterator.next();
+ final long key = entry.getLongKey();
+ final int newLevel = convertBetweenTicketLevels((int)entry.getByteValue());
+
+ NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
+ if (current == null && newLevel > MAX_TICKET_LEVEL) {
+ // not loaded and it shouldn't be loaded!
+ iterator.remove();
+ continue;
+ }
+
+ final int currentLevel = current == null ? MAX_TICKET_LEVEL + 1 : current.getCurrentTicketLevel();
+ if (currentLevel == newLevel) {
+ // nothing to do
+ iterator.remove();
+ continue;
+ }
+
+ if (current == null) {
+ // must create
+ current = ChunkHolderManager.this.createChunkHolder(key);
+ synchronized (ChunkHolderManager.this.chunkHolders) {
+ ChunkHolderManager.this.chunkHolders.put(key, current);
+ }
+ current.updateTicketLevel(newLevel);
+ } else {
+ current.updateTicketLevel(newLevel);
+ }
+ }
+ }
+
+ @Override
+ protected void processSchedulingUpdates(final Long2ByteLinkedOpenHashMap updates, final List<ChunkProgressionTask> scheduledTasks,
+ final List<NewChunkHolder> changedFullStatus) {
+ final List<ChunkProgressionTask> prev = CURRENT_TICKET_UPDATE_SCHEDULING.get();
+ CURRENT_TICKET_UPDATE_SCHEDULING.set(scheduledTasks);
+ try {
+ for (final LongIterator iterator = updates.keySet().iterator(); iterator.hasNext();) {
+ final long key = iterator.nextLong();
+ final NewChunkHolder current = ChunkHolderManager.this.chunkHolders.get(key);
+
+ if (current == null) {
+ throw new IllegalStateException("Expected chunk holder to be created");
+ }
+
+ current.processTicketLevelUpdate(scheduledTasks, changedFullStatus);
+ }
+ } finally {
+ CURRENT_TICKET_UPDATE_SCHEDULING.set(prev);
+ }
+ }
+ };
+ // function for converting between ticket levels and propagator levels and vice versa
+ // the problem is the ticket level propagator will propagate from a set source down to zero, whereas mojang expects
+ // levels to propagate from a set value up to a maximum value. so we need to convert the levels we put into the propagator
+ // and the levels we get out of the propagator
+
+ public static int convertBetweenTicketLevels(final int level) {
+ return ChunkLevel.MAX_LEVEL - level + 1;
+ }
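+
+ // Illustrative sketch, not part of the original patch: the conversion is its own inverse,
+ // so the same function maps ticket levels to propagator levels and back:
+ //   convertBetweenTicketLevels(convertBetweenTicketLevels(x))
+ //       == MAX_LEVEL - (MAX_LEVEL - x + 1) + 1 == x
+ // e.g. FULL_LOADED_TICKET_LEVEL (33) maps to propagator level MAX_LEVEL - 32, and feeding
+ // that value back through the function yields 33 again.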
+
+ public String getTicketDebugString(final long coordinate) {
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
+ try {
+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
+
+ return tickets != null ? tickets.first().toString() : "no_ticket";
+ } finally {
+ if (ticketLock != null) {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+ }
+
+ public Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> getTicketsCopy() {
+ final Long2ObjectOpenHashMap<SortedArraySet<Ticket<?>>> ret = new Long2ObjectOpenHashMap<>();
+ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
+ final int sectionShift = this.taskScheduler.getChunkSystemLockShift();
+ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
+ sections.computeIfAbsent(
+ CoordinateUtils.getChunkKey(
+ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
+ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
+ ),
+ (final long keyInMap) -> {
+ return new ArrayList<>();
+ }
+ ).add(coord);
+ }
+
+ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
+ iterator.hasNext();) {
+ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
+ final long sectionKey = entry.getLongKey();
+ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
+
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
+ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
+ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
+ );
+ try {
+ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(coord);
+ if (tickets == null) {
+ // removed before we acquired lock
+ continue;
+ }
+ ret.put(coord.key, new SortedArraySet<>(tickets));
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+
+ return ret;
+ }
+
+ public Collection<Plugin> getPluginChunkTickets(int x, int z) {
+ ImmutableList.Builder<Plugin> ret;
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(x, z);
+ try {
+ final long coordinate = CoordinateUtils.getChunkKey(x, z);
+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(new RegionFileIOThread.ChunkCoordinate(coordinate));
+
+ if (tickets == null) {
+ return Collections.emptyList();
+ }
+
+ ret = ImmutableList.builder();
+ for (Ticket<?> ticket : tickets) {
+ if (ticket.getType() == TicketType.PLUGIN_TICKET) {
+ ret.add((Plugin)ticket.key);
+ }
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ return ret.build();
+ }
+
+ protected final void updateTicketLevel(final long coordinate, final int ticketLevel) {
+ if (ticketLevel > ChunkLevel.MAX_LEVEL) {
+ this.ticketLevelPropagator.removeSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate));
+ } else {
+ this.ticketLevelPropagator.setSource(CoordinateUtils.getChunkX(coordinate), CoordinateUtils.getChunkZ(coordinate), convertBetweenTicketLevels(ticketLevel));
+ }
+ }
+
+ private static int getTicketLevelAt(SortedArraySet<Ticket<?>> tickets) {
+ return !tickets.isEmpty() ? tickets.first().getTicketLevel() : MAX_TICKET_LEVEL + 1;
+ }
+
+ public <T> boolean addTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level,
+ final T identifier) {
+ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
+ }
+
+ public <T> boolean addTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level,
+ final T identifier) {
+ return this.addTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
+ }
+
+ private void addExpireCount(final int chunkX, final int chunkZ) {
+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+
+ final int sectionShift = this.world.getRegionChunkShift();
+ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
+ chunkX >> sectionShift,
+ chunkZ >> sectionShift
+ ));
+
+ this.sectionToChunkToExpireCount.computeIfAbsent(sectionKey, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
+ return new Long2IntOpenHashMap();
+ }).addTo(chunkKey, 1);
+ }
+
+ private void removeExpireCount(final int chunkX, final int chunkZ) {
+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+
+ final int sectionShift = this.world.getRegionChunkShift();
+ final RegionFileIOThread.ChunkCoordinate sectionKey = new RegionFileIOThread.ChunkCoordinate(CoordinateUtils.getChunkKey(
+ chunkX >> sectionShift,
+ chunkZ >> sectionShift
+ ));
+
+ final Long2IntOpenHashMap removeCounts = this.sectionToChunkToExpireCount.get(sectionKey);
+ final int prevCount = removeCounts.addTo(chunkKey, -1);
+
+ if (prevCount == 1) {
+ removeCounts.remove(chunkKey);
+ if (removeCounts.isEmpty()) {
+ this.sectionToChunkToExpireCount.remove(sectionKey);
+ }
+ }
+ }
+
+ // supposed to return true if the ticket was added and did not replace another
+ // but, we always return false if the ticket cannot be added
+ public <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
+ return this.addTicketAtLevel(type, chunk, level, identifier, true);
+ }
+
+ <T> boolean addTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
+ final long removeDelay = type.timeout <= 0 ? NO_TIMEOUT_MARKER : type.timeout;
+ if (level > MAX_TICKET_LEVEL) {
+ return false;
+ }
+
+ final int chunkX = CoordinateUtils.getChunkX(chunk);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
+ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
+ final Ticket<T> ticket = new Ticket<>(type, level, identifier, removeDelay);
+
+ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
+ try {
+ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.computeIfAbsent(chunkCoord, (final RegionFileIOThread.ChunkCoordinate keyInMap) -> {
+ return SortedArraySet.create(4);
+ });
+
+ final int levelBefore = getTicketLevelAt(ticketsAtChunk);
+ final Ticket<T> current = (Ticket<T>)ticketsAtChunk.replace(ticket);
+ final int levelAfter = getTicketLevelAt(ticketsAtChunk);
+
+ if (current != ticket) {
+ final long oldRemoveDelay = current.removeDelay;
+ if (removeDelay != oldRemoveDelay) {
+ if (oldRemoveDelay != NO_TIMEOUT_MARKER && removeDelay == NO_TIMEOUT_MARKER) {
+ this.removeExpireCount(chunkX, chunkZ);
+ } else if (oldRemoveDelay == NO_TIMEOUT_MARKER) {
+ // since old != new, we have that NO_TIMEOUT_MARKER != new
+ this.addExpireCount(chunkX, chunkZ);
+ }
+ }
+ } else {
+ if (removeDelay != NO_TIMEOUT_MARKER) {
+ this.addExpireCount(chunkX, chunkZ);
+ }
+ }
+
+ if (levelBefore != levelAfter) {
+ this.updateTicketLevel(chunk, levelAfter);
+ }
+
+ return current == ticket;
+ } finally {
+ if (ticketLock != null) {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+ }
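+
+ // Illustrative usage sketch, not part of the original patch ('manager' and 'plugin' are
+ // hypothetical), showing the return semantics described above:
+ //   final long coord = CoordinateUtils.getChunkKey(10, -3);
+ //   manager.addTicketAtLevel(TicketType.PLUGIN_TICKET, coord, 31, plugin); // true: fresh add
+ //   manager.addTicketAtLevel(TicketType.PLUGIN_TICKET, coord, 31, plugin); // false: replaced an equal ticket
+ //   manager.addTicketAtLevel(TicketType.PLUGIN_TICKET, coord, MAX_TICKET_LEVEL + 1, plugin); // false: rejected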
+
+ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final ChunkPos chunkPos, final int level, final T identifier) {
+ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkPos), level, identifier);
+ }
+
+ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final int chunkX, final int chunkZ, final int level, final T identifier) {
+ return this.removeTicketAtLevel(type, CoordinateUtils.getChunkKey(chunkX, chunkZ), level, identifier);
+ }
+
+ public <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier) {
+ return this.removeTicketAtLevel(type, chunk, level, identifier, true);
+ }
+
+ <T> boolean removeTicketAtLevel(final TicketType<T> type, final long chunk, final int level, final T identifier, final boolean lock) {
+ if (level > MAX_TICKET_LEVEL) {
+ return false;
+ }
+
+ final int chunkX = CoordinateUtils.getChunkX(chunk);
+ final int chunkZ = CoordinateUtils.getChunkZ(chunk);
+ final RegionFileIOThread.ChunkCoordinate chunkCoord = new RegionFileIOThread.ChunkCoordinate(chunk);
+ final Ticket<T> probe = new Ticket<>(type, level, identifier, PROBE_MARKER);
+
+ final ReentrantAreaLock.Node ticketLock = lock ? this.ticketLockArea.lock(chunkX, chunkZ) : null;
+ try {
+ final SortedArraySet<Ticket<?>> ticketsAtChunk = this.tickets.get(chunkCoord);
+ if (ticketsAtChunk == null) {
+ return false;
+ }
+
+ final int oldLevel = getTicketLevelAt(ticketsAtChunk);
+ final Ticket<T> ticket = (Ticket<T>)ticketsAtChunk.removeAndGet(probe);
+
+ if (ticket == null) {
+ return false;
+ }
+
+ final int newLevel = getTicketLevelAt(ticketsAtChunk);
+ // we should not change the ticket levels while the target region may be ticking
+ if (oldLevel != newLevel) {
+ // Delay unload chunk patch originally by Aikar, updated to 1.20 by jpenilla
+ // these days, the patch is mostly useful to keep chunks ticking when players teleport
+ // so that their pets can teleport with them as well.
+ final long delayTimeout = this.world.paperConfig().chunks.delayChunkUnloadsBy.ticks();
+ final TicketType<ChunkPos> toAdd;
+ final long timeout;
+ if (type == RegionizedPlayerChunkLoader.REGION_PLAYER_TICKET && delayTimeout > 0) {
+ toAdd = TicketType.DELAY_UNLOAD;
+ timeout = delayTimeout;
+ } else {
+ toAdd = TicketType.UNKNOWN;
+ // always expect UNKNOWN to be > 1, but just in case
+ timeout = Math.max(1, toAdd.timeout);
+ }
+ final Ticket<ChunkPos> unknownTicket = new Ticket<>(toAdd, level, new ChunkPos(chunk), timeout);
+ if (ticketsAtChunk.add(unknownTicket)) {
+ this.addExpireCount(chunkX, chunkZ);
+ } else {
+ throw new IllegalStateException("Should have been able to add " + unknownTicket + " to " + ticketsAtChunk);
+ }
+ }
+
+ final long removeDelay = ticket.removeDelay;
+ if (removeDelay != NO_TIMEOUT_MARKER) {
+ this.removeExpireCount(chunkX, chunkZ);
+ }
+
+ return true;
+ } finally {
+ if (ticketLock != null) {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+ }
+
+ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
+ public <T, V> void addAndRemoveTickets(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
+ try {
+ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
+ this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false);
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+
+ // atomic with respect to all add/remove/addandremove ticket calls for the given chunk
+ public <T, V> boolean addIfRemovedTicket(final long chunk, final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(CoordinateUtils.getChunkX(chunk), CoordinateUtils.getChunkZ(chunk));
+ try {
+ if (this.removeTicketAtLevel(removeType, chunk, removeLevel, removeIdentifier, false)) {
+ this.addTicketAtLevel(addType, chunk, addLevel, addIdentifier, false);
+ return true;
+ }
+ return false;
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
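+
+ // Illustrative sketch, not part of the original patch ('coord' and 'poiLoadId' are
+ // hypothetical values): the atomic swap these two methods provide is what loadPoiChunk
+ // and getOrCreateEntityChunk rely on further below, e.g.
+ //   manager.addAndRemoveTickets(coord,
+ //       TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(coord), // add first ...
+ //       TicketType.POI_LOAD, MAX_TICKET_LEVEL, poiLoadId           // ... then remove
+ //   );
+ // Because the add happens before the remove under a single ticket lock hold, the chunk's
+ // ticket level never transiently decays, so no unload can begin between the two operations.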
+
+ public <T> void removeAllTicketsFor(final TicketType<T> ticketType, final int ticketLevel, final T ticketIdentifier) {
+ if (ticketLevel > MAX_TICKET_LEVEL) {
+ return;
+ }
+
+ final Long2ObjectOpenHashMap<List<RegionFileIOThread.ChunkCoordinate>> sections = new Long2ObjectOpenHashMap<>();
+ final int sectionShift = this.taskScheduler.getChunkSystemLockShift();
+ for (final RegionFileIOThread.ChunkCoordinate coord : this.tickets.keySet()) {
+ sections.computeIfAbsent(
+ CoordinateUtils.getChunkKey(
+ CoordinateUtils.getChunkX(coord.key) >> sectionShift,
+ CoordinateUtils.getChunkZ(coord.key) >> sectionShift
+ ),
+ (final long keyInMap) -> {
+ return new ArrayList<>();
+ }
+ ).add(coord);
+ }
+
+ for (final Iterator<Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>>> iterator = sections.long2ObjectEntrySet().fastIterator();
+ iterator.hasNext();) {
+ final Long2ObjectMap.Entry<List<RegionFileIOThread.ChunkCoordinate>> entry = iterator.next();
+ final long sectionKey = entry.getLongKey();
+ final List<RegionFileIOThread.ChunkCoordinate> coordinates = entry.getValue();
+
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
+ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
+ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
+ );
+ try {
+ for (final RegionFileIOThread.ChunkCoordinate coord : coordinates) {
+ this.removeTicketAtLevel(ticketType, coord.key, ticketLevel, ticketIdentifier, false);
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+ }
+
+ public void tick() {
+ final int sectionShift = this.world.getRegionChunkShift();
+
+ final Predicate<Ticket<?>> expireNow = (final Ticket<?> ticket) -> {
+ if (ticket.removeDelay == NO_TIMEOUT_MARKER) {
+ return false;
+ }
+ return --ticket.removeDelay <= 0L;
+ };
+
+ for (final Iterator<RegionFileIOThread.ChunkCoordinate> iterator = this.sectionToChunkToExpireCount.keySet().iterator(); iterator.hasNext();) {
+ final RegionFileIOThread.ChunkCoordinate section = iterator.next();
+ final long sectionKey = section.key;
+
+ if (!this.sectionToChunkToExpireCount.containsKey(section)) {
+ // removed concurrently
+ continue;
+ }
+
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(
+ CoordinateUtils.getChunkX(sectionKey) << sectionShift,
+ CoordinateUtils.getChunkZ(sectionKey) << sectionShift
+ );
+
+ try {
+ final Long2IntOpenHashMap chunkToExpireCount = this.sectionToChunkToExpireCount.get(section);
+ if (chunkToExpireCount == null) {
+ // lost to some race
+ continue;
+ }
+
+ for (final Iterator<Long2IntMap.Entry> iterator1 = chunkToExpireCount.long2IntEntrySet().fastIterator(); iterator1.hasNext();) {
+ final Long2IntMap.Entry entry = iterator1.next();
+
+ final long chunkKey = entry.getLongKey();
+ final int expireCount = entry.getIntValue();
+
+ final RegionFileIOThread.ChunkCoordinate chunk = new RegionFileIOThread.ChunkCoordinate(chunkKey);
+
+ final SortedArraySet<Ticket<?>> tickets = this.tickets.get(chunk);
+ final int levelBefore = getTicketLevelAt(tickets);
+
+ final int sizeBefore = tickets.size();
+ tickets.removeIf(expireNow);
+ final int sizeAfter = tickets.size();
+ final int levelAfter = getTicketLevelAt(tickets);
+
+ if (tickets.isEmpty()) {
+ this.tickets.remove(chunk);
+ }
+ if (levelBefore != levelAfter) {
+ this.updateTicketLevel(chunkKey, levelAfter);
+ }
+
+ final int newExpireCount = expireCount - (sizeBefore - sizeAfter);
+
+ if (newExpireCount == expireCount) {
+ continue;
+ }
+
+ if (newExpireCount != 0) {
+ entry.setValue(newExpireCount);
+ } else {
+ iterator1.remove();
+ }
+ }
+
+ if (chunkToExpireCount.isEmpty()) {
+ this.sectionToChunkToExpireCount.remove(section);
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+ }
+
+ this.processTicketUpdates();
+ }
+
+ public NewChunkHolder getChunkHolder(final int chunkX, final int chunkZ) {
+ return this.chunkHolders.get(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+
+ public NewChunkHolder getChunkHolder(final long position) {
+ return this.chunkHolders.get(position);
+ }
+
+ public void raisePriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
+ if (chunkHolder != null) {
+ chunkHolder.raisePriority(priority);
+ }
+ }
+
+ public void setPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
+ if (chunkHolder != null) {
+ chunkHolder.setPriority(priority);
+ }
+ }
+
+ public void lowerPriority(final int x, final int z, final PrioritisedExecutor.Priority priority) {
+ final NewChunkHolder chunkHolder = this.getChunkHolder(x, z);
+ if (chunkHolder != null) {
+ chunkHolder.lowerPriority(priority);
+ }
+ }
+
+ private NewChunkHolder createChunkHolder(final long position) {
+ final NewChunkHolder ret = new NewChunkHolder(this.world, CoordinateUtils.getChunkX(position), CoordinateUtils.getChunkZ(position), this.taskScheduler);
+
+ ChunkSystem.onChunkHolderCreate(this.world, ret.vanillaChunkHolder);
+ ret.vanillaChunkHolder.onChunkAdd();
+
+ return ret;
+ }
+
+ // because this function creates the chunk holder without a ticket, it is the caller's responsibility to ensure
+ // the chunk holder eventually unloads. this should only be used to avoid using processTicketUpdates to create chunkholders,
+ // as processTicketUpdates may call plugin logic; in every other case a ticket is appropriate
+ private NewChunkHolder getOrCreateChunkHolder(final int chunkX, final int chunkZ) {
+ return this.getOrCreateChunkHolder(CoordinateUtils.getChunkKey(chunkX, chunkZ));
+ }
+
+ private NewChunkHolder getOrCreateChunkHolder(final long position) {
+ final int chunkX = CoordinateUtils.getChunkX(position);
+ final int chunkZ = CoordinateUtils.getChunkZ(position);
+
+ if (!this.ticketLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
+ throw new IllegalStateException("Must hold ticket level update lock!");
+ }
+ if (!this.taskScheduler.schedulingLockArea.isHeldByCurrentThread(chunkX, chunkZ)) {
+ throw new IllegalStateException("Must hold scheduler lock!!");
+ }
+
+ // we could just acquire these locks, but...
+ // must own the locks because the caller needs to ensure that no unload can occur AFTER this function returns
+
+ NewChunkHolder current = this.chunkHolders.get(position);
+ if (current != null) {
+ return current;
+ }
+
+ current = this.createChunkHolder(position);
+ synchronized (this.chunkHolders) {
+ this.chunkHolders.put(position, current);
+ }
+
+ return current;
+ }
+
+ private final AtomicLong entityLoadCounter = new AtomicLong();
+
+ public ChunkEntitySlices getOrCreateEntityChunk(final int chunkX, final int chunkZ, final boolean transientChunk) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create entity chunk off-main");
+ ChunkEntitySlices ret;
+
+ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
+ if (current != null && (ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
+ return ret;
+ }
+
+ final AtomicBoolean isCompleted = new AtomicBoolean();
+ final Thread waiter = Thread.currentThread();
+ final Long entityLoadId = Long.valueOf(this.entityLoadCounter.getAndIncrement());
+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
+ final ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ);
+ try {
+ this.addTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
+ final ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ);
+ try {
+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
+ if ((ret = current.getEntityChunk()) != null && (transientChunk || !ret.isTransient())) {
+ this.removeTicketAtLevel(TicketType.ENTITY_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, entityLoadId);
+ return ret;
+ }
+
+ if (current.isEntityChunkNBTLoaded()) {
+ isCompleted.setPlain(true);
+ } else {
+ loadTask = current.getOrLoadEntityData((final GenericDataLoadTask.TaskResult<CompoundTag, Throwable> result) -> {
+ if (!transientChunk) {
+ isCompleted.set(true);
+ LockSupport.unpark(waiter);
+ }
+ });
+ final ChunkLoadTask.EntityDataLoadTask entityLoad = current.getEntityDataLoadTask();
+
+ if (entityLoad != null && !transientChunk) {
+ entityLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
+ }
+ }
+ } finally {
+ this.taskScheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ if (loadTask != null) {
+ loadTask.schedule();
+ }
+
+ if (!transientChunk) {
+ // Note: no need to busy wait on the chunk queue, entity load will complete off-main
+ boolean interrupted = false;
+ while (!isCompleted.get()) {
+ interrupted |= Thread.interrupted();
+ LockSupport.park();
+ }
+
+ if (interrupted) {
+ Thread.currentThread().interrupt();
+ }
+ }
+
+ // now that the entity data is loaded, we can load it into the world
+
+ ret = current.loadInEntityChunk(transientChunk);
+
+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+ this.addAndRemoveTickets(chunkKey,
+ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
+ TicketType.ENTITY_LOAD, MAX_TICKET_LEVEL, entityLoadId
+ );
+
+ return ret;
+ }
+
+ public PoiChunk getPoiChunkIfLoaded(final int chunkX, final int chunkZ, final boolean checkLoadInCallback) {
+ final NewChunkHolder holder = this.getChunkHolder(chunkX, chunkZ);
+ if (holder != null) {
+ final PoiChunk ret = holder.getPoiChunk();
+ return ret == null || (checkLoadInCallback && !ret.isLoaded()) ? null : ret;
+ }
+ return null;
+ }
+
+ private final AtomicLong poiLoadCounter = new AtomicLong();
+
+ public PoiChunk loadPoiChunk(final int chunkX, final int chunkZ) {
+ TickThread.ensureTickThread(this.world, chunkX, chunkZ, "Cannot create poi chunk off-main");
+ PoiChunk ret;
+
+ NewChunkHolder current = this.getChunkHolder(chunkX, chunkZ);
+ if (current != null && (ret = current.getPoiChunk()) != null) {
+ if (!ret.isLoaded()) {
+ ret.load();
+ }
+ return ret;
+ }
+
+ final AtomicReference<PoiChunk> completed = new AtomicReference<>();
+ final AtomicBoolean isCompleted = new AtomicBoolean();
+ final Thread waiter = Thread.currentThread();
+ final Long poiLoadId = Long.valueOf(this.poiLoadCounter.getAndIncrement());
+ NewChunkHolder.GenericDataLoadTaskCallback loadTask = null;
+ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
+ try {
+ // Folia - use area based lock to reduce contention
+ this.addTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
+ final ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock.Node schedulingLock = this.taskScheduler.schedulingLockArea.lock(chunkX, chunkZ); // Folia - use area based lock to reduce contention
+ try {
+ current = this.getOrCreateChunkHolder(chunkX, chunkZ);
+ if (current.isPoiChunkLoaded()) {
+ this.removeTicketAtLevel(TicketType.POI_LOAD, chunkX, chunkZ, MAX_TICKET_LEVEL, poiLoadId);
+ return current.getPoiChunk();
+ }
+
+ loadTask = current.getOrLoadPoiData((final GenericDataLoadTask.TaskResult<PoiChunk, Throwable> result) -> {
+ completed.setPlain(result.left());
+ isCompleted.set(true);
+ LockSupport.unpark(waiter);
+ });
+ final ChunkLoadTask.PoiDataLoadTask poiLoad = current.getPoiDataLoadTask();
+
+ if (poiLoad != null) {
+ poiLoad.raisePriority(PrioritisedExecutor.Priority.BLOCKING);
+ }
+ } finally {
+ this.taskScheduler.schedulingLockArea.unlock(schedulingLock); // Folia - use area based lock to reduce contention
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock); // Folia - use area based lock to reduce contention
+ }
+
+ if (loadTask != null) {
+ loadTask.schedule();
+ }
+
+ // Note: no need to busy wait on the chunk queue, poi load will complete off-main
+
+ boolean interrupted = false;
+ while (!isCompleted.get()) {
+ interrupted |= Thread.interrupted();
+ LockSupport.park();
+ }
+
+ if (interrupted) {
+ Thread.currentThread().interrupt();
+ }
+
+ ret = completed.getPlain();
+
+ ret.load();
+
+ final long chunkKey = CoordinateUtils.getChunkKey(chunkX, chunkZ);
+ this.addAndRemoveTickets(chunkKey,
+ TicketType.UNKNOWN, MAX_TICKET_LEVEL, new ChunkPos(chunkX, chunkZ),
+ TicketType.POI_LOAD, MAX_TICKET_LEVEL, poiLoadId
+ );
+
+ return ret;
+ }
+
+ void addChangedStatuses(final List<NewChunkHolder> changedFullStatus) {
+ if (changedFullStatus.isEmpty()) {
+ return;
+ }
+ if (!TickThread.isTickThread()) {
+ this.taskScheduler.scheduleChunkTask(() -> {
+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = ChunkHolderManager.this.pendingFullLoadUpdate;
+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
+ }
+
+ ChunkHolderManager.this.processPendingFullUpdate();
+ }, PrioritisedExecutor.Priority.HIGHEST);
+ } else {
+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
+ }
+ }
+ }
+
+ private void removeChunkHolder(final NewChunkHolder holder) {
+ holder.killed = true;
+ holder.vanillaChunkHolder.onChunkRemove();
+ this.autoSaveQueue.remove(holder);
+ ChunkSystem.onChunkHolderDelete(this.world, holder.vanillaChunkHolder);
+ synchronized (this.chunkHolders) {
+ this.chunkHolders.remove(CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ));
+ }
+ }
+
+ // note: never call while inside the chunk system, this will absolutely break everything
+ public void processUnloads() {
+ TickThread.ensureTickThread("Cannot unload chunks off-main");
+
+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
+ throw new IllegalStateException("Cannot unload chunks recursively");
+ }
+ final int sectionShift = this.unloadQueue.coordinateShift; // sectionShift <= lock shift
+ final List<ChunkQueue.SectionToUnload> unloadSectionsForRegion = this.unloadQueue.retrieveForAllRegions();
+ int unloadCountTentative = 0;
+ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
+ final ChunkQueue.UnloadSection section
+ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
+
+ if (section == null) {
+ // removed concurrently
+ continue;
+ }
+
+ // technically reading the size field is unsafe, and it may be incorrect.
+ // We assume that the error here cumulatively goes away over many ticks. If it did not, then it is possible
+ // for chunks to never unload or not unload fast enough.
+ unloadCountTentative += section.chunks.size();
+ }
+
+ if (unloadCountTentative <= 0) {
+ // no work to do
+ return;
+ }
+
+ // Note: the previous behaviour of processing ticket updates while holding the lock has been dropped here, as it was racy.
+ // But we do need to process updates here so that any ticket addition that is synchronised before this call is not missed.
+ this.processTicketUpdates();
+
+ final int toUnloadCount = Math.max(50, (int)(unloadCountTentative * 0.05));
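+ // (illustrative, not part of the original patch: ~4000 tentative unloads -> 5% = 200
+ //  processed this call; small queues are clamped up to 50 so the backlog still drains promptly)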
+ int processedCount = 0;
+
+ for (final ChunkQueue.SectionToUnload sectionRef : unloadSectionsForRegion) {
+ final List<NewChunkHolder> stage1 = new ArrayList<>();
+ final List<NewChunkHolder.UnloadState> stage2 = new ArrayList<>();
+
+ final int sectionLowerX = sectionRef.sectionX() << sectionShift;
+ final int sectionLowerZ = sectionRef.sectionZ() << sectionShift;
+
+ // stage 1: set up for stage 2 while holding critical locks
+ ReentrantAreaLock.Node ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
+ try {
+ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
+ try {
+ final ChunkQueue.UnloadSection section
+ = this.unloadQueue.getSectionUnsynchronized(sectionRef.sectionX(), sectionRef.sectionZ());
+
+ if (section == null) {
+ // removed concurrently
+ continue;
+ }
+
+ // collect the holders to run stage 1 on
+ final int sectionCount = section.chunks.size();
+
+ if ((sectionCount + processedCount) <= toUnloadCount) {
+ // we can just drain the entire section
+
+ for (final LongIterator iterator = section.chunks.iterator(); iterator.hasNext();) {
+ final NewChunkHolder holder = this.chunkHolders.get(iterator.nextLong());
+ if (holder == null) {
+ throw new IllegalStateException();
+ }
+ stage1.add(holder);
+ }
+
+ // remove section
+ this.unloadQueue.removeSection(sectionRef.sectionX(), sectionRef.sectionZ());
+ } else {
+ // processedCount + len = toUnloadCount
+ // we cannot drain the entire section
+ for (int i = 0, len = toUnloadCount - processedCount; i < len; ++i) {
+ final NewChunkHolder holder = this.chunkHolders.get(section.chunks.removeFirstLong());
+ if (holder == null) {
+ throw new IllegalStateException();
+ }
+ stage1.add(holder);
+ }
+ }
+
+ // run stage 1
+ for (int i = 0, len = stage1.size(); i < len; ++i) {
+ final NewChunkHolder chunkHolder = stage1.get(i);
+ if (chunkHolder.isSafeToUnload() != null) {
+ LOGGER.error("Chunkholder " + chunkHolder + " is not safe to unload but is inside the unload queue?");
+ continue;
+ }
+ final NewChunkHolder.UnloadState state = chunkHolder.unloadStage1();
+ if (state == null) {
+ // can unload immediately
+ this.removeChunkHolder(chunkHolder);
+ continue;
+ }
+ stage2.add(state);
+ }
+ } finally {
+ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ // stage 2: invoke expensive unload logic, designed to run without locks thanks to stage 1
+ final List<NewChunkHolder> stage3 = new ArrayList<>(stage2.size());
+
+ final Boolean before = this.blockTicketUpdates();
+ try {
+ for (int i = 0, len = stage2.size(); i < len; ++i) {
+ final NewChunkHolder.UnloadState state = stage2.get(i);
+ final NewChunkHolder holder = state.holder();
+
+ holder.unloadStage2(state);
+ stage3.add(holder);
+ }
+ } finally {
+ this.unblockTicketUpdates(before);
+ }
+
+ // stage 3: actually attempt to remove the chunk holders
+ ticketLock = this.ticketLockArea.lock(sectionLowerX, sectionLowerZ);
+ try {
+ final ReentrantAreaLock.Node scheduleLock = this.taskScheduler.schedulingLockArea.lock(sectionLowerX, sectionLowerZ);
+ try {
+ for (int i = 0, len = stage3.size(); i < len; ++i) {
+ final NewChunkHolder holder = stage3.get(i);
+
+ if (holder.unloadStage3()) {
+ this.removeChunkHolder(holder);
+ } else {
+ // add cooldown so the next unload check is not immediately next tick
+ this.addTicketAtLevel(TicketType.UNLOAD_COOLDOWN, CoordinateUtils.getChunkKey(holder.chunkX, holder.chunkZ), MAX_TICKET_LEVEL, Unit.INSTANCE, false);
+ }
+ }
+ } finally {
+ this.taskScheduler.schedulingLockArea.unlock(scheduleLock);
+ }
+ } finally {
+ this.ticketLockArea.unlock(ticketLock);
+ }
+
+ processedCount += stage1.size();
+
+ if (processedCount >= toUnloadCount) {
+ break;
+ }
+ }
+ }
+
+ public enum TicketOperationType {
+ ADD, REMOVE, ADD_IF_REMOVED, ADD_AND_REMOVE
+ }
+
+ public static record TicketOperation<T, V> (
+ TicketOperationType op, long chunkCoord,
+ TicketType<T> ticketType, int ticketLevel, T identifier,
+ TicketType<V> ticketType2, int ticketLevel2, V identifier2
+ ) {
+
+ private TicketOperation(TicketOperationType op, long chunkCoord,
+ TicketType<T> ticketType, int ticketLevel, T identifier) {
+ this(op, chunkCoord, ticketType, ticketLevel, identifier, null, 0, null);
+ }
+
+ public static <T> TicketOperation<T, T> addOp(final ChunkPos chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return addOp(CoordinateUtils.getChunkKey(chunk), type, ticketLevel, identifier);
+ }
+
+ public static <T> TicketOperation<T, T> addOp(final int chunkX, final int chunkZ, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return addOp(CoordinateUtils.getChunkKey(chunkX, chunkZ), type, ticketLevel, identifier);
+ }
+
+ public static <T> TicketOperation<T, T> addOp(final long chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return new TicketOperation<>(TicketOperationType.ADD, chunk, type, ticketLevel, identifier);
+ }
+
+ public static <T> TicketOperation<T, T> removeOp(final ChunkPos chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return removeOp(CoordinateUtils.getChunkKey(chunk), type, ticketLevel, identifier);
+ }
+
+ public static <T> TicketOperation<T, T> removeOp(final int chunkX, final int chunkZ, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return removeOp(CoordinateUtils.getChunkKey(chunkX, chunkZ), type, ticketLevel, identifier);
+ }
+
+ public static <T> TicketOperation<T, T> removeOp(final long chunk, final TicketType<T> type, final int ticketLevel, final T identifier) {
+ return new TicketOperation<>(TicketOperationType.REMOVE, chunk, type, ticketLevel, identifier);
+ }
+
+ public static <T, V> TicketOperation<T, V> addIfRemovedOp(final long chunk,
+ final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
+ return new TicketOperation<>(
+ TicketOperationType.ADD_IF_REMOVED, chunk, addType, addLevel, addIdentifier,
+ removeType, removeLevel, removeIdentifier
+ );
+ }
+
+ public static <T, V> TicketOperation<T, V> addAndRemove(final long chunk,
+ final TicketType<T> addType, final int addLevel, final T addIdentifier,
+ final TicketType<V> removeType, final int removeLevel, final V removeIdentifier) {
+ return new TicketOperation<>(
+ TicketOperationType.ADD_AND_REMOVE, chunk, addType, addLevel, addIdentifier,
+ removeType, removeLevel, removeIdentifier
+ );
+ }
+ }
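+
+ // Illustrative usage sketch, not part of the original patch ('manager' and 'plugin' are
+ // hypothetical): operations can be built up and handed to performTicketUpdates below:
+ //   final List<TicketOperation<?, ?>> ops = List.of(
+ //       TicketOperation.addOp(10, 20, TicketType.PLUGIN_TICKET, 31, plugin),
+ //       TicketOperation.removeOp(10, 21, TicketType.PLUGIN_TICKET, 31, plugin)
+ //   );
+ //   manager.performTicketUpdates(ops);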
+
+ private <T, V> boolean processTicketOp(TicketOperation<T, V> operation) {
+ boolean ret = false;
+ switch (operation.op) {
+ case ADD: {
+ ret |= this.addTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
+ break;
+ }
+ case REMOVE: {
+ ret |= this.removeTicketAtLevel(operation.ticketType, operation.chunkCoord, operation.ticketLevel, operation.identifier);
+ break;
+ }
+ case ADD_IF_REMOVED: {
+ ret |= this.addIfRemovedTicket(
+ operation.chunkCoord,
+ operation.ticketType, operation.ticketLevel, operation.identifier,
+ operation.ticketType2, operation.ticketLevel2, operation.identifier2
+ );
+ break;
+ }
+ case ADD_AND_REMOVE: {
+ ret = true;
+ this.addAndRemoveTickets(
+ operation.chunkCoord,
+ operation.ticketType, operation.ticketLevel, operation.identifier,
+ operation.ticketType2, operation.ticketLevel2, operation.identifier2
+ );
+ break;
+ }
+ }
+
+ return ret;
+ }
+
+ public void performTicketUpdates(final Collection<TicketOperation<?, ?>> operations) {
+ for (final TicketOperation<?, ?> operation : operations) {
+ this.processTicketOp(operation);
+ }
+ }
+
+ private final ThreadLocal<Boolean> BLOCK_TICKET_UPDATES = ThreadLocal.withInitial(() -> {
+ return Boolean.FALSE;
+ });
+
+ public Boolean blockTicketUpdates() {
+ final Boolean ret = BLOCK_TICKET_UPDATES.get();
+ BLOCK_TICKET_UPDATES.set(Boolean.TRUE);
+ return ret;
+ }
+
+ public void unblockTicketUpdates(final Boolean before) {
+ BLOCK_TICKET_UPDATES.set(before);
+ }
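+
+ // Illustrative sketch, not part of the original patch: the save/restore pattern callers
+ // are expected to follow (processUnloads above uses exactly this shape):
+ //   final Boolean before = manager.blockTicketUpdates();
+ //   try {
+ //       // ... work that must not re-enter ticket level processing ...
+ //   } finally {
+ //       manager.unblockTicketUpdates(before);
+ //   }
+ // Restoring the previous value instead of unconditionally writing FALSE keeps nested
+ // block/unblock pairs on the same thread correct.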
+
+ public boolean processTicketUpdates() {
+ return this.processTicketUpdates(true, true, null);
+ }
+
+ private static final ThreadLocal<List<ChunkProgressionTask>> CURRENT_TICKET_UPDATE_SCHEDULING = new ThreadLocal<>();
+
+ static List<ChunkProgressionTask> getCurrentTicketUpdateScheduling() {
+ return CURRENT_TICKET_UPDATE_SCHEDULING.get();
+ }
+
+ private boolean processTicketUpdates(final boolean checkLocks, final boolean processFullUpdates, List<ChunkProgressionTask> scheduledTasks) {
+ TickThread.ensureTickThread("Cannot process ticket levels off-main");
+ if (BLOCK_TICKET_UPDATES.get() == Boolean.TRUE) {
+ throw new IllegalStateException("Cannot update ticket level while unloading chunks or updating entity manager");
+ }
+
+ List<NewChunkHolder> changedFullStatus = null;
+
+ final boolean isTickThread = TickThread.isTickThread();
+
+ boolean ret = false;
+ final boolean canProcessFullUpdates = processFullUpdates & isTickThread;
+ final boolean canProcessScheduling = scheduledTasks == null;
+
+ if (this.ticketLevelPropagator.hasPendingUpdates()) {
+ if (scheduledTasks == null) {
+ scheduledTasks = new ArrayList<>();
+ }
+ changedFullStatus = new ArrayList<>();
+
+ ret |= this.ticketLevelPropagator.performUpdates(
+ this.ticketLockArea, this.taskScheduler.schedulingLockArea,
+ scheduledTasks, changedFullStatus
+ );
+ }
+
+ if (changedFullStatus != null) {
+ this.addChangedStatuses(changedFullStatus);
+ }
+
+ if (canProcessScheduling && scheduledTasks != null) {
+ for (int i = 0, len = scheduledTasks.size(); i < len; ++i) {
+ scheduledTasks.get(i).schedule();
+ }
+ }
+
+ if (canProcessFullUpdates) {
+ ret |= this.processPendingFullUpdate();
+ }
+
+ return ret;
+ }
+
+ // only call on tick thread
+ protected final boolean processPendingFullUpdate() {
+ final ArrayDeque<NewChunkHolder> pendingFullLoadUpdate = this.pendingFullLoadUpdate;
+
+ boolean ret = false;
+
+ List<NewChunkHolder> changedFullStatus = new ArrayList<>();
+
+ NewChunkHolder holder;
+ while ((holder = pendingFullLoadUpdate.poll()) != null) {
+ ret |= holder.handleFullStatusChange(changedFullStatus);
+
+ if (!changedFullStatus.isEmpty()) {
+ for (int i = 0, len = changedFullStatus.size(); i < len; ++i) {
+ pendingFullLoadUpdate.add(changedFullStatus.get(i));
+ }
+ changedFullStatus.clear();
+ }
+ }
+
+ return ret;
+ }
+
+ public JsonObject getDebugJsonForWatchdog() {
+ return this.getDebugJsonNoLock();
+ }
+
+ private JsonObject getDebugJsonNoLock() {
+ final JsonObject ret = new JsonObject();
+ ret.addProperty("current_tick", Long.valueOf(this.currentTick));
+
+ final JsonArray unloadQueue = new JsonArray();
+ ret.add("unload_queue", unloadQueue);
+ ret.addProperty("lock_shift", Integer.valueOf(this.taskScheduler.getChunkSystemLockShift()));
+ ret.addProperty("ticket_shift", Integer.valueOf(ThreadedTicketLevelPropagator.SECTION_SHIFT));
+ ret.addProperty("region_shift", Integer.valueOf(this.world.getRegionChunkShift()));
+ for (final ChunkQueue.SectionToUnload section : this.unloadQueue.retrieveForAllRegions()) {
+ final JsonObject sectionJson = new JsonObject();
+ unloadQueue.add(sectionJson);
+ sectionJson.addProperty("sectionX", section.sectionX());
+ sectionJson.addProperty("sectionZ", section.sectionZ());
+ sectionJson.addProperty("order", section.order());
+
+ final JsonArray coordinates = new JsonArray();
+ sectionJson.add("coordinates", coordinates);
+
+ final ChunkQueue.UnloadSection actualSection = this.unloadQueue.getSectionUnsynchronized(section.sectionX(), section.sectionZ());
+ for (final LongIterator iterator = actualSection.chunks.iterator(); iterator.hasNext();) {
+ final long coordinate = iterator.nextLong();
+
+ final JsonObject coordinateJson = new JsonObject();
+ coordinates.add(coordinateJson);
+
+ coordinateJson.addProperty("chunkX", Integer.valueOf(CoordinateUtils.getChunkX(coordinate)));
+ coordinateJson.addProperty("chunkZ", Integer.valueOf(CoordinateUtils.getChunkZ(coordinate)));
+ }
+ }
+
+ final JsonArray holders = new JsonArray();
+ ret.add("chunkholders", holders);
+
+ for (final NewChunkHolder holder : this.getChunkHolders()) {
+ holders.add(holder.getDebugJson());
+ }
+
+ // TODO
+ /*
+ final JsonArray removeTickToChunkExpireTicketCount = new JsonArray();
+ ret.add("remove_tick_to_chunk_expire_ticket_count", removeTickToChunkExpireTicketCount);
+
+ for (final Long2ObjectMap.Entry<Long2IntOpenHashMap> tickEntry : this.removeTickToChunkExpireTicketCount.long2ObjectEntrySet()) {
+ final long tick = tickEntry.getLongKey();
+ final Long2IntOpenHashMap coordinateToCount = tickEntry.getValue();
+
+ final JsonObject tickJson = new JsonObject();
+ removeTickToChunkExpireTicketCount.add(tickJson);
+
+ tickJson.addProperty("tick", Long.valueOf(tick));
+
+ final JsonArray tickEntries = new JsonArray();
+ tickJson.add("entries", tickEntries);
+
+ for (final Long2IntMap.Entry entry : coordinateToCount.long2IntEntrySet()) {
+ final long coordinate = entry.getLongKey();
+ final int count = entry.getIntValue();
+
+ final JsonObject entryJson = new JsonObject();
+ tickEntries.add(entryJson);
+
+ entryJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
+ entryJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
+ entryJson.addProperty("count", Integer.valueOf(count));
+ }
+ }
+
+ final JsonArray allTicketsJson = new JsonArray();
+ ret.add("tickets", allTicketsJson);
+
+ for (final Long2ObjectMap.Entry<SortedArraySet<Ticket<?>>> coordinateTickets : this.tickets.long2ObjectEntrySet()) {
+ final long coordinate = coordinateTickets.getLongKey();
+ final SortedArraySet<Ticket<?>> tickets = coordinateTickets.getValue();
+
+ final JsonObject coordinateJson = new JsonObject();
+ allTicketsJson.add(coordinateJson);
+
+ coordinateJson.addProperty("chunkX", Long.valueOf(CoordinateUtils.getChunkX(coordinate)));
+ coordinateJson.addProperty("chunkZ", Long.valueOf(CoordinateUtils.getChunkZ(coordinate)));
+
+ final JsonArray ticketsSerialized = new JsonArray();
+ coordinateJson.add("tickets", ticketsSerialized);
+
+ for (final Ticket<?> ticket : tickets) {
+ final JsonObject ticketSerialized = new JsonObject();
+ ticketsSerialized.add(ticketSerialized);
+
+ ticketSerialized.addProperty("type", ticket.getType().toString());
+ ticketSerialized.addProperty("level", Integer.valueOf(ticket.getTicketLevel()));
+ ticketSerialized.addProperty("identifier", Objects.toString(ticket.key));
+ ticketSerialized.addProperty("remove_tick", Long.valueOf(ticket.removalTick));
+ }
+ }
+ */
+
+ return ret;
+ }
+
+ public JsonObject getDebugJson() {
+ return this.getDebugJsonNoLock(); // Folia - use area based lock to reduce contention
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..53ddd7e9ac05e6a9eb809f329796e6d4f6bb2ab1
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLightTask.java
@@ -0,0 +1,181 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.starlight.common.light.StarLightEngine;
+import ca.spottedleaf.starlight.common.light.StarLightInterface;
+import io.papermc.paper.chunk.system.light.LightQueue;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import net.minecraft.world.level.chunk.ProtoChunk;
+import org.apache.logging.log4j.LogManager;
+import org.apache.logging.log4j.Logger;
+import java.util.function.BooleanSupplier;
+
+public final class ChunkLightTask extends ChunkProgressionTask {
+
+ private static final Logger LOGGER = LogManager.getLogger();
+
+ protected final ChunkAccess fromChunk;
+
+ private final LightTaskPriorityHolder priorityHolder;
+
+ public ChunkLightTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
+ final ChunkAccess chunk, final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ);
+ if (!PrioritisedExecutor.Priority.isValidPriority(priority)) {
+ throw new IllegalArgumentException("Invalid priority " + priority);
+ }
+ this.priorityHolder = new LightTaskPriorityHolder(priority, this);
+ this.fromChunk = chunk;
+ }
+
+ @Override
+ public boolean isScheduled() {
+ return this.priorityHolder.isScheduled();
+ }
+
+ @Override
+ public ChunkStatus getTargetStatus() {
+ return ChunkStatus.LIGHT;
+ }
+
+ @Override
+ public void schedule() {
+ this.priorityHolder.schedule();
+ }
+
+ @Override
+ public void cancel() {
+ this.priorityHolder.cancel();
+ }
+
+ @Override
+ public PrioritisedExecutor.Priority getPriority() {
+ return this.priorityHolder.getPriority();
+ }
+
+ @Override
+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
+ this.priorityHolder.raisePriority(priority);
+ }
+
+ @Override
+ public void setPriority(final PrioritisedExecutor.Priority priority) {
+ this.priorityHolder.setPriority(priority);
+ }
+
+ @Override
+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
+ this.priorityHolder.raisePriority(priority);
+ }
+
+ private static final class LightTaskPriorityHolder extends PriorityHolder {
+
+ protected final ChunkLightTask task;
+
+ protected LightTaskPriorityHolder(final PrioritisedExecutor.Priority priority, final ChunkLightTask task) {
+ super(priority);
+ this.task = task;
+ }
+
+ @Override
+ protected void cancelScheduled() {
+ final ChunkLightTask task = this.task;
+ task.complete(null, null);
+ }
+
+ @Override
+ protected PrioritisedExecutor.Priority getScheduledPriority() {
+ final ChunkLightTask task = this.task;
+ return task.world.getChunkSource().getLightEngine().theLightEngine.lightQueue.getPriority(task.chunkX, task.chunkZ);
+ }
+
+ @Override
+ protected void scheduleTask(final PrioritisedExecutor.Priority priority) {
+ final ChunkLightTask task = this.task;
+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
+ final LightQueue lightQueue = starLightInterface.lightQueue;
+ lightQueue.queueChunkLightTask(new ChunkPos(task.chunkX, task.chunkZ), new LightTask(starLightInterface, task), priority);
+ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
+ }
+
+ @Override
+ protected void lowerPriorityScheduled(final PrioritisedExecutor.Priority priority) {
+ final ChunkLightTask task = this.task;
+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
+ final LightQueue lightQueue = starLightInterface.lightQueue;
+ lightQueue.lowerPriority(task.chunkX, task.chunkZ, priority);
+ }
+
+ @Override
+ protected void setPriorityScheduled(final PrioritisedExecutor.Priority priority) {
+ final ChunkLightTask task = this.task;
+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
+ final LightQueue lightQueue = starLightInterface.lightQueue;
+ lightQueue.setPriority(task.chunkX, task.chunkZ, priority);
+ }
+
+ @Override
+ protected void raisePriorityScheduled(final PrioritisedExecutor.Priority priority) {
+ final ChunkLightTask task = this.task;
+ final StarLightInterface starLightInterface = task.world.getChunkSource().getLightEngine().theLightEngine;
+ final LightQueue lightQueue = starLightInterface.lightQueue;
+ lightQueue.raisePriority(task.chunkX, task.chunkZ, priority);
+ }
+ }
+
+ private static final class LightTask implements BooleanSupplier {
+
+ protected final StarLightInterface lightEngine;
+ protected final ChunkLightTask task;
+
+ public LightTask(final StarLightInterface lightEngine, final ChunkLightTask task) {
+ this.lightEngine = lightEngine;
+ this.task = task;
+ }
+
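+ // Returns false only if the task lost the race with a cancellation; on failure the task is
+ // completed exceptionally and true is still returned, since the task did execute.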
+ @Override
+ public boolean getAsBoolean() {
+ final ChunkLightTask task = this.task;
+ // executed on light thread
+ if (!task.priorityHolder.markExecuting()) {
+ // cancelled
+ return false;
+ }
+
+ try {
+ final Boolean[] emptySections = StarLightEngine.getEmptySectionsForChunk(task.fromChunk);
+
+ if (task.fromChunk.isLightCorrect() && task.fromChunk.getStatus().isOrAfter(ChunkStatus.LIGHT)) {
+ this.lightEngine.forceLoadInChunk(task.fromChunk, emptySections);
+ this.lightEngine.checkChunkEdges(task.chunkX, task.chunkZ);
+ } else {
+ task.fromChunk.setLightCorrect(false);
+ this.lightEngine.lightChunk(task.fromChunk, emptySections);
+ task.fromChunk.setLightCorrect(true);
+ }
+ // we need to advance status
+ if (task.fromChunk instanceof ProtoChunk chunk && chunk.getStatus() == ChunkStatus.LIGHT.getParent()) {
+ chunk.setStatus(ChunkStatus.LIGHT);
+ }
+ } catch (final Throwable thr) {
+ if (!(thr instanceof ThreadDeath)) {
+ LOGGER.fatal("Failed to light chunk " + task.fromChunk.getPos().toString() + " in world '" + this.lightEngine.getWorld().getWorld().getName() + "'", thr);
+ }
+
+ task.complete(null, thr);
+
+ if (thr instanceof ThreadDeath) {
+ throw (ThreadDeath)thr;
+ }
+
+ return true;
+ }
+
+ task.complete(task.fromChunk, null);
+ return true;
+ }
+ }
+}
diff --git a/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..f0b03a05387e38ca6933b1ea6fe59b537d5535b9
--- /dev/null
+++ b/src/main/java/io/papermc/paper/chunk/system/scheduling/ChunkLoadTask.java
@@ -0,0 +1,484 @@
+package io.papermc.paper.chunk.system.scheduling;
+
+import ca.spottedleaf.concurrentutil.collection.MultiThreadedQueue;
+import ca.spottedleaf.concurrentutil.executor.standard.PrioritisedExecutor;
+import ca.spottedleaf.concurrentutil.lock.ReentrantAreaLock;
+import ca.spottedleaf.concurrentutil.util.ConcurrentUtil;
+import ca.spottedleaf.dataconverter.minecraft.MCDataConverter;
+import ca.spottedleaf.dataconverter.minecraft.datatypes.MCTypeRegistry;
+import com.mojang.logging.LogUtils;
+import io.papermc.paper.chunk.system.io.RegionFileIOThread;
+import io.papermc.paper.chunk.system.poi.PoiChunk;
+import net.minecraft.SharedConstants;
+import net.minecraft.core.registries.Registries;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import net.minecraft.world.level.chunk.ProtoChunk;
+import net.minecraft.world.level.chunk.UpgradeData;
+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
+import net.minecraft.world.level.chunk.storage.EntityStorage;
+import net.minecraft.world.level.levelgen.blending.BlendingData;
+import org.slf4j.Logger;
+import java.lang.invoke.VarHandle;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.function.Consumer;
+
+public final class ChunkLoadTask extends ChunkProgressionTask {
+
+ private static final Logger LOGGER = LogUtils.getClassLogger();
+
+ private final NewChunkHolder chunkHolder;
+ private final ChunkDataLoadTask loadTask;
+
+ private volatile boolean cancelled;
+ private NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
+ private NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
+ private GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> loadResult;
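+ // Each of the three sub-tasks must account for exactly one decrement, whether it completes,
+ // is cancelled, or is never scheduled at all; otherwise the counter never reaches zero and
+ // the load can never complete.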
+ private final AtomicInteger taskCountToComplete = new AtomicInteger(3); // one for poi, one for entity, and one for chunk data
+
+ protected ChunkLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX, final int chunkZ,
+ final NewChunkHolder chunkHolder, final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ);
+ this.chunkHolder = chunkHolder;
+ this.loadTask = new ChunkDataLoadTask(scheduler, world, chunkX, chunkZ, priority);
+ this.loadTask.addCallback((final GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> result) -> {
+ ChunkLoadTask.this.loadResult = result; // must be written before the decrement in tryCompleteLoad()
+ ChunkLoadTask.this.tryCompleteLoad();
+ });
+ }
+
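+ // Invoked once per sub-task (entity data, poi data, chunk data); the load completes when the
+ // counter reaches zero. Cancelled loads complete with a null chunk and null throwable.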
+ private void tryCompleteLoad() {
+ if (this.taskCountToComplete.decrementAndGet() == 0) {
+ final GenericDataLoadTask.TaskResult<ChunkAccess, Throwable> result = this.cancelled ? null : this.loadResult; // only read after the decrement
+ ChunkLoadTask.this.complete(result == null ? null : result.left(), result == null ? null : result.right());
+ }
+ }
+
+ @Override
+ public ChunkStatus getTargetStatus() {
+ return ChunkStatus.EMPTY;
+ }
+
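+ // written while holding the scheduling lock in schedule(), and read under the same lock in
+ // cancel() so that cancellation knows whether the entity/poi sub-tasks were ever scheduled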
+ private boolean scheduled;
+
+ @Override
+ public boolean isScheduled() {
+ return this.scheduled;
+ }
+
+ @Override
+ public void schedule() {
+ final NewChunkHolder.GenericDataLoadTaskCallback entityLoadTask;
+ final NewChunkHolder.GenericDataLoadTaskCallback poiLoadTask;
+
+ final Consumer<GenericDataLoadTask.TaskResult<?, ?>> scheduleLoadTask = (final GenericDataLoadTask.TaskResult<?, ?> result) -> {
+ ChunkLoadTask.this.tryCompleteLoad();
+ };
+
+ // NOTE: it is IMPOSSIBLE for getOrLoadEntityData/getOrLoadPoiData to complete synchronously, because
+ // they must schedule a task to off main or to on main to complete
+ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ if (this.scheduled) {
+ throw new IllegalStateException("schedule() called twice");
+ }
+ this.scheduled = true;
+ if (this.cancelled) {
+ return;
+ }
+ if (!this.chunkHolder.isEntityChunkNBTLoaded()) {
+ entityLoadTask = this.chunkHolder.getOrLoadEntityData((Consumer)scheduleLoadTask);
+ } else {
+ entityLoadTask = null;
+ this.taskCountToComplete.getAndDecrement(); // we know the chunk load is not done here, as it is not scheduled
+ }
+
+ if (!this.chunkHolder.isPoiChunkLoaded()) {
+ poiLoadTask = this.chunkHolder.getOrLoadPoiData((Consumer)scheduleLoadTask);
+ } else {
+ poiLoadTask = null;
+ this.taskCountToComplete.getAndDecrement(); // we know the chunk load is not done here, as it is not scheduled
+ }
+
+ this.entityLoadTask = entityLoadTask;
+ this.poiLoadTask = poiLoadTask;
+ } finally {
+ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ if (entityLoadTask != null) {
+ entityLoadTask.schedule();
+ }
+
+ if (poiLoadTask != null) {
+ poiLoadTask.schedule();
+ }
+
+ this.loadTask.schedule(false);
+ }
+
+ @Override
+ public void cancel() {
+ // must be before load task access, so we can synchronise with the writes to the fields
+ final boolean scheduled;
+ final ReentrantAreaLock.Node schedulingLock = this.scheduler.schedulingLockArea.lock(this.chunkX, this.chunkZ);
+ try {
+ // must read field here, as it may be written later concurrently -
+ // we need to know if we scheduled _before_ cancellation
+ scheduled = this.scheduled;
+ this.cancelled = true;
+ } finally {
+ this.scheduler.schedulingLockArea.unlock(schedulingLock);
+ }
+
+ /*
+ Note: The entityLoadTask/poiLoadTask do not complete when cancelled,
+ so we need to manually try to complete in those cases
+ It is also important to note that we set the cancelled field first, just in case
+ the chunk load task attempts to complete with a non-null value
+ */
+
+ if (scheduled) {
+ // since we scheduled, we need to cancel the tasks
+ if (this.entityLoadTask != null) {
+ if (this.entityLoadTask.cancel()) {
+ this.tryCompleteLoad();
+ }
+ }
+ if (this.poiLoadTask != null) {
+ if (this.poiLoadTask.cancel()) {
+ this.tryCompleteLoad();
+ }
+ }
+ } else {
+ // since nothing was scheduled, we need to decrement the task count here ourselves
+
+ // for entity load task
+ this.tryCompleteLoad();
+
+ // for poi load task
+ this.tryCompleteLoad();
+ }
+ this.loadTask.cancel();
+ }
+
+ @Override
+ public PrioritisedExecutor.Priority getPriority() {
+ return this.loadTask.getPriority();
+ }
+
+ @Override
+ public void lowerPriority(final PrioritisedExecutor.Priority priority) {
+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
+ if (entityLoad != null) {
+ entityLoad.lowerPriority(priority);
+ }
+
+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
+
+ if (poiLoad != null) {
+ poiLoad.lowerPriority(priority);
+ }
+
+ this.loadTask.lowerPriority(priority);
+ }
+
+ @Override
+ public void setPriority(final PrioritisedExecutor.Priority priority) {
+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
+ if (entityLoad != null) {
+ entityLoad.setPriority(priority);
+ }
+
+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
+
+ if (poiLoad != null) {
+ poiLoad.setPriority(priority);
+ }
+
+ this.loadTask.setPriority(priority);
+ }
+
+ @Override
+ public void raisePriority(final PrioritisedExecutor.Priority priority) {
+ final EntityDataLoadTask entityLoad = this.chunkHolder.getEntityDataLoadTask();
+ if (entityLoad != null) {
+ entityLoad.raisePriority(priority);
+ }
+
+ final PoiDataLoadTask poiLoad = this.chunkHolder.getPoiDataLoadTask();
+
+ if (poiLoad != null) {
+ poiLoad.raisePriority(priority);
+ }
+
+ this.loadTask.raisePriority(priority);
+ }
+
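+ // Extends GenericDataLoadTask with multi-consumer callback support: callbacks registered
+ // before completion are queued, and once onComplete() closes the queue any later
+ // addCallback() call invokes the consumer in-line with the stored result.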
+ protected static abstract class CallbackDataLoadTask<OnMain, FinalCompletion> extends GenericDataLoadTask<OnMain, FinalCompletion> {
+
+ private TaskResult<FinalCompletion, Throwable> result;
+ private final MultiThreadedQueue<Consumer<TaskResult<FinalCompletion, Throwable>>> waiters = new MultiThreadedQueue<>();
+
+ protected volatile boolean completed;
+ protected static final VarHandle COMPLETED_HANDLE = ConcurrentUtil.getVarHandle(CallbackDataLoadTask.class, "completed", boolean.class);
+
+ protected CallbackDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
+ final int chunkZ, final RegionFileIOThread.RegionFileType type,
+ final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ, type, priority);
+ }
+
+ public void addCallback(final Consumer<TaskResult<FinalCompletion, Throwable>> consumer) {
+ if (!this.waiters.add(consumer)) {
+ try {
+ consumer.accept(this.result);
+ } catch (final Throwable throwable) {
+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
+ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
+ "Completed throwable", ChunkTaskScheduler.stringIfNull(this.result.right())
+ ), throwable);
+ if (throwable instanceof ThreadDeath) {
+ throw (ThreadDeath)throwable;
+ }
+ }
+ }
+ }
+
+ @Override
+ protected void onComplete(final TaskResult<FinalCompletion, Throwable> result) {
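+ // atomically mark this task as completed; a second completion indicates a bug, so throw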
+ if ((boolean)COMPLETED_HANDLE.getAndSet((CallbackDataLoadTask)this, (boolean)true)) {
+ throw new IllegalStateException("Already completed");
+ }
+ this.result = result;
+ Consumer<TaskResult<FinalCompletion, Throwable>> consumer;
+ while ((consumer = this.waiters.pollOrBlockAdds()) != null) {
+ try {
+ consumer.accept(result);
+ } catch (final Throwable throwable) {
+ this.scheduler.unrecoverableChunkSystemFailure(this.chunkX, this.chunkZ, Map.of(
+ "Consumer", ChunkTaskScheduler.stringIfNull(consumer),
+ "Completed throwable", ChunkTaskScheduler.stringIfNull(result.right())
+ ), throwable);
+ if (throwable instanceof ThreadDeath) {
+ throw (ThreadDeath)throwable;
+ }
+ return;
+ }
+ }
+ }
+ }
+
+ public static final class ChunkDataLoadTask extends CallbackDataLoadTask<ChunkAccess, ChunkAccess> {
+ protected ChunkDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
+ final int chunkZ, final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.CHUNK_DATA, priority);
+ }
+
+ @Override
+ protected boolean hasOffMain() {
+ return true;
+ }
+
+ @Override
+ protected boolean hasOnMain() {
+ return false;
+ }
+
+ @Override
+ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
+ return this.scheduler.loadExecutor.createTask(run, priority);
+ }
+
+ @Override
+ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ protected TaskResult<ChunkAccess, Throwable> completeOnMainOffMain(final ChunkAccess data, final Throwable throwable) {
+ throw new UnsupportedOperationException();
+ }
+
+ private ProtoChunk getEmptyChunk() {
+ return new ProtoChunk(
+ new ChunkPos(this.chunkX, this.chunkZ), UpgradeData.EMPTY, this.world,
+ this.world.registryAccess().registryOrThrow(Registries.BIOME), (BlendingData)null
+ );
+ }
+
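+ // Runs on the scheduler's load executor: converts the raw region-file NBT to the current
+ // data version and deserializes it into a ProtoChunk. Any failure, including a failed read,
+ // is logged and swallowed by substituting an empty chunk so the load pipeline always completes.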
+ @Override
+ protected TaskResult<ChunkAccess, Throwable> runOffMain(final CompoundTag data, final Throwable throwable) {
+ if (throwable != null) {
+ LOGGER.error("Failed to load chunk data for task: " + this.toString() + ", chunk data will be lost", throwable);
+ return new TaskResult<>(this.getEmptyChunk(), null);
+ }
+
+ if (data == null) {
+ return new TaskResult<>(this.getEmptyChunk(), null);
+ }
+
+ // need to convert data, and then deserialize it
+
+ try {
+ final ChunkPos chunkPos = new ChunkPos(this.chunkX, this.chunkZ);
+ final ChunkMap chunkMap = this.world.getChunkSource().chunkMap;
+ // run converters
+ // note: upgradeChunkTag copies the data already
+ final CompoundTag converted = chunkMap.upgradeChunkTag(
+ this.world.getTypeKey(), chunkMap.overworldDataStorage, data, chunkMap.generator.getTypeNameForDataFixer(),
+ chunkPos, this.world
+ );
+ // deserialize
+ final ChunkSerializer.InProgressChunkHolder chunkHolder = ChunkSerializer.readInProgressChunkHolder(
+ this.world, chunkMap.getPoiManager(), chunkPos, converted
+ );
+
+ return new TaskResult<>(chunkHolder.protoChunk, null);
+ } catch (final ThreadDeath death) {
+ throw death;
+ } catch (final Throwable thr2) {
+ LOGGER.error("Failed to parse chunk data for task: " + this.toString() + ", chunk data will be lost", thr2);
+ return new TaskResult<>(this.getEmptyChunk(), null);
+ }
+ }
+
+ @Override
+ protected TaskResult<ChunkAccess, Throwable> runOnMain(final ChunkAccess data, final Throwable throwable) {
+ throw new UnsupportedOperationException();
+ }
+ }
+
+ public static final class PoiDataLoadTask extends CallbackDataLoadTask<PoiChunk, PoiChunk> {
+ public PoiDataLoadTask(final ChunkTaskScheduler scheduler, final ServerLevel world, final int chunkX,
+ final int chunkZ, final PrioritisedExecutor.Priority priority) {
+ super(scheduler, world, chunkX, chunkZ, RegionFileIOThread.RegionFileType.POI_DATA, priority);
+ }
+
+ @Override
+ protected boolean hasOffMain() {
+ return true;
+ }
+
+ @Override
+ protected boolean hasOnMain() {
+ return false;
+ }
+
+ @Override
+ protected PrioritisedExecutor.PrioritisedTask createOffMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
+ return this.scheduler.loadExecutor.createTask(run, priority);
+ }
+
+ @Override
+ protected PrioritisedExecutor.PrioritisedTask createOnMain(final Runnable run, final PrioritisedExecutor.Priority priority) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ protected TaskResult<PoiChunk, Throwable> completeOnMainOffMain(final PoiChunk data, final Throwable throwable) {
+ throw new UnsupportedOperationException();
+ }
+
+ @Override
+ protected TaskResult<PoiChunk, Throwable>