Mirror of https://github.com/PaperMC/Paper.git, synchronized 2024-12-21 05:50:05 +01:00 (4411 lines, 205 KiB)

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Spottedleaf <Spottedleaf@users.noreply.github.com>
Date: Sat, 13 Jul 2019 09:23:10 -0700
Subject: [PATCH] Asynchronous chunk IO and loading

This patch re-adds a file IO thread as well as shoving de-serialization of
chunk NBT data onto worker threads. This patch will also shove
chunk data serialization onto the same worker threads when the chunk
is unloaded - this cannot be done for regular saves since that's unsafe.

The file IO Thread

Unlike 1.13 and below, the file IO thread is prioritized - IO tasks can
be reordered, however they are "stuck" to a world & coordinate.

Scheduling IO tasks works as follows, given a world & coordinate - location:

The IO thread has been designed to ensure that reads and writes appear to
occur synchronously for a given location, however the implementation also
has the unfortunate side-effect of making every write appear as if
it occurs without failure.

The IO thread has also been designed to accommodate Mojang's decision to
store chunk data and POI data separately. It can independently schedule
tasks for each.

However, threads can wait for writes to complete and check if:
- The write was overwritten by another scheduler
- The write failed (however it does not indicate whether it was overwritten by another scheduler)

Scheduling reads:

- If a write task is in progress, the task is not scheduled and returns the in-progress write data
  This means that readers cannot modify the NBTTagCompound returned and must clone it if they wish to write
- If a write task is not in progress but a read task is in progress, then the read task is simply chained
  This means that again, readers cannot modify the NBTTagCompound returned

Scheduling writes:

- If a read task is in progress, ignore the read task and schedule the write
  We cannot complete the read task since we assume it wants old data - not current
- If a write task is pending, overwrite the write data
  The file IO thread correctly handles cases where the data is overwritten while it
  is writing data (before completing a task it will check whether the data was overwritten and
  will retry, as sketched below).
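
To make the coalescing rules above concrete, the following is a minimal, hypothetical sketch of the per-location state machine. It is not the patch's code - the real logic lives in PaperFileIOThread below, and the LocationIoScheduler/PendingTask names are invented for illustration:

    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Consumer;

    // Illustration only: one pending task per packed chunk coordinate.
    final class LocationIoScheduler<D> {
        static final class PendingTask<D> {
            D pendingWrite;             // data waiting to be written; later writes replace it
            Consumer<D> pendingReaders; // chained read callbacks; null if none
        }

        private final ConcurrentHashMap<Long, PendingTask<D>> tasks = new ConcurrentHashMap<>();

        void scheduleWrite(final long location, final D data) {
            this.tasks.compute(location, (key, task) -> {
                if (task == null) {
                    task = new PendingTask<>(); // nothing in flight: a real impl would queue the task here
                }
                task.pendingWrite = data; // a pending write exists: simply overwrite its data
                return task;
            });
        }

        void scheduleRead(final long location, final Consumer<D> onComplete) {
            this.tasks.compute(location, (key, task) -> {
                if (task == null) {
                    return null; // nothing in flight: a real impl would queue a disk read here
                }
                if (task.pendingWrite != null) {
                    // write pending: hand back its data immediately; the caller must
                    // clone before mutating, since this same instance will be written
                    onComplete.accept(task.pendingWrite);
                } else {
                    // only a read in flight: chain onto it instead of reading twice
                    task.pendingReaders = task.pendingReaders == null
                        ? onComplete : task.pendingReaders.andThen(onComplete);
                }
                return task;
            });
        }
    }

The compute() call serializes schedulers for the same location; the real ChunkDataTask map below relies on the same trick.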

When the file IO thread executes a task for a location, it will
execute the read task first (if it exists), then it will execute the
write task. This ensures that, even when scheduling at different
priorities, reads/writes for a location act synchronously.

The downside of the file IO thread is that write failure can only be
indicated to the scheduling thread if:

- No other thread decides to schedule another write for the location
  concurrently
- The scheduling thread blocks on the write to complete (however the
  current implementation can be modified to indicate success
  asynchronously)

The file IO thread can be modified easily to provide indications
of write failure and write overwriting if needed.

The upside of the file IO thread is that if a write fails, then
chunk data is not lost until server restart. This leaves more room
for spurious failure.

Finally, the IO thread will indicate to the console when reads
or writes fail - with relevant detail.

Asynchronous chunk data serialization for unloading chunks

When chunks unload they make a call to PlayerChunkMap#saveChunk(IChunkAccess).
Even if I make the IO asynchronous for this call, the data serialization
still hits pretty hard. And given that the chunk system will now
aggressively unload chunks more often (queued immediately at
ticket level 45 or higher), combined with our changes to the unload
queue to make it significantly more aggressive - chunk unloads can
hit pretty hard. Especially with players running around with elytras
and fireworks.

For serializing chunk data off main, there are some tasks which cannot be
done asynchronously. Lighting data must be saved beforehand, as well as
potentially some tick lists. These are completed before scheduling the
asynchronous save.

However serializing chunk data off of the main thread is still risky.
Even though this patch schedules the save to occur after ALL references
of the chunk are removed from the world, plugins can still technically
access entities inside the chunks. For this reason, if the serialization task
fails for any reason, it will be re-scheduled to be serialized on the
main thread - with the hope that the reason it failed was due to a plugin
and not an error with the save code itself. Like vanilla code - if the
serialization fails, the chunk data is lost.
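
That retry policy amounts to the following hedged sketch (ChunkSerializer, serializeChunk, and scheduleOnMain are placeholder names, not the patch's API):

    // Illustration only: fail off-main, then retry once on the main thread.
    final class AsyncSaveFallback {
        interface ChunkSerializer { void serializeChunk() throws Exception; }

        static void saveOffMain(final ChunkSerializer serializer, final Runnable scheduleOnMain) {
            try {
                serializer.serializeChunk(); // may fail if a plugin touches the chunk concurrently
            } catch (final Exception ex) {
                // hope the failure came from plugin access rather than the save code
                // itself; on the main thread such access is safe, matching vanilla
                scheduleOnMain.run();
            }
        }
    }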

Asynchronous chunk IO/loading

Mojang's current implementation for loading chunk data off disk is
to return a CompletableFuture that will be completed by scheduling a
task to be executed on the world's chunk queue (which is only drained
on the main thread). This task will read the IO off disk and it will
apply data conversions & deserialization synchronously. Obviously
all 3 of these operations are expensive, however all can be completed
asynchronously instead.

The solution this patch uses is as follows:

0. If an asynchronous chunk save is in progress (see above), wait
for that task to complete. It will use the serialized NBTTagCompound
created by the task. If the task fails to complete, then we would continue
with step 1. If it does not, we skip step 1. (Note: We actually load
POI data no matter what in this case).
1. Schedule an IO task to read chunk & POI data off disk.
2. The IO task will schedule a chunk load task.
3. The chunk load task executes on the async chunk loader threads
and will apply datafixers & de-serialize the chunk into a ProtoChunk
or ProtoChunkExtension.
4. The in-progress chunk is then passed on to the world's chunk queue
to complete the CompletableFuture and execute any of the synchronous
tasks required to be executed by the chunk load task (i.e. lighting
and some POI tasks).
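
Read as a future pipeline, the steps above have roughly this shape (illustration only - the executor parameters are assumptions, and the real code threads NBT/ProtoChunk types rather than strings):

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Executor;

    // Illustration only: IO thread -> async loader pool -> main-thread chunk queue.
    final class LoadPipelineSketch {
        static CompletableFuture<String> loadChunk(final Executor ioThread,
                                                   final Executor loaderPool,
                                                   final Executor mainThreadQueue) {
            return CompletableFuture
                .supplyAsync(() -> "chunk & POI NBT read off disk", ioThread)             // steps 1-2
                .thenApplyAsync(nbt -> "ProtoChunk deserialized from " + nbt, loaderPool) // step 3
                .thenApplyAsync(proto -> proto + " + lighting/POI", mainThreadQueue);     // step 4
        }
    }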

diff --git a/src/main/java/co/aikar/timings/WorldTimingsHandler.java b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
index 79ede25e4fe7a648b1d29c49d876482a2158f892..24eac9400fbf971742e89bbf47b0ba52b587c4eb 100644
--- a/src/main/java/co/aikar/timings/WorldTimingsHandler.java
+++ b/src/main/java/co/aikar/timings/WorldTimingsHandler.java
@@ -59,6 +59,17 @@ public class WorldTimingsHandler {
 
     public final Timing miscMobSpawning;
 
+    public final Timing poiUnload;
+    public final Timing chunkUnload;
+    public final Timing poiSaveDataSerialization;
+    public final Timing chunkSave;
+    public final Timing chunkSaveOverwriteCheck;
+    public final Timing chunkSaveDataSerialization;
+    public final Timing chunkSaveIOWait;
+    public final Timing chunkUnloadPrepareSave;
+    public final Timing chunkUnloadPOISerialization;
+    public final Timing chunkUnloadDataSave;
+
     public WorldTimingsHandler(Level server) {
         String name = ((PrimaryLevelData) server.getLevelData()).getLevelName() + " - ";
 
@@ -112,6 +123,17 @@ public class WorldTimingsHandler {
 
 
         miscMobSpawning = Timings.ofSafe(name + "Mob spawning - Misc");
+
+        poiUnload = Timings.ofSafe(name + "Chunk unload - POI");
+        chunkUnload = Timings.ofSafe(name + "Chunk unload - Chunk");
+        poiSaveDataSerialization = Timings.ofSafe(name + "Chunk save - POI Data serialization");
+        chunkSave = Timings.ofSafe(name + "Chunk save - Chunk");
+        chunkSaveOverwriteCheck = Timings.ofSafe(name + "Chunk save - Chunk Overwrite Check");
+        chunkSaveDataSerialization = Timings.ofSafe(name + "Chunk save - Chunk Data serialization");
+        chunkSaveIOWait = Timings.ofSafe(name + "Chunk save - Chunk IO Wait");
+        chunkUnloadPrepareSave = Timings.ofSafe(name + "Chunk unload - Async Save Prepare");
+        chunkUnloadPOISerialization = Timings.ofSafe(name + "Chunk unload - POI Data Serialization");
+        chunkUnloadDataSave = Timings.ofSafe(name + "Chunk unload - Data Serialization");
     }
 
     public static Timing getTickList(ServerLevel worldserver, String timingsType) {
diff --git a/src/main/java/com/destroystokyo/paper/PaperCommand.java b/src/main/java/com/destroystokyo/paper/PaperCommand.java
index 53dd6c18de8e80378852bbb141016d9574d42162..62711d95db62221a2e4e6423c518afe13a6c7dbe 100644
--- a/src/main/java/com/destroystokyo/paper/PaperCommand.java
+++ b/src/main/java/com/destroystokyo/paper/PaperCommand.java
@@ -1,5 +1,6 @@
 package com.destroystokyo.paper;
 
+import com.destroystokyo.paper.io.chunk.ChunkTaskManager;
 import com.google.common.base.Functions;
 import com.google.common.base.Joiner;
 import com.google.common.collect.ImmutableSet;
@@ -43,7 +44,7 @@ import java.util.stream.Collectors;
 
 public class PaperCommand extends Command {
     private static final String BASE_PERM = "bukkit.command.paper.";
-    private static final ImmutableSet<String> SUBCOMMANDS = ImmutableSet.<String>builder().add("heap", "entity", "reload", "version", "debug", "chunkinfo").build();
+    private static final ImmutableSet<String> SUBCOMMANDS = ImmutableSet.<String>builder().add("heap", "entity", "reload", "version", "debug", "chunkinfo", "dumpwaiting").build();
 
     public PaperCommand(String name) {
         super(name);
@@ -155,6 +156,9 @@ public class PaperCommand extends Command {
             case "debug":
                 doDebug(sender, args);
                 break;
+            case "dumpwaiting":
+                ChunkTaskManager.dumpAllChunkLoadInfo();
+                break;
             case "chunkinfo":
                 doChunkInfo(sender, args);
                 break;
diff --git a/src/main/java/com/destroystokyo/paper/PaperConfig.java b/src/main/java/com/destroystokyo/paper/PaperConfig.java
index 469f78775b03cf363d88e35c69c0dc185c22547c..8bf4d2b8c38c02d6a5b2fea37113689a252f1571 100644
--- a/src/main/java/com/destroystokyo/paper/PaperConfig.java
+++ b/src/main/java/com/destroystokyo/paper/PaperConfig.java
@@ -1,5 +1,6 @@
 package com.destroystokyo.paper;
 
+import com.destroystokyo.paper.io.chunk.ChunkTaskManager;
 import com.google.common.base.Strings;
 import com.google.common.base.Throwables;
 
@@ -352,4 +353,54 @@ public class PaperConfig {
         maxBookPageSize = getInt("settings.book-size.page-max", maxBookPageSize);
         maxBookTotalSizeMultiplier = getDouble("settings.book-size.total-multiplier", maxBookTotalSizeMultiplier);
     }
+
+    public static boolean asyncChunks = false;
+    private static void asyncChunks() {
+        ConfigurationSection section;
+        if (version < 15) {
+            section = config.createSection("settings.async-chunks");
+            section.set("threads", -1);
+        } else {
+            section = config.getConfigurationSection("settings.async-chunks");
+            if (section == null) {
+                section = config.createSection("settings.async-chunks");
+            }
+        }
+        // Clean up old configs
+        if (section.contains("load-threads")) {
+            if (!section.contains("threads")) {
+                section.set("threads", section.get("load-threads"));
+            }
+            section.set("load-threads", null);
+        }
+        section.set("generation", null);
+        section.set("enabled", null);
+        section.set("thread-per-world-generation", null);
+
+        int threads = getInt("settings.async-chunks.threads", -1);
+        int cpus = Runtime.getRuntime().availableProcessors();
+        if (threads <= 0) {
+            threads = (int) Math.min(Integer.getInteger("paper.maxChunkThreads", 8), Math.max(1, cpus - 1));
+        }
+        if (cpus == 1 && !Boolean.getBoolean("Paper.allowAsyncChunksSingleCore")) {
+            asyncChunks = false;
+        } else {
+            asyncChunks = true;
+        }
+
+        // Let Shared Host set some limits
+        String sharedHostThreads = System.getenv("PAPER_ASYNC_CHUNKS_SHARED_HOST_THREADS");
+        if (sharedHostThreads != null) {
+            try {
+                threads = Math.max(1, Math.min(threads, Integer.parseInt(sharedHostThreads)));
+            } catch (NumberFormatException ignored) {}
+        }
+
+        if (!asyncChunks) {
+            log("Async Chunks: Disabled - Chunks will be managed synchronously, and will cause tremendous lag.");
+        } else {
+            ChunkTaskManager.initGlobalLoadThreads(threads);
+            log("Async Chunks: Enabled - Chunks will be loaded much faster, without lag.");
+        }
+    }
 }
diff --git a/src/main/java/com/destroystokyo/paper/io/IOUtil.java b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
new file mode 100644
index 0000000000000000000000000000000000000000..5af0ac3d9e87c06053e65433060f15779c156c2a
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/IOUtil.java
@@ -0,0 +1,62 @@
+package com.destroystokyo.paper.io;
+
+import org.bukkit.Bukkit;
+
+public final class IOUtil {
+
+    /* Copied from concrete or concurrentutil */
+
+    public static long getCoordinateKey(final int x, final int z) {
+        return ((long)z << 32) | (x & 0xFFFFFFFFL);
+    }
+
+    public static int getCoordinateX(final long key) {
+        return (int)key;
+    }
+
+    public static int getCoordinateZ(final long key) {
+        return (int)(key >>> 32);
+    }
+
+    public static int getRegionCoordinate(final int chunkCoordinate) {
+        return chunkCoordinate >> 5;
+    }
+
+    public static int getChunkInRegion(final int chunkCoordinate) {
+        return chunkCoordinate & 31;
+    }
+
+    public static String genericToString(final Object object) {
+        return object == null ? "null" : object.getClass().getName() + ":" + object.toString();
+    }
+
+    public static <T> T notNull(final T obj) {
+        if (obj == null) {
+            throw new NullPointerException();
+        }
+        return obj;
+    }
+
+    public static <T> T notNull(final T obj, final String msgIfNull) {
+        if (obj == null) {
+            throw new NullPointerException(msgIfNull);
+        }
+        return obj;
+    }
+
+    public static void arrayBounds(final int off, final int len, final int arrayLength, final String msgPrefix) {
+        if (off < 0 || len < 0 || (arrayLength - off) < len) {
+            throw new ArrayIndexOutOfBoundsException(msgPrefix + ": off: " + off + ", len: " + len + ", array length: " + arrayLength);
+        }
+    }
+
+    public static int getPriorityForCurrentThread() {
+        return Bukkit.isPrimaryThread() ? PrioritizedTaskQueue.HIGHEST_PRIORITY : PrioritizedTaskQueue.NORMAL_PRIORITY;
+    }
+
+    @SuppressWarnings("unchecked")
+    public static <T extends Throwable> void rethrow(final Throwable throwable) throws T {
+        throw (T)throwable;
+    }
+
+}
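
A quick sanity check of the coordinate packing above (illustration, not part of the patch):

    // Packing chunk (x = -2, z = 3) and unpacking it again.
    final long key = ((long) 3 << 32) | (-2 & 0xFFFFFFFFL); // == 0x00000003FFFFFFFEL
    assert (int) key == -2;          // getCoordinateX: low 32 bits
    assert (int) (key >>> 32) == 3;  // getCoordinateZ: high 32 bits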

diff --git a/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
new file mode 100644
index 0000000000000000000000000000000000000000..a630a84b60b4517e3bc330d4983b914bd064efa4
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/PaperFileIOThread.java
@@ -0,0 +1,606 @@
+package com.destroystokyo.paper.io;
+
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.MinecraftServer;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.storage.RegionFile;
+import org.apache.logging.log4j.Logger;
+
+import java.io.IOException;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Consumer;
+import java.util.function.Function;
+
+/**
+ * Prioritized singleton thread responsible for all chunk IO that occurs in a minecraft server.
+ *
+ * <p>
+ * Singleton access: {@link Holder#INSTANCE}
+ * </p>
+ *
+ * <p>
+ * All functions provided are MT-Safe, however certain ordering constraints are required (but not enforced):
+ * <li>
+ * Chunk saves may not occur for unloaded chunks.
+ * </li>
+ * <li>
+ * Tasks must be scheduled on the main thread.
+ * </li>
+ * </p>
+ *
+ * @see Holder#INSTANCE
+ * @see #scheduleSave(ServerLevel, int, int, CompoundTag, CompoundTag, int)
+ * @see #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)
+ */
+public final class PaperFileIOThread extends QueueExecutorThread {
+
+    public static final Logger LOGGER = MinecraftServer.LOGGER;
+    public static final CompoundTag FAILURE_VALUE = new CompoundTag();
+
+    public static final class Holder {
+
+        public static final PaperFileIOThread INSTANCE = new PaperFileIOThread();
+
+        static {
+            INSTANCE.start();
+        }
+    }
+
+    private final AtomicLong writeCounter = new AtomicLong();
+
+    private PaperFileIOThread() {
+        super(new PrioritizedTaskQueue<>(), (int)(1.0e6)); // 1.0ms spinwait time
+        this.setName("Paper RegionFile IO Thread");
+        this.setPriority(Thread.NORM_PRIORITY - 1); // we keep priority close to normal because threads can wait on us
+        this.setUncaughtExceptionHandler((final Thread unused, final Throwable thr) -> {
+            LOGGER.fatal("Uncaught exception thrown from IO thread, report this!", thr);
+        });
+    }
+
+    /* run() is implemented by superclass */
+
+    /*
+     *
+     * IO thread will perform reads before writes
+     *
+     * How reads/writes are scheduled:
+     *
+     * If read in progress while scheduling write, ignore read and schedule write
+     * If read in progress while scheduling read (no write in progress), chain the read task
+     *
+     *
+     * If write in progress while scheduling read, use the pending write data and ret immediately
+     * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
+     *
+     * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
+     * it fails to properly propagate write failures. When writes fail the data is kept so future reads will actually
+     * read the failed write data. This should hopefully act as a way to prevent data loss from spurious write failures.
+     *
+     */
+
+    /**
+     * Attempts to bump the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
+     * @param world Chunk's world
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param priority Priority level to try to bump to
+     */
+    public void bumpPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalArgumentException("Invalid priority: " + priority);
+        }
+
+        final Long key = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
+
+        final ChunkDataTask poiTask = world.poiDataController.tasks.get(key);
+        final ChunkDataTask chunkTask = world.chunkDataController.tasks.get(key);
+
+        if (poiTask != null) {
+            poiTask.raisePriority(priority);
+        }
+        if (chunkTask != null) {
+            chunkTask.raisePriority(priority);
+        }
+    }
+
+    public CompoundTag getPendingWrite(final ServerLevel world, final int chunkX, final int chunkZ, final boolean poiData) {
+        final ChunkDataController taskController = poiData ? world.poiDataController : world.chunkDataController;
+
+        final ChunkDataTask dataTask = taskController.tasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
+
+        if (dataTask == null) {
+            return null;
+        }
+
+        final ChunkDataController.InProgressWrite write = dataTask.inProgressWrite;
+
+        if (write == null) {
+            return null;
+        }
+
+        return write.data;
+    }
+
+    /**
+     * Sets the priority of all IO tasks for the given chunk coordinates. This has no effect if no tasks are queued.
+     * @param world Chunk's world
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param priority Priority level to set to
+     */
+    public void setPriority(final ServerLevel world, final int chunkX, final int chunkZ, final int priority) {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalArgumentException("Invalid priority: " + priority);
+        }
+
+        final Long key = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
+
+        final ChunkDataTask poiTask = world.poiDataController.tasks.get(key);
+        final ChunkDataTask chunkTask = world.chunkDataController.tasks.get(key);
+
+        if (poiTask != null) {
+            poiTask.updatePriority(priority);
+        }
+        if (chunkTask != null) {
+            chunkTask.updatePriority(priority);
+        }
+    }
+
+    /**
+     * Schedules the chunk data to be written asynchronously.
+     * <p>
+     * Impl notes:
+     * </p>
+     * <li>
+     * This function presumes a chunk load for the coordinates is not called during this function (anytime after is OK). This means
+     * saves must be scheduled before a chunk is unloaded.
+     * </li>
+     * <li>
+     * Writes may be called concurrently, although only the "later" write will go through.
+     * </li>
+     * @param world Chunk's world
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param poiData Chunk point of interest data. If {@code null}, then no poi data is saved.
+     * @param chunkData Chunk data. If {@code null}, then no chunk data is saved.
+     * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
+     * @throws IllegalArgumentException If both {@code poiData} and {@code chunkData} are {@code null}.
+     * @throws IllegalStateException If the file IO thread has shut down.
+     */
+    public void scheduleSave(final ServerLevel world, final int chunkX, final int chunkZ,
+                             final CompoundTag poiData, final CompoundTag chunkData,
+                             final int priority) throws IllegalArgumentException {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalArgumentException("Invalid priority: " + priority);
+        }
+
+        final long writeCounter = this.writeCounter.getAndIncrement();
+
+        if (poiData != null) {
+            this.scheduleWrite(world.poiDataController, world, chunkX, chunkZ, poiData, priority, writeCounter);
+        }
+        if (chunkData != null) {
+            this.scheduleWrite(world.chunkDataController, world, chunkX, chunkZ, chunkData, priority, writeCounter);
+        }
+    }
+
+    private void scheduleWrite(final ChunkDataController dataController, final ServerLevel world,
+                               final int chunkX, final int chunkZ, final CompoundTag data, final int priority, final long writeCounter) {
+        dataController.tasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkDataTask taskRunning) -> {
+            if (taskRunning == null) {
+                // no task is scheduled
+
+                // create task
+                final ChunkDataTask newTask = new ChunkDataTask(priority, world, chunkX, chunkZ, dataController);
+                newTask.inProgressWrite = new ChunkDataController.InProgressWrite();
+                newTask.inProgressWrite.writeCounter = writeCounter;
+                newTask.inProgressWrite.data = data;
+
+                PaperFileIOThread.this.queueTask(newTask); // schedule
+                return newTask;
+            }
+
+            taskRunning.raisePriority(priority);
+
+            if (taskRunning.inProgressWrite == null) {
+                taskRunning.inProgressWrite = new ChunkDataController.InProgressWrite();
+            }
+
+            boolean reschedule = taskRunning.inProgressWrite.writeCounter == -1L;
+
+            // synchronize for readers
+            //noinspection SynchronizationOnLocalVariableOrMethodParameter
+            synchronized (taskRunning) {
+                taskRunning.inProgressWrite.data = data;
+                taskRunning.inProgressWrite.writeCounter = writeCounter;
+            }
+
+            if (reschedule) {
+                // We need to reschedule this task since the previous one is not currently scheduled since it failed
+                taskRunning.reschedule(priority);
+            }
+
+            return taskRunning;
+        });
+    }
+
+    /**
+     * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
+     * a {@link CompletableFuture} which is potentially completed <b>ASYNCHRONOUSLY ON THE FILE IO THREAD</b> when the load task
+     * has completed.
+     * <p>
+     * Note that if the chunk fails to load the returned future is completed with {@code null}.
+     * </p>
+     */
+    public CompletableFuture<ChunkData> loadChunkDataAsyncFuture(final ServerLevel world, final int chunkX, final int chunkZ,
+                                                                 final int priority, final boolean readPoiData, final boolean readChunkData,
+                                                                 final boolean intendingToBlock) {
+        final CompletableFuture<ChunkData> future = new CompletableFuture<>();
+        this.loadChunkDataAsync(world, chunkX, chunkZ, priority, future::complete, readPoiData, readChunkData, intendingToBlock);
+        return future;
+    }
+
+    /**
+     * Schedules a load to be executed asynchronously.
+     * <p>
+     * Impl notes:
+     * </p>
+     * <li>
+     * If a chunk fails to load, the {@code onComplete} parameter is completed with {@code null}.
+     * </li>
+     * <li>
+     * It is possible for the {@code onComplete} parameter to be given {@link ChunkData} containing data
+     * this call did not request.
+     * </li>
+     * <li>
+     * The {@code onComplete} parameter may be completed during the execution of this function synchronously or it may
+     * be completed asynchronously on this file IO thread. Interacting with the file IO thread in the completion of
+     * data is undefined behaviour, and can cause deadlock.
+     * </li>
+     * @param world Chunk's world
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param priority Priority level for this task. See {@link PrioritizedTaskQueue}
+     * @param onComplete Consumer to execute once this task has completed
+     * @param readPoiData Whether to read point of interest data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
+     * @param readChunkData Whether to read chunk data. If {@code false}, the {@code NBTTagCompound} will be {@code null}.
+     */
+    public void loadChunkDataAsync(final ServerLevel world, final int chunkX, final int chunkZ,
+                                   final int priority, final Consumer<ChunkData> onComplete,
+                                   final boolean readPoiData, final boolean readChunkData,
+                                   final boolean intendingToBlock) {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalArgumentException("Invalid priority: " + priority);
+        }
+
+        if (!(readPoiData | readChunkData)) {
+            throw new IllegalArgumentException("Must read chunk data or poi data");
+        }
+
+        final ChunkData complete = new ChunkData();
+        final boolean[] requireCompletion = new boolean[] { readPoiData, readChunkData };
+
+        if (readPoiData) {
+            this.scheduleRead(world.poiDataController, world, chunkX, chunkZ, (final CompoundTag poiData) -> {
+                complete.poiData = poiData;
+
+                final boolean finished;
+
+                // avoid a race condition where the file io thread completes and we complete synchronously
+                // Note: Synchronization can be elided if both of the accesses are volatile
+                synchronized (requireCompletion) {
+                    requireCompletion[0] = false; // 0 -> poi data
+                    finished = !requireCompletion[1]; // 1 -> chunk data
+                }
+
+                if (finished) {
+                    onComplete.accept(complete);
+                }
+            }, priority, intendingToBlock);
+        }
+
+        if (readChunkData) {
+            this.scheduleRead(world.chunkDataController, world, chunkX, chunkZ, (final CompoundTag chunkData) -> {
+                complete.chunkData = chunkData;
+
+                final boolean finished;
+
+                // avoid a race condition where the file io thread completes and we complete synchronously
+                // Note: Synchronization can be elided if both of the accesses are volatile
+                synchronized (requireCompletion) {
+                    requireCompletion[1] = false; // 1 -> chunk data
+                    finished = !requireCompletion[0]; // 0 -> poi data
+                }
+
+                if (finished) {
+                    onComplete.accept(complete);
+                }
+            }, priority, intendingToBlock);
+        }
+
+    }
+
+    // Note: the onComplete may be called asynchronously or synchronously here.
+    private void scheduleRead(final ChunkDataController dataController, final ServerLevel world,
+                              final int chunkX, final int chunkZ, final Consumer<CompoundTag> onComplete, final int priority,
+                              final boolean intendingToBlock) {
+
+        Function<RegionFile, Boolean> tryLoadFunction = (final RegionFile file) -> {
+            if (file == null) {
+                return Boolean.TRUE;
+            }
+            return Boolean.valueOf(file.hasChunk(new ChunkPos(chunkX, chunkZ)));
+        };
+
+        dataController.tasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkDataTask running) -> {
+            if (running == null) {
+                // not scheduled
+
+                final Boolean shouldSchedule = intendingToBlock ? dataController.computeForRegionFile(chunkX, chunkZ, tryLoadFunction) :
+                    dataController.computeForRegionFileIfLoaded(chunkX, chunkZ, tryLoadFunction);
+
+                if (shouldSchedule == Boolean.FALSE) {
+                    // not on disk
+                    onComplete.accept(null);
+                    return null;
+                }
+
+                // set up task
+                final ChunkDataTask newTask = new ChunkDataTask(priority, world, chunkX, chunkZ, dataController);
+                newTask.inProgressRead = new ChunkDataController.InProgressRead();
+                newTask.inProgressRead.readFuture.thenAccept(onComplete);
+
+                PaperFileIOThread.this.queueTask(newTask); // schedule task
+                return newTask;
+            }
+
+            running.raisePriority(priority);
+
+            if (running.inProgressWrite == null) {
+                // chain to the read future
+                running.inProgressRead.readFuture.thenAccept(onComplete);
+                return running;
+            }
+
+            // at this stage we have to use the in progress write's data to avoid an order issue
+            // we don't synchronize since all writes to data occur in the compute() call
+            onComplete.accept(running.inProgressWrite.data);
+            return running;
+        });
+    }
+
+    /**
+     * Same as {@link #loadChunkDataAsync(ServerLevel, int, int, int, Consumer, boolean, boolean, boolean)}, except this function returns
+     * the {@link ChunkData} associated with the specified chunk when the task is complete.
+     * @return The chunk data, or {@code null} if the chunk failed to load.
+     */
+    public ChunkData loadChunkData(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
+                                   final boolean readPoiData, final boolean readChunkData) {
+        return this.loadChunkDataAsyncFuture(world, chunkX, chunkZ, priority, readPoiData, readChunkData, true).join();
+    }
+
+    /**
+     * Schedules the given task at the specified priority to be executed on the IO thread.
+     * <p>
+     * Internal api. Do not use.
+     * </p>
+     */
+    public void runTask(final int priority, final Runnable runnable) {
+        this.queueTask(new GeneralTask(priority, runnable));
+    }
+
+    static final class GeneralTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
+
+        private final Runnable run;
+
+        public GeneralTask(final int priority, final Runnable run) {
+            super(priority);
+            this.run = IOUtil.notNull(run, "Task may not be null");
+        }
+
+        @Override
+        public void run() {
+            try {
+                this.run.run();
+            } catch (final Throwable throwable) {
+                if (throwable instanceof ThreadDeath) {
+                    throw (ThreadDeath)throwable;
+                }
+                LOGGER.fatal("Failed to execute general task on IO thread " + IOUtil.genericToString(this.run), throwable);
+            }
+        }
+    }
+
+    public static final class ChunkData {
+
+        public CompoundTag poiData;
+        public CompoundTag chunkData;
+
+        public ChunkData() {}
+
+        public ChunkData(final CompoundTag poiData, final CompoundTag chunkData) {
+            this.poiData = poiData;
+            this.chunkData = chunkData;
+        }
+    }
+
+    public static abstract class ChunkDataController {
+
+        // ConcurrentHashMap synchronizes per chain, so reduce the chance of task's hashes colliding.
+        public final ConcurrentHashMap<Long, ChunkDataTask> tasks = new ConcurrentHashMap<>(64, 0.5f);
+
+        public abstract void writeData(final int x, final int z, final CompoundTag compound) throws IOException;
+        public abstract CompoundTag readData(final int x, final int z) throws IOException;
+
+        public abstract <T> T computeForRegionFile(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
+        public abstract <T> T computeForRegionFileIfLoaded(final int chunkX, final int chunkZ, final Function<RegionFile, T> function);
+
+        public static final class InProgressWrite {
+            public long writeCounter;
+            public CompoundTag data;
+        }
+
+        public static final class InProgressRead {
+            public final CompletableFuture<CompoundTag> readFuture = new CompletableFuture<>();
+        }
+    }
+
+    public static final class ChunkDataTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
+
+        public ChunkDataController.InProgressWrite inProgressWrite;
+        public ChunkDataController.InProgressRead inProgressRead;
+
+        private final ServerLevel world;
+        private final int x;
+        private final int z;
+        private final ChunkDataController taskController;
+
+        public ChunkDataTask(final int priority, final ServerLevel world, final int x, final int z, final ChunkDataController taskController) {
+            super(priority);
+            this.world = world;
+            this.x = x;
+            this.z = z;
+            this.taskController = taskController;
+        }
+
+        @Override
+        public String toString() {
+            return "Task for world: '" + this.world.getWorld().getName() + "' at " + this.x + "," + this.z +
+                " poi: " + (this.taskController == this.world.poiDataController) + ", hash: " + this.hashCode();
+        }
+
+        /*
+         *
+         * IO thread will perform reads before writes
+         *
+         * How reads/writes are scheduled:
+         *
+         * If read in progress while scheduling write, ignore read and schedule write
+         * If read in progress while scheduling read (no write in progress), chain the read task
+         *
+         *
+         * If write in progress while scheduling read, use the pending write data and ret immediately
+         * If write in progress while scheduling write (ignore read in progress), overwrite the write in progress data
+         *
+         * This allows the reads and writes to act as if they occur synchronously to the thread scheduling them, however
+         * it fails to properly propagate write failures
+         *
+         */
+
+        void reschedule(final int priority) {
+            // priority is checked before this stage // TODO what
+            this.queue.lazySet(null);
+            this.priority.lazySet(priority);
+            PaperFileIOThread.Holder.INSTANCE.queueTask(this);
+        }
+
+        @Override
+        public void run() {
+            ChunkDataController.InProgressRead read = this.inProgressRead;
+            if (read != null) {
+                CompoundTag compound = PaperFileIOThread.FAILURE_VALUE;
+                try {
+                    compound = this.taskController.readData(this.x, this.z);
+                } catch (final Throwable thr) {
+                    if (thr instanceof ThreadDeath) {
+                        throw (ThreadDeath)thr;
+                    }
+                    LOGGER.fatal("Failed to read chunk data for task: " + this.toString(), thr);
+                    // fall through to complete with null data
+                }
+                read.readFuture.complete(compound);
+            }
+
+            final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(this.x, this.z));
+
+            ChunkDataController.InProgressWrite write = this.inProgressWrite;
+
+            if (write == null) {
+                // IntelliJ warns this is invalid, however it does not consider that writes to the task map & the inProgress field can occur concurrently.
+                ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
+                    if (valueInMap == null) {
+                        throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+                    }
+                    if (valueInMap != ChunkDataTask.this) {
+                        throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+                    }
+                    return valueInMap.inProgressWrite == null ? null : valueInMap;
+                });
+
+                if (inMap == null) {
+                    return; // set the task value to null, indicating we're done
+                }
+
+                // not null, which means there was a concurrent write
+                write = this.inProgressWrite;
+            }
+
+            // check if another process is writing
+            /*try { TODO: Can we restore this?
+                ((WorldServer)this.world).checkSession();
+            } catch (final Exception ex) {
+                LOGGER.fatal("Couldn't save chunk; already in use by another instance of Minecraft?", ex);
+                // we don't need to set the write counter to -1 as we know at this stage there's no point in re-scheduling
+                // writes since they'll fail anyways.
+                return;
+            }
+*/
+            for (;;) {
+                final long writeCounter;
+                final CompoundTag data;
+
+                //noinspection SynchronizationOnLocalVariableOrMethodParameter
+                synchronized (write) {
+                    writeCounter = write.writeCounter;
+                    data = write.data;
+                }
+
+                boolean failedWrite = false;
+
+                try {
+                    this.taskController.writeData(this.x, this.z, data);
+                } catch (final Throwable thr) {
+                    if (thr instanceof ThreadDeath) {
+                        throw (ThreadDeath)thr;
+                    }
+                    LOGGER.fatal("Failed to write chunk data for task: " + this.toString(), thr);
+                    failedWrite = true;
+                }
+
+                boolean finalFailWrite = failedWrite;
+
+                ChunkDataTask inMap = this.taskController.tasks.compute(chunkKey, (final Long keyInMap, final ChunkDataTask valueInMap) -> {
+                    if (valueInMap == null) {
+                        throw new IllegalStateException("Write completed concurrently, expected this task: " + ChunkDataTask.this.toString() + ", report this!");
+                    }
+                    if (valueInMap != ChunkDataTask.this) {
+                        throw new IllegalStateException("Chunk task mismatch, expected this task: " + ChunkDataTask.this.toString() + ", got: " + valueInMap.toString() + ", report this!");
+                    }
+                    if (valueInMap.inProgressWrite.writeCounter == writeCounter) {
+                        if (finalFailWrite) {
+                            valueInMap.inProgressWrite.writeCounter = -1L;
+                        }
+
+                        return null;
+                    }
+                    return valueInMap;
+                    // Hack end
+                });
+
+                if (inMap == null) {
+                    // write counter matched, so we wrote the most up-to-date pending data, we're done here
+                    // or we failed to write and successfully set the write counter to -1
+                    return; // we're done here
+                }
+
+                // fetch & write new data
+                continue;
+            }
+        }
+    }
+}
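
For orientation, typical call sites for the class above look roughly like this (the world/tag variables and priority choices are illustrative, but the signatures match the patch):

    // Save chunk and POI data together at normal priority:
    PaperFileIOThread.Holder.INSTANCE.scheduleSave(world, chunkX, chunkZ,
        poiTag, chunkTag, PrioritizedTaskQueue.NORMAL_PRIORITY);

    // Load asynchronously; note the callback may run on the IO thread itself:
    PaperFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ,
        PrioritizedTaskQueue.NORMAL_PRIORITY,
        (data) -> { /* inspect data.chunkData / data.poiData here */ },
        true, true, false); // readPoiData, readChunkData, intendingToBlock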

diff --git a/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
new file mode 100644
index 0000000000000000000000000000000000000000..97f2e433c483f1ebd7500ae142269e144ef5fda4
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/PrioritizedTaskQueue.java
@@ -0,0 +1,277 @@
+package com.destroystokyo.paper.io;
+
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.atomic.AtomicInteger;
+import java.util.concurrent.atomic.AtomicReference;
+
+public class PrioritizedTaskQueue<T extends PrioritizedTaskQueue.PrioritizedTask> {
+
+    // lower numbers are a higher priority (except < 0)
+    // higher priorities are always executed before lower priorities
+
+    /**
+     * Priority value indicating the task has completed or is being completed.
+     */
+    public static final int COMPLETING_PRIORITY = -1;
+
+    /**
+     * Highest priority, should only be used for main thread tasks or tasks that are blocking the main thread.
+     */
+    public static final int HIGHEST_PRIORITY = 0;
+
+    /**
+     * Should only be used in an IO task so that chunk loads do not wait on other IO tasks.
+     * This only exists because IO tasks are scheduled before chunk load tasks to decrease IO waiting times.
+     */
+    public static final int HIGHER_PRIORITY = 1;
+
+    /**
+     * Should be used for scheduling chunk loads/generation that would increase response times to users.
+     */
+    public static final int HIGH_PRIORITY = 2;
+
+    /**
+     * Default priority.
+     */
+    public static final int NORMAL_PRIORITY = 3;
+
+    /**
+     * Use for tasks that are not at all critical and can potentially be delayed.
+     */
+    public static final int LOW_PRIORITY = 4;
+
+    /**
+     * Use for tasks that should "eventually" execute.
+     */
+    public static final int LOWEST_PRIORITY = 5;
+
+    private static final int TOTAL_PRIORITIES = 6;
+
+    final ConcurrentLinkedQueue<T>[] queues = (ConcurrentLinkedQueue<T>[])new ConcurrentLinkedQueue[TOTAL_PRIORITIES];
+
+    private final AtomicBoolean shutdown = new AtomicBoolean();
+
+    {
+        for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+            this.queues[i] = new ConcurrentLinkedQueue<>();
+        }
+    }
+
+    /**
+     * Returns whether the specified priority is valid.
+     */
+    public static boolean validPriority(final int priority) {
+        return priority >= 0 && priority < TOTAL_PRIORITIES;
+    }
+
+    /**
+     * Queues a task.
+     * @throws IllegalStateException If the task has already been queued. Use {@link PrioritizedTask#raisePriority(int)} to
+     * raise a task's priority.
+     * This can also be thrown if the queue has shutdown.
+     */
+    public void add(final T task) throws IllegalStateException {
+        int priority = task.getPriority();
+        if (priority != COMPLETING_PRIORITY) {
+            task.setQueue(this);
+            this.queues[priority].add(task);
+        }
+        if (this.shutdown.get()) {
+            // note: we're not actually sure at this point if our task will go through
+            throw new IllegalStateException("Queue has shutdown, refusing to execute task " + IOUtil.genericToString(task));
+        }
+    }
+
+    /**
+     * Polls the highest priority task currently available. {@code null} if none.
+     */
+    public T poll() {
+        T task;
+        for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+            final ConcurrentLinkedQueue<T> queue = this.queues[i];
+
+            while ((task = queue.poll()) != null) {
+                final int prevPriority = task.tryComplete(i);
+                if (prevPriority != COMPLETING_PRIORITY && prevPriority <= i) {
+                    // if the prev priority was greater-than or equal to our current priority
+                    return task;
+                }
+            }
+        }
+
+        return null;
+    }
+
+    /**
+     * Returns whether this queue may have tasks queued.
+     * <p>
+     * This operation is not atomic, but is MT-Safe.
+     * </p>
+     * @return {@code true} if tasks may be queued, {@code false} otherwise
+     */
+    public boolean hasTasks() {
+        for (int i = 0; i < TOTAL_PRIORITIES; ++i) {
+            final ConcurrentLinkedQueue<T> queue = this.queues[i];
+
+            if (queue.peek() != null) {
+                return true;
+            }
+        }
+        return false;
+    }
+
+    /**
+     * Prevents further additions to this queue. Attempts to add after this call has completed (potentially during) will
+     * result in {@link IllegalStateException} being thrown.
+     * <p>
+     * This operation is atomic with respect to other shutdown calls.
+     * </p>
+     * <p>
+     * After this call has completed, regardless of return value, this queue will be shutdown.
+     * </p>
+     * @return {@code true} if the queue was shutdown, {@code false} if it has shut down already
+     */
+    public boolean shutdown() {
+        // set the flag so further add() calls throw; only the first caller observes false and returns true
+        return !this.shutdown.getAndSet(true);
+    }
+
+    public abstract static class PrioritizedTask {
+
+        protected final AtomicReference<PrioritizedTaskQueue> queue = new AtomicReference<>();
+
+        protected final AtomicInteger priority;
+
+        protected PrioritizedTask() {
+            this(PrioritizedTaskQueue.NORMAL_PRIORITY);
+        }
+
+        protected PrioritizedTask(final int priority) {
+            if (!PrioritizedTaskQueue.validPriority(priority)) {
+                throw new IllegalArgumentException("Invalid priority " + priority);
+            }
+            this.priority = new AtomicInteger(priority);
+        }
+
+        /**
+         * Returns the current priority. Note that {@link PrioritizedTaskQueue#COMPLETING_PRIORITY} will be returned
+         * if this task is completing or has completed.
+         */
+        public final int getPriority() {
+            return this.priority.get();
+        }
+
+        /**
+         * Returns whether this task is scheduled to execute, or has been already executed.
+         */
+        public boolean isScheduled() {
+            return this.queue.get() != null;
+        }
+
+        final int tryComplete(final int minPriority) {
+            for (int curr = this.getPriorityVolatile();;) {
+                if (curr == COMPLETING_PRIORITY) {
+                    return COMPLETING_PRIORITY;
+                }
+                if (curr > minPriority) {
+                    // curr is lower priority
+                    return curr;
+                }
+
+                if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, COMPLETING_PRIORITY))) {
+                    return curr;
+                }
+                continue;
+            }
+        }
+
+        /**
+         * Forces this task to be completed.
+         * @return {@code true} if the task was cancelled, {@code false} if the task has already completed or is being completed.
+         */
+        public boolean cancel() {
+            return this.exchangePriorityVolatile(PrioritizedTaskQueue.COMPLETING_PRIORITY) != PrioritizedTaskQueue.COMPLETING_PRIORITY;
+        }
+
+        /**
+         * Attempts to raise the priority to the priority level specified.
+         * @param priority Priority specified
+         * @return {@code true} if successful, {@code false} otherwise.
+         */
+        public boolean raisePriority(final int priority) {
+            if (!PrioritizedTaskQueue.validPriority(priority)) {
+                throw new IllegalArgumentException("Invalid priority");
+            }
+
+            for (int curr = this.getPriorityVolatile();;) {
+                if (curr == COMPLETING_PRIORITY) {
+                    return false;
+                }
+                if (priority >= curr) {
+                    return true;
+                }
+
+                if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
+                    PrioritizedTaskQueue queue = this.queue.get();
+                    if (queue != null) {
+                        //noinspection unchecked
+                        queue.queues[priority].add(this); // silently fail on shutdown
+                    }
+                    return true;
+                }
+                continue;
+            }
+        }
+
+        /**
+         * Attempts to set this task's priority level to the level specified.
+         * @param priority Specified priority level.
+         * @return {@code true} if successful, {@code false} if this task is completing or has completed.
+         */
+        public boolean updatePriority(final int priority) {
+            if (!PrioritizedTaskQueue.validPriority(priority)) {
+                throw new IllegalArgumentException("Invalid priority");
+            }
+
+            for (int curr = this.getPriorityVolatile();;) {
+                if (curr == COMPLETING_PRIORITY) {
+                    return false;
+                }
+                if (curr == priority) {
+                    return true;
+                }
+
+                if (curr == (curr = this.compareAndExchangePriorityVolatile(curr, priority))) {
+                    PrioritizedTaskQueue queue = this.queue.get();
+                    if (queue != null) {
+                        //noinspection unchecked
+                        queue.queues[priority].add(this); // silently fail on shutdown
+                    }
+                    return true;
+                }
+                continue;
+            }
+        }
+
+        void setQueue(final PrioritizedTaskQueue queue) {
+            this.queue.set(queue);
+        }
+
+        /* priority */
+
+        protected final int getPriorityVolatile() {
+            return this.priority.get();
+        }
+
+        protected final int compareAndExchangePriorityVolatile(final int expect, final int update) {
+            if (this.priority.compareAndSet(expect, update)) {
+                return expect;
+            }
+            return this.priority.get();
+        }
+
+        protected final int exchangePriorityVolatile(final int value) {
+            return this.priority.getAndSet(value);
+        }
+    }
+}
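
A minimal usage sketch of the queue above (the PrintTask subclass and the drain loop are invented for illustration):

    // Illustration only: a trivial runnable task drained in priority order.
    final class PrintTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
        private final int level; // remembered because poll() moves the task to COMPLETING_PRIORITY
        PrintTask(final int priority) { super(priority); this.level = priority; }
        @Override public void run() { System.out.println("ran, scheduled at level " + this.level); }
    }

    PrioritizedTaskQueue<PrintTask> queue = new PrioritizedTaskQueue<>();
    queue.add(new PrintTask(PrioritizedTaskQueue.LOW_PRIORITY));
    queue.add(new PrintTask(PrioritizedTaskQueue.HIGHEST_PRIORITY)); // runs first despite being added second

    PrintTask task;
    while ((task = queue.poll()) != null) {
        task.run(); // poll() has already CAS'd the task to COMPLETING_PRIORITY
    }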

diff --git a/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
new file mode 100644
index 0000000000000000000000000000000000000000..ee906b594b306906c170180a29a8b61997d05168
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/QueueExecutorThread.java
@@ -0,0 +1,241 @@
+package com.destroystokyo.paper.io;
+
+import net.minecraft.server.MinecraftServer;
+import org.apache.logging.log4j.Logger;
+
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.atomic.AtomicBoolean;
+import java.util.concurrent.locks.LockSupport;
+
+public class QueueExecutorThread<T extends PrioritizedTaskQueue.PrioritizedTask & Runnable> extends Thread {
+
+    private static final Logger LOGGER = MinecraftServer.LOGGER;
+
+    protected final PrioritizedTaskQueue<T> queue;
+    protected final long spinWaitTime;
+
+    protected volatile boolean closed;
+
+    protected final AtomicBoolean parked = new AtomicBoolean();
+
+    protected volatile ConcurrentLinkedQueue<Thread> flushQueue = new ConcurrentLinkedQueue<>();
+    protected volatile long flushCycles;
+
+    public QueueExecutorThread(final PrioritizedTaskQueue<T> queue) {
+        this(queue, (int)(1.e6)); // 1.0ms
+    }
+
+    public QueueExecutorThread(final PrioritizedTaskQueue<T> queue, final long spinWaitTime) { // in ns
+        this.queue = queue;
+        this.spinWaitTime = spinWaitTime;
+    }
+
+    @Override
+    public void run() {
+        final long spinWaitTime = this.spinWaitTime;
+        main_loop:
+        for (;;) {
+            this.pollTasks(true);
+
+            // spinwait
+
+            final long start = System.nanoTime();
+
+            for (;;) {
+                // If we are interrupted for any reason, park() will always return immediately. Clear so that we don't needlessly use cpu in such an event.
+                Thread.interrupted();
+                LockSupport.parkNanos("Spinwaiting on tasks", 1000L); // 1us
+
+                if (this.pollTasks(true)) {
+                    // restart loop, found tasks
+                    continue main_loop;
+                }
+
+                if (this.handleClose()) {
+                    return; // we're done
+                }
+
+                if ((System.nanoTime() - start) >= spinWaitTime) {
+                    break;
+                }
+            }
+
+            if (this.handleClose()) {
+                return;
+            }
+
+            this.parked.set(true);
+
+            // We need to poll here to avoid a race condition where a thread queues a task before we set parked to true
+            // (i.e it will not notify us)
+            if (this.pollTasks(true)) {
+                this.parked.set(false);
+                continue;
+            }
+
+            if (this.handleClose()) {
+                return;
+            }
+
+            // we don't need to check parked before sleeping, but we do need to check parked in a do-while loop
+            // LockSupport.park() can fail for any reason
+            do {
+                Thread.interrupted();
+                LockSupport.park("Waiting on tasks");
+            } while (this.parked.get());
+        }
+    }
+
+    protected boolean handleClose() {
+        if (this.closed) {
+            this.pollTasks(true); // this ensures we've emptied the queue
+            this.handleFlushThreads(true);
+            return true;
+        }
+        return false;
+    }
+
+    protected boolean pollTasks(boolean flushTasks) {
+        Runnable task;
+        boolean ret = false;
+
+        while ((task = this.queue.poll()) != null) {
+            ret = true;
+            try {
+                task.run();
+            } catch (final Throwable throwable) {
+                if (throwable instanceof ThreadDeath) {
+                    throw (ThreadDeath)throwable;
+                }
+                LOGGER.fatal("Exception thrown from prioritized runnable task in thread '" + this.getName() + "': " + IOUtil.genericToString(task), throwable);
+            }
+        }
+
+        if (flushTasks) {
+            this.handleFlushThreads(false);
+        }
+
+        return ret;
+    }
+
+    protected void handleFlushThreads(final boolean shutdown) {
+        Thread parking;
+        ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
+        do {
+            ++flushCycles; // may be plain read opaque write
+            while ((parking = flushQueue.poll()) != null) {
+                LockSupport.unpark(parking);
+            }
+        } while (this.pollTasks(false));
+
+        if (shutdown) {
+            this.flushQueue = null;
+
+            // defend against a race condition where a flush thread double-checks right before we set to null
+            while ((parking = flushQueue.poll()) != null) {
+                LockSupport.unpark(parking);
+            }
+        }
+    }
+
+    /**
+     * Notifies this thread that a task has been added to its queue.
+     * @return {@code true} if this thread was waiting for tasks, {@code false} if it is executing tasks
+     */
+    public boolean notifyTasks() {
+        if (this.parked.get() && this.parked.getAndSet(false)) {
+            LockSupport.unpark(this);
+            return true;
+        }
+        return false;
+    }
+
+    protected void queueTask(final T task) {
+        this.queue.add(task);
+        this.notifyTasks();
+    }
+
+    /**
+     * Waits until this thread's queue is empty.
+     *
+     * @throws IllegalStateException If the current thread is {@code this} thread.
+     */
+    public void flush() {
+        final Thread currentThread = Thread.currentThread();
+
+        if (currentThread == this) {
+            // avoid deadlock
+            throw new IllegalStateException("Cannot flush the queue executor thread while on the queue executor thread");
+        }
+
+        // order is important
+
+        int successes = 0;
+        long lastCycle = -1L;
+
+        do {
+            final ConcurrentLinkedQueue<Thread> flushQueue = this.flushQueue;
+            if (flushQueue == null) {
+                return;
+            }
+
+            flushQueue.add(currentThread);
+
+            // double check flush queue
+            if (this.flushQueue == null) {
+                return;
+            }
+
+            final long currentCycle = this.flushCycles; // may be opaque read
+
+            if (currentCycle == lastCycle) {
+                Thread.yield();
+                continue;
+            }
+
+            // force response
+            this.parked.set(false);
+            LockSupport.unpark(this);
+
+            LockSupport.park("flushing queue executor thread");
+
+            // returns whether there are tasks queued, does not return whether there are tasks executing
+            // this is why we cycle twice through flush (we know a pollTask call is made after a flush cycle)
+            // we really only need to guarantee that the tasks this thread has queued have gone through, and can leave
+            // tasks queued concurrently that are unsynchronized with this thread as undefined behavior
+            if (this.queue.hasTasks()) {
|
||
|
+ successes = 0;
|
||
|
+ } else {
|
||
|
+ ++successes;
|
||
|
+ }
|
||
|
+
|
||
|
+ } while (successes != 2);
|
||
|
+
|
||
|
+ }
|
||
|
+
|
||
|
+ /**
|
||
|
+ * Closes this queue executor's queue and optionally waits for it to empty.
|
||
|
+ * <p>
|
||
|
+ * If wait is {@code true}, then the queue will be empty by the time this call completes.
|
||
|
+ * </p>
|
||
|
+ * <p>
|
||
|
+ * This function is MT-Safe.
|
||
|
+ * </p>
|
||
|
+ * @param wait If this call is to wait until the queue is empty
|
||
|
+ * @param killQueue Whether to shutdown this thread's queue
|
||
|
+ * @return whether this thread shut down the queue
|
||
|
+ */
|
||
|
+ public boolean close(final boolean wait, final boolean killQueue) {
|
||
|
+ boolean ret = !killQueue ? false : this.queue.shutdown();
|
||
|
+ this.closed = true;
|
||
|
+
|
||
|
+ // force thread to respond to the shutdown
|
||
|
+ this.parked.set(false);
|
||
|
+ LockSupport.unpark(this);
|
||
|
+
|
||
|
+ if (wait) {
|
||
|
+ this.flush();
|
||
|
+ }
|
||
|
+ return ret;
|
||
|
+ }
|
||
|
+}
|
||
|
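Taken together, the expected lifecycle of this executor is: construct against a PrioritizedTaskQueue, start, feed, flush, close. A minimal sketch (the ChunkTask instance here is hypothetical; the 100_000ns spin window simply mirrors the (long)0.10e6 value ChunkTaskManager uses further below):

    PrioritizedTaskQueue<ChunkTask> queue = new PrioritizedTaskQueue<>();
    QueueExecutorThread<ChunkTask> worker = new QueueExecutorThread<>(queue, 100_000L); // 0.1ms spin-wait
    worker.setName("Example async task thread");
    worker.start();

    queue.add(task);      // publish the task first...
    worker.notifyTasks(); // ...then wake the worker; see ChunkTaskManager#internalSchedule below

    worker.flush();           // from another thread: blocks until the queue is observed empty twice
    worker.close(true, true); // shut the queue down and wait for it to drain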
diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..26a5da48c87674f320aa9f7382217cde2c93e08c
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkLoadTask.java
@@ -0,0 +1,145 @@
+package com.destroystokyo.paper.io.chunk;
+
+import co.aikar.timings.Timing;
+import com.destroystokyo.paper.io.PaperFileIOThread;
+import com.destroystokyo.paper.io.IOUtil;
+import java.util.ArrayDeque;
+import java.util.function.Consumer;
+import net.minecraft.server.level.ChunkMap;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.ChunkPos;
+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
+
+public final class ChunkLoadTask extends ChunkTask {
+
+    public boolean cancelled;
+
+    Consumer<ChunkSerializer.InProgressChunkHolder> onComplete;
+    public PaperFileIOThread.ChunkData chunkData;
+
+    private boolean hasCompleted;
+
+    public ChunkLoadTask(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
+                         final ChunkTaskManager taskManager,
+                         final Consumer<ChunkSerializer.InProgressChunkHolder> onComplete) {
+        super(world, chunkX, chunkZ, priority, taskManager);
+        this.onComplete = onComplete;
+    }
+
+    private static final ArrayDeque<Runnable> EMPTY_QUEUE = new ArrayDeque<>();
+
+    private static ChunkSerializer.InProgressChunkHolder createEmptyHolder() {
+        return new ChunkSerializer.InProgressChunkHolder(null, EMPTY_QUEUE);
+    }
+
+    @Override
+    public void run() {
+        try {
+            this.executeTask();
+        } catch (final Throwable ex) {
+            PaperFileIOThread.LOGGER.error("Failed to execute chunk load task: " + this.toString(), ex);
+            if (!this.hasCompleted) {
+                this.complete(ChunkLoadTask.createEmptyHolder());
+            }
+        }
+    }
+
+    private boolean checkCancelled() {
+        if (this.cancelled) {
+            // IntelliJ does not understand that writes to cancelled may occur concurrently.
+            return this.taskManager.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(this.chunkX, this.chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
+                if (valueInMap != ChunkLoadTask.this) {
+                    throw new IllegalStateException("Expected this task to be scheduled, but another was! Other: " + valueInMap + ", current: " + ChunkLoadTask.this);
+                }
+
+                if (valueInMap.cancelled) {
+                    return null;
+                }
+                return valueInMap;
+            }) == null;
+        }
+        return false;
+    }
+
+    public void executeTask() {
+        if (this.checkCancelled()) {
+            return;
+        }
+
+        // either executed synchronously or asynchronously
+        final PaperFileIOThread.ChunkData chunkData = this.chunkData;
+
+        if (chunkData.poiData == PaperFileIOThread.FAILURE_VALUE || chunkData.chunkData == PaperFileIOThread.FAILURE_VALUE) {
+            PaperFileIOThread.LOGGER.error("Could not load chunk for task: " + this.toString() + ", file IO thread has dumped the relevant exception above");
+            this.complete(ChunkLoadTask.createEmptyHolder());
+            return;
+        }
+
+        if (chunkData.chunkData == null) {
+            // not on disk
+            this.complete(ChunkLoadTask.createEmptyHolder());
+            return;
+        }
+
+        final ChunkPos chunkPos = new ChunkPos(this.chunkX, this.chunkZ);
+
+        final ChunkMap chunkManager = this.world.getChunkSource().chunkMap;
+
+        try (Timing ignored = this.world.timings.chunkLoadLevelTimer.startTimingIfSync()) {
+            final ChunkSerializer.InProgressChunkHolder chunkHolder;
+
+            // apply fixes
+
+            try {
+                chunkData.chunkData = chunkManager.getChunkData(this.world.getTypeKey(),
+                    chunkManager.getWorldPersistentDataSupplier(), chunkData.chunkData, chunkPos, this.world); // clone data for safety, file IO thread does not clone
+            } catch (final Throwable ex) {
+                PaperFileIOThread.LOGGER.error("Could not apply datafixers for chunk task: " + this.toString(), ex);
+                this.complete(ChunkLoadTask.createEmptyHolder());
+                return; // do not continue with data that failed to datafix; complete() may only run once
+            }
+
+            if (this.checkCancelled()) {
+                return;
+            }
+
+            try {
+                this.world.getChunkSource().chunkMap.updateChunkStatusOnDisk(chunkPos, chunkData.chunkData);
+            } catch (final Throwable ex) {
+                PaperFileIOThread.LOGGER.warn("Failed to update chunk status cache for task: " + this.toString(), ex);
+                // non-fatal, continue
+            }
+
+            try {
+                chunkHolder = ChunkSerializer.loadChunk(this.world,
+                    chunkManager.structureManager, chunkManager.getVillagePlace(), chunkPos,
+                    chunkData.chunkData, true);
+            } catch (final Throwable ex) {
+                PaperFileIOThread.LOGGER.error("Could not de-serialize chunk data for task: " + this.toString(), ex);
+                this.complete(ChunkLoadTask.createEmptyHolder());
+                return;
+            }
+
+            this.complete(chunkHolder);
+        }
+    }
+
+    private void complete(final ChunkSerializer.InProgressChunkHolder holder) {
+        this.hasCompleted = true;
+        holder.poiData = this.chunkData == null ? null : this.chunkData.poiData;
+
+        this.taskManager.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(this.chunkX, this.chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
+            if (valueInMap != ChunkLoadTask.this) {
+                throw new IllegalStateException("Expected this task to be scheduled, but another was! Other: " + valueInMap + ", current: " + ChunkLoadTask.this);
+            }
+            if (valueInMap.cancelled) {
+                return null;
+            }
+            try {
+                ChunkLoadTask.this.onComplete.accept(holder);
+            } catch (final Throwable thr) {
+                PaperFileIOThread.LOGGER.error("Failed to complete chunk data for task: " + this.toString(), thr);
+            }
+            return null;
+        });
+    }
+}
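Both checkCancelled() and complete() above lean on ConcurrentHashMap.compute providing an atomic check-then-act over the per-chunk task map. A stripped-down sketch of the same idiom, with the map and task names simplified for illustration:

    // returns true if this call observed the cancel flag and atomically removed the mapping
    static boolean tryConsumeCancel(ConcurrentHashMap<Long, ChunkLoadTask> tasks, long key, ChunkLoadTask expected) {
        return tasks.compute(key, (keyInMap, valueInMap) -> {
            if (valueInMap != expected) {
                throw new IllegalStateException("Another task was scheduled for this chunk");
            }
            return valueInMap.cancelled ? null : valueInMap; // returning null removes the entry
        }) == null;
    }

Because the remapping function runs while the map holds the bin lock for that key, a concurrent cancelChunkLoad() and a completing worker can never both "win" for the same chunk.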
diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..69ebbfa171385c46a84d1a0d241d168a8c2af145
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkSaveTask.java
@@ -0,0 +1,111 @@
+package com.destroystokyo.paper.io.chunk;
+
+import co.aikar.timings.Timing;
+import com.destroystokyo.paper.io.PaperFileIOThread;
+import com.destroystokyo.paper.io.IOUtil;
+import com.destroystokyo.paper.io.PrioritizedTaskQueue;
+
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.atomic.AtomicInteger;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
+
+public final class ChunkSaveTask extends ChunkTask {
+
+    public final ChunkSerializer.AsyncSaveData asyncSaveData;
+    public final ChunkAccess chunk;
+    public final CompletableFuture<CompoundTag> onComplete = new CompletableFuture<>();
+
+    private final AtomicInteger attemptedPriority;
+
+    public ChunkSaveTask(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
+                         final ChunkTaskManager taskManager, final ChunkSerializer.AsyncSaveData asyncSaveData,
+                         final ChunkAccess chunk) {
+        super(world, chunkX, chunkZ, priority, taskManager);
+        this.chunk = chunk;
+        this.asyncSaveData = asyncSaveData;
+        this.attemptedPriority = new AtomicInteger(priority);
+    }
+
+    @Override
+    public void run() {
+        // can be executed asynchronously or synchronously
+        final CompoundTag compound;
+
+        try (Timing ignored = this.world.timings.chunkUnloadDataSave.startTimingIfSync()) {
+            compound = ChunkSerializer.saveChunk(this.world, this.chunk, this.asyncSaveData);
+        } catch (final Throwable ex) {
+            // did a plugin modify something it should not have, causing a CME?
+            PaperFileIOThread.LOGGER.error("Failed to serialize unloading chunk data for task: " + this.toString() + ", falling back to a synchronous execution", ex);
+
+            // Note: We add to the server thread queue here since this is what the server will drain tasks from
+            // when waiting for chunks
+            ChunkTaskManager.queueChunkWaitTask(() -> {
+                try (Timing ignored = this.world.timings.chunkUnloadDataSave.startTiming()) {
+                    CompoundTag data = PaperFileIOThread.FAILURE_VALUE;
+
+                    try {
+                        data = ChunkSerializer.saveChunk(this.world, this.chunk, this.asyncSaveData);
+                        PaperFileIOThread.LOGGER.info("Successfully serialized chunk data for task: " + this.toString() + " synchronously");
+                    } catch (final Throwable ex1) {
+                        PaperFileIOThread.LOGGER.fatal("Failed to synchronously serialize unloading chunk data for task: " + this.toString() + "! Chunk data will be lost", ex1);
+                    }
+
+                    ChunkSaveTask.this.complete(data);
+                }
+            });
+
+            return; // the main thread will now complete the data
+        }
+
+        this.complete(compound);
+    }
+
+    @Override
+    public boolean raisePriority(final int priority) {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalStateException("Invalid priority: " + priority);
+        }
+
+        // we know priority is valid here
+        for (int curr = this.attemptedPriority.get();;) {
+            if (curr <= priority) {
+                break; // curr is higher/same priority
+            }
+            if (this.attemptedPriority.compareAndSet(curr, priority)) {
+                break;
+            }
+            curr = this.attemptedPriority.get();
+        }
+
+        return super.raisePriority(priority);
+    }
+
+    @Override
+    public boolean updatePriority(final int priority) {
+        if (!PrioritizedTaskQueue.validPriority(priority)) {
+            throw new IllegalStateException("Invalid priority: " + priority);
+        }
+        this.attemptedPriority.set(priority);
+        return super.updatePriority(priority);
+    }
+
+    private void complete(final CompoundTag compound) {
+        try {
+            this.onComplete.complete(compound);
+        } catch (final Throwable thr) {
+            PaperFileIOThread.LOGGER.error("Failed to complete chunk data for task: " + this.toString(), thr);
+        }
+        if (compound != PaperFileIOThread.FAILURE_VALUE) {
+            PaperFileIOThread.Holder.INSTANCE.scheduleSave(this.world, this.chunkX, this.chunkZ, null, compound, this.attemptedPriority.get());
+        }
+        this.taskManager.chunkSaveTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(this.chunkX, this.chunkZ)), (final Long keyInMap, final ChunkSaveTask valueInMap) -> {
+            if (valueInMap != ChunkSaveTask.this) {
+                throw new IllegalStateException("Expected this task to be scheduled, but another was! Other: " + valueInMap + ", this: " + ChunkSaveTask.this);
+            }
+            return null;
+        });
+    }
+}
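raisePriority() above uses the standard compare-and-set retry loop so that attemptedPriority only ever moves toward more urgent values (lower numbers), no matter how many threads race. The same loop in isolation, as a sketch:

    // assumes lower numeric values mean more urgent, as in PrioritizedTaskQueue
    static void raise(final AtomicInteger attempted, final int priority) {
        for (int curr = attempted.get();;) {
            if (curr <= priority) {
                return; // already at least this urgent - nothing to do
            }
            if (attempted.compareAndSet(curr, priority)) {
                return; // we won the race
            }
            curr = attempted.get(); // lost the race - re-read and retry
        }
    }

updatePriority(), by contrast, is an unconditional set() and may move the value in either direction.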
diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java
new file mode 100644
index 0000000000000000000000000000000000000000..058fb5a41565e6ce2acbd1f4d071a1b8be449f5d
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTask.java
@@ -0,0 +1,40 @@
+package com.destroystokyo.paper.io.chunk;
+
+import com.destroystokyo.paper.io.PaperFileIOThread;
+import com.destroystokyo.paper.io.PrioritizedTaskQueue;
+import net.minecraft.server.level.ServerLevel;
+
+abstract class ChunkTask extends PrioritizedTaskQueue.PrioritizedTask implements Runnable {
+
+    public final ServerLevel world;
+    public final int chunkX;
+    public final int chunkZ;
+    public final ChunkTaskManager taskManager;
+
+    public ChunkTask(final ServerLevel world, final int chunkX, final int chunkZ, final int priority,
+                     final ChunkTaskManager taskManager) {
+        super(priority);
+        this.world = world;
+        this.chunkX = chunkX;
+        this.chunkZ = chunkZ;
+        this.taskManager = taskManager;
+    }
+
+    @Override
+    public String toString() {
+        return "Chunk task: class:" + this.getClass().getName() + ", for world '" + this.world.getWorld().getName() +
+            "', (" + this.chunkX + "," + this.chunkZ + "), hashcode:" + this.hashCode() + ", priority: " + this.getPriority();
+    }
+
+    @Override
+    public boolean raisePriority(final int priority) {
+        PaperFileIOThread.Holder.INSTANCE.bumpPriority(this.world, this.chunkX, this.chunkZ, priority);
+        return super.raisePriority(priority);
+    }
+
+    @Override
+    public boolean updatePriority(final int priority) {
+        PaperFileIOThread.Holder.INSTANCE.setPriority(this.world, this.chunkX, this.chunkZ, priority);
+        return super.updatePriority(priority);
+    }
+}
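Every per-chunk map in these classes is keyed by IOUtil.getCoordinateKey(chunkX, chunkZ). Its body is not shown in the hunks above; the conventional packing of two 32-bit chunk coordinates into one long key looks like the following sketch (an assumption for illustration, not necessarily the exact implementation):

    static long getCoordinateKey(final int chunkX, final int chunkZ) {
        return ((long) chunkZ << 32) | (chunkX & 0xFFFFFFFFL); // z in the high bits, x in the low bits
    }

The masking on chunkX matters: without it, sign extension of negative x coordinates would clobber the z half of the key.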
diff --git a/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java
new file mode 100644
index 0000000000000000000000000000000000000000..499aff1f1e1ffc01ba8f9de43ca17899525a306f
--- /dev/null
+++ b/src/main/java/com/destroystokyo/paper/io/chunk/ChunkTaskManager.java
@@ -0,0 +1,513 @@
+package com.destroystokyo.paper.io.chunk;
+
+import com.destroystokyo.paper.io.PaperFileIOThread;
+import com.destroystokyo.paper.io.IOUtil;
+import com.destroystokyo.paper.io.PrioritizedTaskQueue;
+import com.destroystokyo.paper.io.QueueExecutorThread;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.server.MinecraftServer;
+import net.minecraft.server.level.ChunkHolder;
+import net.minecraft.server.level.ServerChunkCache;
+import net.minecraft.server.level.ServerLevel;
+import net.minecraft.util.thread.BlockableEventLoop;
+import net.minecraft.world.level.chunk.ChunkAccess;
+import net.minecraft.world.level.chunk.ChunkStatus;
+import net.minecraft.world.level.chunk.storage.ChunkSerializer;
+import org.apache.commons.lang.StringUtils;
+import org.apache.logging.log4j.Level;
+import org.bukkit.Bukkit;
+import org.spigotmc.AsyncCatcher;
+
+import java.util.ArrayDeque;
+import java.util.HashSet;
+import java.util.Set;
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.ConcurrentHashMap;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.function.Consumer;
+
+public final class ChunkTaskManager {
+
+    private final QueueExecutorThread<ChunkTask>[] workers;
+    private final ServerLevel world;
+
+    private final PrioritizedTaskQueue<ChunkTask> queue;
+    private final boolean perWorldQueue;
+
+    final ConcurrentHashMap<Long, ChunkLoadTask> chunkLoadTasks = new ConcurrentHashMap<>(64, 0.5f);
+    final ConcurrentHashMap<Long, ChunkSaveTask> chunkSaveTasks = new ConcurrentHashMap<>(64, 0.5f);
+
+    private final PrioritizedTaskQueue<ChunkTask> chunkTasks = new PrioritizedTaskQueue<>(); // used if async chunks are disabled in config
+
+    protected static QueueExecutorThread<ChunkTask>[] globalWorkers;
+    protected static QueueExecutorThread<ChunkTask> globalUrgentWorker;
+    protected static PrioritizedTaskQueue<ChunkTask> globalQueue;
+    protected static PrioritizedTaskQueue<ChunkTask> globalUrgentQueue;
+
+    protected static final ConcurrentLinkedQueue<Runnable> CHUNK_WAIT_QUEUE = new ConcurrentLinkedQueue<>();
+
+    public static final ArrayDeque<ChunkInfo> WAITING_CHUNKS = new ArrayDeque<>(); // stack
+
+    private static final class ChunkInfo {
+
+        public final int chunkX;
+        public final int chunkZ;
+        public final ServerLevel world;
+
+        public ChunkInfo(final int chunkX, final int chunkZ, final ServerLevel world) {
+            this.chunkX = chunkX;
+            this.chunkZ = chunkZ;
+            this.world = world;
+        }
+
+        @Override
+        public String toString() {
+            return "[(" + this.chunkX + "," + this.chunkZ + ") in '" + this.world.getWorld().getName() + "']";
+        }
+    }
+
+    public static void pushChunkWait(final ServerLevel world, final int chunkX, final int chunkZ) {
+        synchronized (WAITING_CHUNKS) {
+            WAITING_CHUNKS.push(new ChunkInfo(chunkX, chunkZ, world));
+        }
+    }
+
+    public static void popChunkWait() {
+        synchronized (WAITING_CHUNKS) {
+            WAITING_CHUNKS.pop();
+        }
+    }
+
+    private static ChunkInfo[] getChunkInfos() {
+        ChunkInfo[] chunks;
+        synchronized (WAITING_CHUNKS) {
+            chunks = WAITING_CHUNKS.toArray(new ChunkInfo[0]);
+        }
+        return chunks;
+    }
+
+    public static void dumpAllChunkLoadInfo() {
+        ChunkInfo[] chunks = getChunkInfos();
+        if (chunks.length > 0) {
+            PaperFileIOThread.LOGGER.log(Level.ERROR, "Chunk wait task info below: ");
+
+            for (final ChunkInfo chunkInfo : chunks) {
+                final long key = IOUtil.getCoordinateKey(chunkInfo.chunkX, chunkInfo.chunkZ);
+                final ChunkLoadTask loadTask = chunkInfo.world.asyncChunkTaskManager.chunkLoadTasks.get(key);
+                final ChunkSaveTask saveTask = chunkInfo.world.asyncChunkTaskManager.chunkSaveTasks.get(key);
+
+                PaperFileIOThread.LOGGER.log(Level.ERROR, chunkInfo.chunkX + "," + chunkInfo.chunkZ + " in '" + chunkInfo.world.getWorld().getName() + "':");
+                PaperFileIOThread.LOGGER.log(Level.ERROR, "Load Task - " + (loadTask == null ? "none" : loadTask.toString()));
+                PaperFileIOThread.LOGGER.log(Level.ERROR, "Save Task - " + (saveTask == null ? "none" : saveTask.toString()));
+                // log current status of chunk to indicate whether we're waiting on generation or loading
+                ChunkHolder chunkHolder = chunkInfo.world.getChunkSource().chunkMap.getVisibleChunkIfPresent(key);
+
+                dumpChunkInfo(new HashSet<>(), chunkHolder, chunkInfo.chunkX, chunkInfo.chunkZ);
+            }
+        }
+    }
+
+    static void dumpChunkInfo(Set<ChunkHolder> seenChunks, ChunkHolder chunkHolder, int x, int z) {
+        dumpChunkInfo(seenChunks, chunkHolder, x, z, 0, 1);
+    }
+
+    static void dumpChunkInfo(Set<ChunkHolder> seenChunks, ChunkHolder chunkHolder, int x, int z, int indent, int maxDepth) {
+        if (seenChunks.contains(chunkHolder)) {
+            return;
+        }
+        if (indent > maxDepth) {
+            return;
+        }
+        seenChunks.add(chunkHolder);
+        String indentStr = StringUtils.repeat(" ", indent);
+        if (chunkHolder == null) {
+            PaperFileIOThread.LOGGER.log(Level.ERROR, indentStr + "Chunk Holder - null for (" + x + "," + z + ")");
+        } else {
+            ChunkAccess chunk = chunkHolder.getAvailableChunkNow();
+            ChunkStatus holderStatus = chunkHolder.getChunkHolderStatus();
+            PaperFileIOThread.LOGGER.log(Level.ERROR, indentStr + "Chunk Holder - non-null");
+            PaperFileIOThread.LOGGER.log(Level.ERROR, indentStr + "Chunk Status - " + ((chunk == null) ? "null chunk" : chunk.getStatus().toString()));
+            PaperFileIOThread.LOGGER.log(Level.ERROR, indentStr + "Chunk Ticket Status - " + ChunkHolder.getStatus(chunkHolder.getTicketLevel()));
+            PaperFileIOThread.LOGGER.log(Level.ERROR, indentStr + "Chunk Holder Status - " + ((holderStatus == null) ? "null" : holderStatus.toString()));
+        }
+    }
+
+    public static void initGlobalLoadThreads(int threads) {
+        if (threads <= 0 || globalWorkers != null) {
+            return;
+        }
+
+        globalWorkers = new QueueExecutorThread[threads];
+        globalQueue = new PrioritizedTaskQueue<>();
+        globalUrgentQueue = new PrioritizedTaskQueue<>();
+
+        for (int i = 0; i < threads; ++i) {
+            globalWorkers[i] = new QueueExecutorThread<>(globalQueue, (long)0.10e6); // 0.1ms
+            globalWorkers[i].setName("Paper Async Chunk Task Thread #" + i);
+            globalWorkers[i].setPriority(Thread.NORM_PRIORITY - 1);
+            globalWorkers[i].setUncaughtExceptionHandler((final Thread thread, final Throwable throwable) -> {
+                PaperFileIOThread.LOGGER.fatal("Thread '" + thread.getName() + "' threw an uncaught exception!", throwable);
+            });
+
+            globalWorkers[i].start();
+        }
+
+        globalUrgentWorker = new QueueExecutorThread<>(globalUrgentQueue, (long)0.10e6); // 0.1ms
+        globalUrgentWorker.setName("Paper Async Chunk Urgent Task Thread");
+        globalUrgentWorker.setPriority(Thread.NORM_PRIORITY + 1);
+        globalUrgentWorker.setUncaughtExceptionHandler((final Thread thread, final Throwable throwable) -> {
+            PaperFileIOThread.LOGGER.fatal("Thread '" + thread.getName() + "' threw an uncaught exception!", throwable);
+        });
+
+        globalUrgentWorker.start();
+    }
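+
+    // Illustrative usage only: the global pool is expected to be sized from configuration and
+    // initialized once during bootstrap, before any world constructs its per-world
+    // ChunkTaskManager. For example (the sizing expression here is an assumption, not a value
+    // taken from this patch):
+    //   ChunkTaskManager.initGlobalLoadThreads(Math.max(1, Runtime.getRuntime().availableProcessors() / 2));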
+
+    /**
+     * Creates this chunk task manager to operate off the specified number of threads. If the specified number of threads is
+     * less than or equal to 0, then this chunk task manager will operate off of the world's chunk task queue.
+     * @param world Specified world.
+     * @param threads Specified number of threads.
+     * @see ServerChunkCache#mainThreadProcessor
+     */
+    public ChunkTaskManager(final ServerLevel world, final int threads) {
+        this.world = world;
+        this.workers = threads <= 0 ? null : new QueueExecutorThread[threads];
+        this.queue = new PrioritizedTaskQueue<>();
+        this.perWorldQueue = true;
+
+        for (int i = 0; i < threads; ++i) {
+            this.workers[i] = new QueueExecutorThread<>(this.queue, (long)0.10e6); // 0.1ms
+            this.workers[i].setName("Async chunk loader thread #" + i + " for world: " + world.getWorld().getName());
+            this.workers[i].setPriority(Thread.NORM_PRIORITY - 1);
+            this.workers[i].setUncaughtExceptionHandler((final Thread thread, final Throwable throwable) -> {
+                PaperFileIOThread.LOGGER.fatal("Thread '" + thread.getName() + "' threw an uncaught exception!", throwable);
+            });
+
+            this.workers[i].start();
+        }
+    }
+
+    /**
+     * Creates the chunk task manager to work from the global workers. When {@link #close(boolean)} is invoked,
+     * the global queue is not shut down. If the global worker pool is disabled or configured to use 0 threads, then
+     * this chunk task manager will operate off of the world's chunk task queue.
+     * @param world The world that this task manager is responsible for
+     * @see ServerChunkCache#mainThreadProcessor
+     */
+    public ChunkTaskManager(final ServerLevel world) {
+        this.world = world;
+        this.workers = globalWorkers;
+        this.queue = globalQueue;
+        this.perWorldQueue = false;
+    }
+
+    public boolean pollNextChunkTask() {
+        final ChunkTask task = this.chunkTasks.poll();
+
+        if (task != null) {
+            task.run();
+            return true;
+        }
+        return false;
+    }
+
+    /**
+     * Polls and runs the next available chunk wait queue task. This is to be used when the server is waiting on a chunk queue.
+     * (Per-world queues can cause issues if all of the worker threads are blocked waiting for a response from the main thread.)
+     */
+    public static boolean pollChunkWaitQueue() {
+        final Runnable run = CHUNK_WAIT_QUEUE.poll();
+        if (run != null) {
+            run.run();
+            return true;
+        }
+        return false;
+    }
+
+    /**
+     * Queues a chunk wait task. Note that this will execute out of order with respect to tasks scheduled on a world's
+     * chunk task queue, since this is the global chunk wait queue.
+     */
+    public static void queueChunkWaitTask(final Runnable runnable) {
+        CHUNK_WAIT_QUEUE.add(runnable);
+    }
+
+    private static void drainChunkWaitQueue() {
+        Runnable run;
+        while ((run = CHUNK_WAIT_QUEUE.poll()) != null) {
+            run.run();
+        }
+    }
+
+    /**
+     * The exact same as {@link #scheduleChunkLoad(int, int, int, Consumer, boolean)}, except that the chunk data is
+     * provided via the {@code dataFuture} parameter.
+     */
+    public ChunkLoadTask scheduleChunkLoad(final int chunkX, final int chunkZ, final int priority,
+                                           final Consumer<ChunkSerializer.InProgressChunkHolder> onComplete,
+                                           final boolean intendingToBlock, final CompletableFuture<CompoundTag> dataFuture) {
+        final ServerLevel world = this.world;
+
+        return this.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
+            if (valueInMap != null) {
+                if (!valueInMap.cancelled) {
+                    throw new IllegalStateException("Double scheduling chunk load for task: " + valueInMap.toString());
+                }
+                valueInMap.cancelled = false;
+                valueInMap.onComplete = onComplete;
+                return valueInMap;
+            }
+
+            final ChunkLoadTask ret = new ChunkLoadTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, onComplete);
+
+            dataFuture.thenAccept((final CompoundTag data) -> {
+                final boolean failed = data == PaperFileIOThread.FAILURE_VALUE;
+                PaperFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ, priority, (final PaperFileIOThread.ChunkData chunkData) -> {
+                    ret.chunkData = chunkData;
+                    if (!failed) {
+                        chunkData.chunkData = data;
+                    }
+                    ChunkTaskManager.this.internalSchedule(ret); // only schedule to the worker threads here
+                }, true, failed, intendingToBlock); // read data off disk if the future fails
+            });
+
+            return ret;
+        });
+    }
+
+    public void cancelChunkLoad(final int chunkX, final int chunkZ) {
+        this.chunkLoadTasks.compute(IOUtil.getCoordinateKey(chunkX, chunkZ), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
+            if (valueInMap == null) {
+                return null;
+            }
+
+            if (valueInMap.cancelled) {
+                PaperFileIOThread.LOGGER.warn("Task " + valueInMap.toString() + " is already cancelled!");
+            }
+            valueInMap.cancelled = true;
+            if (valueInMap.cancel()) {
+                return null;
+            }
+
+            return valueInMap;
+        });
+    }
+
+    /**
+     * Schedules an asynchronous chunk load for the specified coordinates. The onComplete parameter may be invoked asynchronously
+     * on a worker thread or on the world's chunk executor queue. As such, the code that is executed for the parameter should be
+     * carefully chosen.
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param priority Priority for this task
+     * @param onComplete The consumer to invoke with the {@link ChunkSerializer.InProgressChunkHolder} object once this task is complete
+     * @param intendingToBlock Whether the caller is intending to block on this task completing (this is a performance tune, and has no adverse side-effects)
+     * @return The {@link ChunkLoadTask} associated with this chunk load.
+     */
+    public ChunkLoadTask scheduleChunkLoad(final int chunkX, final int chunkZ, final int priority,
+                                           final Consumer<ChunkSerializer.InProgressChunkHolder> onComplete,
+                                           final boolean intendingToBlock) {
+        final ServerLevel world = this.world;
+
+        return this.chunkLoadTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkLoadTask valueInMap) -> {
+            if (valueInMap != null) {
+                if (!valueInMap.cancelled) {
+                    throw new IllegalStateException("Double scheduling chunk load for task: " + valueInMap.toString());
+                }
+                valueInMap.cancelled = false;
+                valueInMap.onComplete = onComplete;
+                return valueInMap;
+            }
+
+            final ChunkLoadTask ret = new ChunkLoadTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, onComplete);
+
+            PaperFileIOThread.Holder.INSTANCE.loadChunkDataAsync(world, chunkX, chunkZ, priority, (final PaperFileIOThread.ChunkData chunkData) -> {
+                ret.chunkData = chunkData;
+                ChunkTaskManager.this.internalSchedule(ret); // only schedule to the worker threads here
+            }, true, true, intendingToBlock);
+
+            return ret;
+        });
+    }
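+
+    // A sketch of a typical caller of the above (the priority constant and consumer body are
+    // illustrative only):
+    //   world.asyncChunkTaskManager.scheduleChunkLoad(chunkX, chunkZ, PrioritizedTaskQueue.LOW_PRIORITY,
+    //       (holder) -> {
+    //           // may run on a worker thread or on the chunk executor queue - keep this cheap
+    //           boolean onDisk = holder.protoChunk != null;
+    //       }, false); // false: this caller does not intend to block on the load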
+
+    /**
+     * Schedules an async save for the specified chunk. The chunk, at the beginning of this call, must be completely unloaded
+     * from the world.
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @param priority Priority for this task
+     * @param asyncSaveData Async save data. See {@link ChunkSerializer#getAsyncSaveData(ServerLevel, ChunkAccess)}
+     * @param chunk Chunk to save
+     * @return The {@link ChunkSaveTask} associated with the save task.
+     */
+    public ChunkSaveTask scheduleChunkSave(final int chunkX, final int chunkZ, final int priority,
+                                           final ChunkSerializer.AsyncSaveData asyncSaveData,
+                                           final ChunkAccess chunk) {
+        AsyncCatcher.catchOp("chunk save schedule");
+
+        final ServerLevel world = this.world;
+
+        return this.chunkSaveTasks.compute(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)), (final Long keyInMap, final ChunkSaveTask valueInMap) -> {
+            if (valueInMap != null) {
+                throw new IllegalStateException("Double scheduling chunk save for task: " + valueInMap.toString());
+            }
+
+            final ChunkSaveTask ret = new ChunkSaveTask(world, chunkX, chunkZ, priority, ChunkTaskManager.this, asyncSaveData, chunk);
+
+            ChunkTaskManager.this.internalSchedule(ret);
+
+            return ret;
+        });
+    }
+
+    /**
+     * Returns a completable future which will be completed with the <b>un-copied</b> chunk data for an in-progress async save.
+     * Returns {@code null} if no save is in progress.
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     */
+    public CompletableFuture<CompoundTag> getChunkSaveFuture(final int chunkX, final int chunkZ) {
+        final ChunkSaveTask chunkSaveTask = this.chunkSaveTasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
+        if (chunkSaveTask == null) {
+            return null;
+        }
+        return chunkSaveTask.onComplete;
+    }
+
+    /**
+     * Returns the chunk object being used to serialize data async for an unloaded chunk. Note that modifying this chunk
+     * is not safe, as another thread is handling its save. The chunk is also not loaded into the world.
+     * @param chunkX Chunk's x coordinate
+     * @param chunkZ Chunk's z coordinate
+     * @return Chunk object for an in-progress async save, or {@code null} if no save is in progress
+     */
+    public ChunkAccess getChunkInSaveProgress(final int chunkX, final int chunkZ) {
+        final ChunkSaveTask chunkSaveTask = this.chunkSaveTasks.get(Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ)));
+        if (chunkSaveTask == null) {
+            return null;
+        }
+        return chunkSaveTask.chunk;
+    }
+
+    public void flush() {
+        // flush here since we schedule tasks on the IO thread that can schedule tasks here
+        drainChunkWaitQueue();
+        PaperFileIOThread.Holder.INSTANCE.flush();
+        drainChunkWaitQueue();
+
+        if (this.workers == null) {
+            if (Bukkit.isPrimaryThread() || MinecraftServer.getServer().hasStopped()) {
+                ((BlockableEventLoop<Runnable>)this.world.getChunkSource().mainThreadProcessor).runAllTasks();
+            } else {
+                CompletableFuture<Void> wait = new CompletableFuture<>();
+                MinecraftServer.getServer().scheduleOnMain(() -> {
+                    ((BlockableEventLoop<Runnable>)this.world.getChunkSource().mainThreadProcessor).runAllTasks();
+                    wait.complete(null); // complete the future so the join below can return
+                });
+                wait.join();
+            }
+        } else {
+            for (final QueueExecutorThread<ChunkTask> worker : this.workers) {
+                worker.flush();
+            }
+        }
+        if (globalUrgentWorker != null) globalUrgentWorker.flush();
+
+        // flush again, since the tasks we just executed may have scheduled more async saves
+        drainChunkWaitQueue();
+        PaperFileIOThread.Holder.INSTANCE.flush();
+    }
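+
+    // When a thread must block on one of these tasks, the surrounding patch has it poll the
+    // chunk wait queue instead of sleeping, so that fallback work (such as ChunkSaveTask's
+    // synchronous re-serialization) still makes progress. As a sketch, with the completion
+    // condition left abstract ('future' stands in for whatever is being waited on):
+    //   ChunkTaskManager.pushChunkWait(world, chunkX, chunkZ); // recorded for watchdog dumps
+    //   try {
+    //       while (!future.isDone()) {
+    //           while (ChunkTaskManager.pollChunkWaitQueue());
+    //           Thread.yield();
+    //       }
+    //   } finally {
+    //       ChunkTaskManager.popChunkWait();
+    //   }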
+
+    public void close(final boolean wait) {
+        // flush here since we schedule tasks on the IO thread that can schedule tasks to this task manager
+        // we do this regardless of the wait param since after we invoke close no tasks can be queued
+        PaperFileIOThread.Holder.INSTANCE.flush();
+
+        if (this.workers == null) {
+            if (wait) {
+                this.flush();
+            }
+            return;
+        }
+
+        if (this.workers != globalWorkers) {
+            for (final QueueExecutorThread<ChunkTask> worker : this.workers) {
+                worker.close(false, this.perWorldQueue);
+            }
+        }
+
+        if (wait) {
+            this.flush();
+        }
+    }
+
+    public void raisePriority(final int chunkX, final int chunkZ, final int priority) {
+        final Long chunkKey = Long.valueOf(IOUtil.getCoordinateKey(chunkX, chunkZ));
+
+        ChunkTask chunkSaveTask = this.chunkSaveTasks.get(chunkKey);
+        if (chunkSaveTask != null) {
+            // don't bump save into urgent queue
+            raiseTaskPriority(chunkSaveTask, priority != PrioritizedTaskQueue.HIGHEST_PRIORITY ? priority : PrioritizedTaskQueue.HIGH_PRIORITY);
+        }
+
+        ChunkLoadTask chunkLoadTask = this.chunkLoadTasks.get(chunkKey);
+        if (chunkLoadTask != null) {
+            raiseTaskPriority(chunkLoadTask, priority);
+        }
+    }
+
+    private void raiseTaskPriority(ChunkTask task, int priority) {
+        final boolean raised = task.raisePriority(priority);
+        if (task.isScheduled() && raised && this.workers != null) {
+            // only notify if we're in queue to be executed
+            if (priority == PrioritizedTaskQueue.HIGHEST_PRIORITY) {
+                // was in another queue but became urgent later; add to the urgent queue and the previous
+                // queue will just have to ignore this task if it has already been started.
+                // Ultimately, we now have 2 potential queues that can pull it out; whichever gets it
+                // first wins, but the urgent queue has dedicated thread(s), so it is likely to win.
+                globalUrgentQueue.add(task);
+                this.internalScheduleNotifyUrgent();
+            } else {
+                this.internalScheduleNotify();
+            }
+        }
+    }
+
+    protected void internalSchedule(final ChunkTask task) {
+        if (this.workers == null) {
+            this.chunkTasks.add(task);
+            return;
+        }
+
+        // It's important we order the task to be executed before notifying. Avoid a race condition where the worker thread
+        // wakes up and goes to sleep before we actually schedule (or it's just about to sleep)
+        if (task.getPriority() == PrioritizedTaskQueue.HIGHEST_PRIORITY) {
+            globalUrgentQueue.add(task);
+            this.internalScheduleNotifyUrgent();
+        } else {
+            this.queue.add(task);
+            this.internalScheduleNotify();
+        }
+
+    }
+
+    protected void internalScheduleNotify() {
+        if (this.workers == null) {
+            return;
+        }
+        for (final QueueExecutorThread<ChunkTask> worker : this.workers) {
+            if (worker.notifyTasks()) {
+                // break here since we only want to wake up one worker for scheduling one task
+                break;
+            }
+        }
+    }
+
+
+    protected void internalScheduleNotifyUrgent() {
+        if (globalUrgentWorker == null) {
+            return;
+        }
+        globalUrgentWorker.notifyTasks();
+    }
+
+}
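internalSchedule() publishes the task to the queue before waking a worker; paired with QueueExecutorThread's "declare parked, then re-poll" sequence, this forms a lost-wakeup-free handshake. Compressed to its essentials (parked is the worker's AtomicBoolean shown earlier in this patch):

    // producer side (internalSchedule -> notifyTasks):
    queue.add(task);                        // 1. publish the task first
    if (parked.getAndSet(false)) {          // 2. then wake the worker only if it declared itself parked
        LockSupport.unpark(worker);
    }

    // consumer side (QueueExecutorThread.run):
    parked.set(true);                       // 1. declare intent to sleep
    if (pollTasks(true)) {                  // 2. re-poll: catches a task published before step 1
        parked.set(false);                  //    found work - do not sleep
    } else {
        while (parked.get()) {              // 3. sleep until a producer clears parked
            LockSupport.park("Waiting on tasks");
        }
    }

If the producer enqueues between the consumer's steps 1 and 3, either the consumer's re-poll or the producer's unpark catches it; a task can never be stranded in the queue behind a sleeping worker.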
diff --git a/src/main/java/net/minecraft/network/protocol/game/ServerboundCommandSuggestionPacket.java b/src/main/java/net/minecraft/network/protocol/game/ServerboundCommandSuggestionPacket.java
index 354783f862986bf939639a86a9076ac0f5ed97e3..c171860bc117199ca00085bf37507f867d51fb62 100644
--- a/src/main/java/net/minecraft/network/protocol/game/ServerboundCommandSuggestionPacket.java
+++ b/src/main/java/net/minecraft/network/protocol/game/ServerboundCommandSuggestionPacket.java
@@ -14,7 +14,7 @@ public class ServerboundCommandSuggestionPacket implements Packet<ServerGamePack
     @Override
     public void read(FriendlyByteBuf buf) throws IOException {
         this.id = buf.readVarInt();
-        this.command = buf.readUtf(32500);
+        this.command = buf.readUtf(2048);
    }

    @Override
diff --git a/src/main/java/net/minecraft/server/MCUtil.java b/src/main/java/net/minecraft/server/MCUtil.java
index a16551c81a444685f6337a65b6d7862b8c0dc684..99c3337eec552ba47d3b8b2d8feaaa80acf2a86f 100644
--- a/src/main/java/net/minecraft/server/MCUtil.java
+++ b/src/main/java/net/minecraft/server/MCUtil.java
@@ -714,4 +714,9 @@ public final class MCUtil {
             out.print(fileData);
         }
     }
+
+    public static int getTicketLevelFor(ChunkStatus status) {
+        // TODO make sure the constant `33` is correct on future updates. See getChunkAt(int, int, ChunkStatus, boolean)
+        return 33 + ChunkStatus.getTicketLevelOffset(status);
+    }
 }
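A short worked example of the arithmetic above, under the assumption that ChunkStatus.getTicketLevelOffset(FULL) is 0 (offsets grow for statuses earlier in the generation pipeline): a FULL-status request maps to ticket level 33 + 0 = 33, and every step back in the pipeline yields a higher, less urgent level:

    int fullLevel = MCUtil.getTicketLevelFor(ChunkStatus.FULL);   // 33 under the stated assumption
    int emptyLevel = MCUtil.getTicketLevelFor(ChunkStatus.EMPTY); // 33 + getTicketLevelOffset(EMPTY)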
diff --git a/src/main/java/net/minecraft/server/Main.java b/src/main/java/net/minecraft/server/Main.java
index 2bfc54941ec34c75c2d59bda748c75730b9951f7..855b3b4c90d84d4efa8395a76010b4b194591cbc 100644
--- a/src/main/java/net/minecraft/server/Main.java
+++ b/src/main/java/net/minecraft/server/Main.java
@@ -36,6 +36,7 @@ import net.minecraft.server.players.GameProfileCache;
 import net.minecraft.util.Mth;
 import net.minecraft.util.datafix.DataFixers;
 import net.minecraft.util.worldupdate.WorldUpgrader;
+import net.minecraft.world.entity.npc.VillagerTrades;
 import net.minecraft.world.level.DataPackConfig;
 import net.minecraft.world.level.GameRules;
 import net.minecraft.world.level.dimension.DimensionType;
@@ -202,6 +203,7 @@ public class Main {
 
             convertable_conversionsession.a((IRegistryCustom) iregistrycustom_dimension, (SaveData) object);
             */
+            Class.forName(VillagerTrades.class.getName()); // Paper - load this sync so it won't fail later async
             final DedicatedServer dedicatedserver = (DedicatedServer) MinecraftServer.spin((thread) -> {
                 DedicatedServer dedicatedserver1 = new DedicatedServer(optionset, datapackconfiguration1, thread, iregistrycustom_dimension, convertable_conversionsession, resourcepackrepository, datapackresources, null, dedicatedserversettings, DataFixers.getDataFixer(), minecraftsessionservice, gameprofilerepository, usercache, LoggerChunkProgressListener::new);
 
diff --git a/src/main/java/net/minecraft/server/MinecraftServer.java b/src/main/java/net/minecraft/server/MinecraftServer.java
index aab1a055c065d1f1a92461e4442ec2cdd8e0b347..643d75b999c3da006eaaab11f4acd77e807683d4 100644
--- a/src/main/java/net/minecraft/server/MinecraftServer.java
+++ b/src/main/java/net/minecraft/server/MinecraftServer.java
@@ -920,7 +920,7 @@ public abstract class MinecraftServer extends ReentrantBlockableEventLoop<TickTa
             this.getProfileCache().b(false); // Paper
         }
         // Spigot end
-
+        com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.close(true, true); // Paper
     }
 
     public String getLocalIp() {
diff --git a/src/main/java/net/minecraft/server/level/ChunkHolder.java b/src/main/java/net/minecraft/server/level/ChunkHolder.java
index 491a9e78fdcec8c211499e8f48cceb829f1e5c8b..77d3969200ac6f88f3af9add05def0b627ce6db3 100644
--- a/src/main/java/net/minecraft/server/level/ChunkHolder.java
+++ b/src/main/java/net/minecraft/server/level/ChunkHolder.java
@@ -157,6 +157,18 @@ public class ChunkHolder {
         }
         return null;
     }
+
+    public ChunkStatus getChunkHolderStatus() {
+        for (ChunkStatus curr = ChunkStatus.FULL, next = curr.getPreviousStatus(); curr != next; curr = next, next = next.getPreviousStatus()) {
+            CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> future = this.getFutureIfPresentUnchecked(curr);
+            Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either = future.getNow(null);
+            if (either == null || !either.left().isPresent()) {
+                continue;
+            }
+            return curr;
+        }
+        return null;
+    }
     // Paper end
 
     public CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getFutureIfPresentUnchecked(ChunkStatus leastStatus) {
@@ -375,7 +387,7 @@ public class ChunkHolder {
         ChunkStatus chunkstatus = getStatus(this.oldTicketLevel);
         ChunkStatus chunkstatus1 = getStatus(this.ticketLevel);
         boolean flag = this.oldTicketLevel <= ChunkMap.MAX_CHUNK_DISTANCE;
-        boolean flag1 = this.ticketLevel <= ChunkMap.MAX_CHUNK_DISTANCE;
+        boolean flag1 = this.ticketLevel <= ChunkMap.MAX_CHUNK_DISTANCE; // Paper - diff on change: (flag1 = new ticket level is in loadable range)
         ChunkHolder.FullChunkStatus playerchunk_state = getFullChunkStatus(this.oldTicketLevel);
         ChunkHolder.FullChunkStatus playerchunk_state1 = getFullChunkStatus(this.ticketLevel);
         // CraftBukkit start
@@ -411,6 +423,12 @@ public class ChunkHolder {
             }
         });
 
+        // Paper start
+        if (!flag1) {
+            chunkStorage.level.asyncChunkTaskManager.cancelChunkLoad(this.pos.x, this.pos.z);
+        }
+        // Paper end
+
         for (int i = flag1 ? chunkstatus1.getIndex() + 1 : 0; i <= chunkstatus.getIndex(); ++i) {
             completablefuture = (CompletableFuture) this.futures.get(i);
             if (completablefuture != null) {
diff --git a/src/main/java/net/minecraft/server/level/ChunkMap.java b/src/main/java/net/minecraft/server/level/ChunkMap.java
|
||
|
index b2d668607c2b5122d06fa75f77b3cef44100fe28..c00f7c60ce7b497d697d1abdf230f91f327e2113 100644
|
||
|
--- a/src/main/java/net/minecraft/server/level/ChunkMap.java
|
||
|
+++ b/src/main/java/net/minecraft/server/level/ChunkMap.java
|
||
|
@@ -86,6 +86,7 @@ import net.minecraft.world.level.chunk.ProtoChunk;
|
||
|
import net.minecraft.world.level.chunk.UpgradeData;
|
||
|
import net.minecraft.world.level.chunk.storage.ChunkSerializer;
|
||
|
import net.minecraft.world.level.chunk.storage.ChunkStorage;
|
||
|
+import net.minecraft.world.level.chunk.storage.RegionFile;
|
||
|
import net.minecraft.world.level.levelgen.structure.StructureStart;
|
||
|
import net.minecraft.world.level.levelgen.structure.templatesystem.StructureManager;
|
||
|
import net.minecraft.world.level.storage.DimensionDataStorage;
|
||
|
@@ -110,7 +111,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
private final ThreadedLevelLightEngine lightEngine;
|
||
|
private final BlockableEventLoop<Runnable> mainThreadExecutor;
|
||
|
public final ChunkGenerator generator;
|
||
|
- private final Supplier<DimensionDataStorage> overworldDataStorage;
|
||
|
+ private final Supplier<DimensionDataStorage> overworldDataStorage; public final Supplier<DimensionDataStorage> getWorldPersistentDataSupplier() { return this.overworldDataStorage; } // Paper - OBFHELPER
|
||
|
private final PoiManager poiManager;
|
||
|
public final LongSet toDrop;
|
||
|
private boolean modified;
|
||
|
@@ -120,7 +121,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
public final ChunkProgressListener progressListener;
|
||
|
public final ChunkMap.ChunkDistanceManager distanceManager;
|
||
|
private final AtomicInteger tickingGenerated;
|
||
|
- private final StructureManager structureManager;
|
||
|
+ public final StructureManager structureManager; // Paper - private -> public
|
||
|
private final File storageFolder;
|
||
|
private final PlayerMap playerMap;
|
||
|
public final Int2ObjectMap<ChunkMap.TrackedEntity> entityMap;
|
||
|
@@ -203,7 +204,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
this.lightEngine = new ThreadedLevelLightEngine(chunkProvider, this, this.level.dimensionType().hasSkyLight(), threadedmailbox1, this.queueSorter.getProcessor(threadedmailbox1, false));
|
||
|
this.distanceManager = new ChunkMap.ChunkDistanceManager(workerExecutor, mainThreadExecutor);
|
||
|
this.overworldDataStorage = supplier;
|
||
|
- this.poiManager = new PoiManager(new File(this.storageFolder, "poi"), dataFixer, flag);
|
||
|
+ this.poiManager = new PoiManager(new File(this.storageFolder, "poi"), dataFixer, flag, this.level); // Paper
|
||
|
this.setViewDistance(i);
|
||
|
}
|
||
|
|
||
|
@@ -245,12 +246,12 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
}
|
||
|
|
||
|
@Nullable
|
||
|
- protected ChunkHolder getUpdatingChunkIfPresent(long pos) {
|
||
|
+ public ChunkHolder getUpdatingChunkIfPresent(long pos) { // Paper
|
||
|
return (ChunkHolder) this.updatingChunkMap.get(pos);
|
||
|
}
|
||
|
|
||
|
@Nullable
|
||
|
- protected ChunkHolder getVisibleChunkIfPresent(long pos) {
|
||
|
+ public ChunkHolder getVisibleChunkIfPresent(long pos) { // Paper - protected -> public
|
||
|
return (ChunkHolder) this.visibleChunkMap.get(pos);
|
||
|
}
|
||
|
|
||
|
@@ -372,6 +373,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
public void close() throws IOException {
|
||
|
try {
|
||
|
this.queueSorter.close();
|
||
|
+ this.level.asyncChunkTaskManager.close(true); // Paper - Required since we're closing regionfiles in the next line
|
||
|
this.poiManager.close();
|
||
|
} finally {
|
||
|
super.close();
|
||
|
@@ -463,7 +465,8 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
this.processUnloads(() -> {
|
||
|
return true;
|
||
|
});
|
||
|
- this.flushWorker();
|
||
|
+ this.level.asyncChunkTaskManager.flush(); // Paper - flush to preserve behavior compat with pre-async behaviour
|
||
|
+// this.i(); // Paper - nuke IOWorker
|
||
|
ChunkMap.LOGGER.info("ThreadedAnvilChunkStorage ({}): All chunks are saved", this.storageFolder.getName());
|
||
|
} else {
|
||
|
this.visibleChunkMap.values().stream().filter(ChunkHolder::wasAccessibleSinceLastSave).forEach((playerchunk) -> {
|
||
|
@@ -479,16 +482,20 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
|
||
|
|
||
|
}

- private static final double UNLOAD_QUEUE_RESIZE_FACTOR = 0.96; // Spigot
+ private static final double UNLOAD_QUEUE_RESIZE_FACTOR = 0.90; // Spigot // Paper - unload more

protected void tick(BooleanSupplier shouldKeepTicking) {
ProfilerFiller gameprofilerfiller = this.level.getProfiler();

+ try (Timing ignored = this.level.timings.poiUnload.startTiming()) { // Paper
gameprofilerfiller.push("poi");
this.poiManager.tick(shouldKeepTicking);
+ } // Paper
gameprofilerfiller.popPush("chunk_unload");
if (!this.level.noSave()) {
+ try (Timing ignored = this.level.timings.chunkUnload.startTiming()) { // Paper
this.processUnloads(shouldKeepTicking);
+ }// Paper
}

gameprofilerfiller.pop();
@@ -509,12 +516,13 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
if (playerchunk != null) {
this.pendingUnloads.put(j, playerchunk);
this.modified = true;
+ this.scheduleUnload(j, playerchunk); // Paper - Move up - don't leak chunks
// Spigot start
if (!shouldKeepTicking.getAsBoolean() && this.toDrop.size() <= targetSize && activityAccountant.activityTimeIsExhausted()) {
break;
}
// Spigot end
- this.scheduleUnload(j, playerchunk);
+ //this.a(j, playerchunk); // Paper - move up because spigot did a dumb
}
}
activityAccountant.endActivity(); // Spigot
@@ -528,6 +536,60 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider

}

+ // Paper start - async chunk save for unload
+ // Note: This is very unsafe to call if the chunk is still in use.
+ // This is also modeled after PlayerChunkMap#saveChunk(IChunkAccess, boolean), with the intentional difference being
+ // serializing the chunk is left to a worker thread.
+ private void asyncSave(ChunkAccess chunk) {
+ ChunkPos chunkPos = chunk.getPos();
+ CompoundTag poiData;
+ try (Timing ignored = this.level.timings.chunkUnloadPOISerialization.startTiming()) {
+ poiData = this.getVillagePlace().getData(chunk.getPos());
+ }
+
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.scheduleSave(this.level, chunkPos.x, chunkPos.z,
+ poiData, null, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY);
+
+ if (!chunk.isUnsaved()) {
+ return;
+ }
+
+ ChunkStatus chunkstatus = chunk.getStatus();
+
+ // Copied from PlayerChunkMap#saveChunk(IChunkAccess, boolean)
+ if (chunkstatus.getChunkType() != ChunkStatus.ChunkType.LEVELCHUNK) {
+ try (co.aikar.timings.Timing ignored1 = this.level.timings.chunkSaveOverwriteCheck.startTiming()) { // Paper
+ // Paper start - Optimize save by using status cache
+ try {
+ ChunkStatus statusOnDisk = this.getChunkStatusOnDisk(chunkPos);
+ if (statusOnDisk != null && statusOnDisk.getChunkType() == ChunkStatus.ChunkType.LEVELCHUNK) {
+ // Paper end
+ return;
+ }
+
+ if (chunkstatus == ChunkStatus.EMPTY && chunk.getAllStarts().values().stream().noneMatch(StructureStart::e)) {
+ return;
+ }
+ } catch (IOException ex) {
+ ex.printStackTrace();
+ return;
+ }
+ }
+ }
+
+ ChunkSerializer.AsyncSaveData asyncSaveData;
+ try (Timing ignored = this.level.timings.chunkUnloadPrepareSave.startTiming()) {
+ asyncSaveData = ChunkSerializer.getAsyncSaveData(this.level, chunk);
+ }
+
+ this.level.asyncChunkTaskManager.scheduleChunkSave(chunkPos.x, chunkPos.z, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY,
+ asyncSaveData, chunk);
+
+ chunk.setLastSaveTime(this.level.getGameTime());
+ chunk.setUnsaved(false);
+ }
+ // Paper end
+
private void scheduleUnload(long pos, ChunkHolder playerchunk) {
CompletableFuture<ChunkAccess> completablefuture = playerchunk.getChunkToSave();
Consumer<ChunkAccess> consumer = (ichunkaccess) -> { // CraftBukkit - decompile error
@@ -541,7 +603,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
((LevelChunk) ichunkaccess).setLoaded(false);
}

- this.save(ichunkaccess);
+ //this.saveChunk(ichunkaccess);// Paper - delay
if (this.entitiesInLevel.remove(pos) && ichunkaccess instanceof LevelChunk) {
LevelChunk chunk = (LevelChunk) ichunkaccess;

@@ -549,6 +611,13 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}
this.autoSaveQueue.remove(playerchunk); // Paper

+ try {
+ this.asyncSave(ichunkaccess); // Paper - async chunk saving
+ } catch (Throwable ex) {
+ LOGGER.fatal("Failed to prepare async save, attempting synchronous save", ex);
+ this.save(ichunkaccess);
+ }
+
this.lightEngine.updateChunkStatus(ichunkaccess.getPos());
this.lightEngine.tryScheduleUpdate();
this.progressListener.onStatusChange(ichunkaccess.getPos(), (ChunkStatus) null);
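
The asyncSave/scheduleUnload pair above follows a snapshot-then-serialize pattern: the main thread captures everything the serializer will need (getAsyncSaveData) before the chunk is handed off, so only serialization and disk IO leave the main thread, with a synchronous save as the fallback on error. A minimal sketch of the same pattern; Snapshot, serialize, and writeToDisk are illustrative stand-ins, not Paper's actual types:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    final class SnapshotSaveSketch {
        // stand-in for ChunkSerializer.AsyncSaveData: an immutable copy taken on the main thread
        static final class Snapshot { final byte[] data; Snapshot(byte[] d) { this.data = d; } }

        private final ExecutorService worker = Executors.newSingleThreadExecutor();

        // called on the main thread; the worker only ever sees the immutable snapshot,
        // never the live chunk, which is what makes the unload save safe off-main
        CompletableFuture<Void> saveAsync(Snapshot snapshot) {
            return CompletableFuture.runAsync(() -> writeToDisk(serialize(snapshot)), worker);
        }

        private byte[] serialize(Snapshot s) { return s.data.clone(); }
        private void writeToDisk(byte[] bytes) { /* region-file write elided */ }
    }
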
@@ -619,19 +688,23 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}

private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> scheduleChunkLoad(ChunkPos pos) {
- return CompletableFuture.supplyAsync(() -> {
+ // Paper start - Async chunk io
+ final java.util.function.BiFunction<ChunkSerializer.InProgressChunkHolder, Throwable, Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> syncLoadComplete = (chunkHolder, ioThrowable) -> {
try (Timing ignored = this.level.timings.chunkLoad.startTimingIfSync()) { // Paper
this.level.getProfiler().incrementCounter("chunkLoad");
- CompoundTag nbttagcompound; // Paper
- try (Timing ignored2 = this.level.timings.chunkIO.startTimingIfSync()) { // Paper start - timings
- nbttagcompound = this.readChunk(pos);
- } // Paper end
+ // Paper start
+ if (ioThrowable != null) {
+ com.destroystokyo.paper.util.SneakyThrow.sneaky(ioThrowable);
+ }
+
+ this.getVillagePlace().loadInData(pos, chunkHolder.poiData);
+ chunkHolder.tasks.forEach(Runnable::run);
+ // Paper end

- if (nbttagcompound != null) {try (Timing ignored2 = this.level.timings.chunkLoadLevelTimer.startTimingIfSync()) { // Paper start - timings
- boolean flag = nbttagcompound.contains("Level", 10) && nbttagcompound.getCompound("Level").contains("Status", 8);
+ if (chunkHolder.protoChunk != null) {try (Timing ignored2 = this.level.timings.chunkLoadLevelTimer.startTimingIfSync()) { // Paper start - timings // Paper - chunk is created async

- if (flag) {
- ProtoChunk protochunk = ChunkSerializer.read(this.level, this.structureManager, this.poiManager, pos, nbttagcompound);
+ if (true) {
+ ProtoChunk protochunk = chunkHolder.protoChunk;

protochunk.setLastSaveTime(this.level.getGameTime());
this.markPosition(pos, protochunk.getStatus().getChunkType());
@@ -655,7 +728,32 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider

this.markPositionReplaceable(pos);
return Either.left(new ProtoChunk(pos, UpgradeData.EMPTY, this.level)); // Paper - Anti-Xray - Add parameter
- }, this.mainThreadExecutor);
+ // Paper start - Async chunk io
+ };
+ CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> ret = new CompletableFuture<>();
+
+ Consumer<ChunkSerializer.InProgressChunkHolder> chunkHolderConsumer = (ChunkSerializer.InProgressChunkHolder holder) -> {
+ // Go into the chunk load queue and not server task queue so we can be popped out even faster.
+ com.destroystokyo.paper.io.chunk.ChunkTaskManager.queueChunkWaitTask(() -> {
+ try {
+ ret.complete(syncLoadComplete.apply(holder, null));
+ } catch (Exception e) {
+ ret.completeExceptionally(e);
+ }
+ });
+ };
+
+ CompletableFuture<CompoundTag> chunkSaveFuture = this.level.asyncChunkTaskManager.getChunkSaveFuture(pos.x, pos.z);
+ if (chunkSaveFuture != null) {
+ this.level.asyncChunkTaskManager.scheduleChunkLoad(pos.x, pos.z,
+ com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGH_PRIORITY, chunkHolderConsumer, false, chunkSaveFuture);
+ this.level.asyncChunkTaskManager.raisePriority(pos.x, pos.z, com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGH_PRIORITY);
+ } else {
+ this.level.asyncChunkTaskManager.scheduleChunkLoad(pos.x, pos.z,
+ com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY, chunkHolderConsumer, false);
+ }
+ return ret;
+ // Paper end
}
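
scheduleChunkLoad now completes in two stages: the worker produces an InProgressChunkHolder off-main, and a task queued back through the chunk-wait queue runs the holder's deferred main-thread work before completing the future. A hedged sketch of that hand-off; Holder and queueMainThread are illustrative names, not Paper's API:

    import java.util.ArrayDeque;
    import java.util.concurrent.CompletableFuture;
    import java.util.function.Consumer;

    final class TwoStageLoadSketch {
        // stand-in for ChunkSerializer.InProgressChunkHolder
        static final class Holder { final ArrayDeque<Runnable> tasks = new ArrayDeque<>(); }

        static CompletableFuture<Holder> finishOnMain(Holder holder, Consumer<Runnable> queueMainThread) {
            CompletableFuture<Holder> ret = new CompletableFuture<>();
            queueMainThread.accept(() -> {
                try {
                    holder.tasks.forEach(Runnable::run); // light/POI hooks that must not run on the worker
                    ret.complete(holder);
                } catch (Exception e) {
                    ret.completeExceptionally(e);
                }
            });
            return ret;
        }
    }
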

private void markPositionReplaceable(ChunkPos chunkcoordintpair) {
@@ -890,6 +988,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}

public boolean save(ChunkAccess chunk) {
+ try (co.aikar.timings.Timing ignored = this.level.timings.chunkSave.startTiming()) { // Paper
this.poiManager.flush(chunk.getPos());
if (!chunk.isUnsaved()) {
return false;
@@ -902,6 +1001,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
ChunkStatus chunkstatus = chunk.getStatus();

if (chunkstatus.getChunkType() != ChunkStatus.ChunkType.LEVELCHUNK) {
+ try (co.aikar.timings.Timing ignored1 = this.level.timings.chunkSaveOverwriteCheck.startTiming()) { // Paper
if (this.isExistingChunkFull(chunkcoordintpair)) {
return false;
}
@@ -909,12 +1009,20 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
if (chunkstatus == ChunkStatus.EMPTY && chunk.getAllStarts().values().stream().noneMatch(StructureStart::e)) {
return false;
}
+ } // Paper
}

this.level.getProfiler().incrementCounter("chunkSave");
- CompoundTag nbttagcompound = ChunkSerializer.write(this.level, chunk);
+ CompoundTag nbttagcompound;
+ try (co.aikar.timings.Timing ignored1 = this.level.timings.chunkSaveDataSerialization.startTiming()) { // Paper
+ nbttagcompound = ChunkSerializer.write(this.level, chunk);
+ } // Paper

- this.write(chunkcoordintpair, nbttagcompound);
+
+ // Paper start - async chunk io
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.scheduleSave(this.level, chunkcoordintpair.x, chunkcoordintpair.z,
+ null, nbttagcompound, com.destroystokyo.paper.io.PrioritizedTaskQueue.NORMAL_PRIORITY);
+ // Paper end - async chunk io
this.markPosition(chunkcoordintpair, chunkstatus.getChunkType());
return true;
} catch (Exception exception) {
@@ -923,6 +1031,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
return false;
}
}
+ } // Paper
}

private boolean isExistingChunkFull(ChunkPos chunkcoordintpair) {
@@ -1052,6 +1161,35 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}
}

+ // Paper start - Asynchronous chunk io
+ @Nullable
+ @Override
+ public CompoundTag read(ChunkPos chunkcoordintpair) throws IOException {
+ if (Thread.currentThread() != com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE) {
+ CompoundTag ret = com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE
+ .loadChunkDataAsyncFuture(this.level, chunkcoordintpair.x, chunkcoordintpair.z, com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread(),
+ false, true, true).join().chunkData;
+
+ if (ret == com.destroystokyo.paper.io.PaperFileIOThread.FAILURE_VALUE) {
+ throw new IOException("See logs for further detail");
+ }
+ return ret;
+ }
+ return super.read(chunkcoordintpair);
+ }
+
+ @Override
+ public void write(ChunkPos chunkcoordintpair, CompoundTag nbttagcompound) throws IOException {
+ if (Thread.currentThread() != com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE) {
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.scheduleSave(
+ this.level, chunkcoordintpair.x, chunkcoordintpair.z, null, nbttagcompound,
+ com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread());
+ return;
+ }
+ super.write(chunkcoordintpair, nbttagcompound);
+ }
+ // Paper end
+
@Nullable
public CompoundTag readChunk(ChunkPos pos) throws IOException { // Paper - private -> public
CompoundTag nbttagcompound = this.read(pos);
@@ -1073,33 +1211,55 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider

// Paper start - chunk status cache "api"
public ChunkStatus getChunkStatusOnDiskIfCached(ChunkPos chunkPos) {
- RegionFile regionFile = this.getIOWorker().getRegionFileCache().getRegionFileIfLoaded(chunkPos);
+ synchronized (this) { // Paper
+ RegionFile regionFile = this.regionFileCache.getRegionFileIfLoaded(chunkPos);

return regionFile == null ? null : regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);
+ } // Paper
}

public ChunkStatus getChunkStatusOnDisk(ChunkPos chunkPos) throws IOException {
- RegionFile regionFile = this.getIOWorker().getRegionFileCache().getFile(chunkPos, true);
+ // Paper start - async chunk save for unload
+ ChunkAccess unloadingChunk = this.level.asyncChunkTaskManager.getChunkInSaveProgress(chunkPos.x, chunkPos.z);
+ if (unloadingChunk != null) {
+ return unloadingChunk.getStatus();
+ }
+ // Paper end
+ // Paper start - async io
+ CompoundTag inProgressWrite = com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE
+ .getPendingWrite(this.level, chunkPos.x, chunkPos.z, false);

- if (regionFile == null || !regionFile.chunkExists(chunkPos)) {
- return null;
+ if (inProgressWrite != null) {
+ return ChunkSerializer.getStatus(inProgressWrite);
}
+ // Paper end
+ synchronized (this) { // Paper - async io
+ RegionFile regionFile = this.regionFileCache.getFile(chunkPos, true);
+
+ if (regionFile == null || !regionFile.hasChunk(chunkPos)) {
+ return null;
+ }

- ChunkStatus status = regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);
+ ChunkStatus status = regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);

- if (status != null) {
- return status;
+ if (status != null) {
+ return status;
+ }
+ // Paper start - async io
}

- this.readChunk(chunkPos);
+ CompoundTag compound = this.readChunk(chunkPos);

- return regionFile.getStatusIfCached(chunkPos.x, chunkPos.z);
+ return ChunkSerializer.getStatus(compound);
+ // Paper end
}

public void updateChunkStatusOnDisk(ChunkPos chunkPos, @Nullable CompoundTag compound) throws IOException {
- RegionFile regionFile = this.getIOWorker().getRegionFileCache().getFile(chunkPos, false);
+ synchronized (this) {
+ RegionFile regionFile = this.regionFileCache.getFile(chunkPos, false);

- regionFile.setStatus(chunkPos.x, chunkPos.z, ChunkSerializer.getStatus(compound));
+ regionFile.setStatus(chunkPos.x, chunkPos.z, ChunkSerializer.getStatus(compound));
+ }
}

public ChunkAccess getUnloadingChunk(int chunkX, int chunkZ) {
@@ -1108,6 +1268,39 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider
}
// Paper end

+
+ // Paper start - async io
+ // this function will not load chunk data off disk to check for status
+ // ret null for unknown, empty for empty status on disk or absent from disk
+ public ChunkStatus getStatusOnDiskNoLoad(int x, int z) {
+ // Paper start - async chunk save for unload
+ ChunkAccess unloadingChunk = this.level.asyncChunkTaskManager.getChunkInSaveProgress(x, z);
+ if (unloadingChunk != null) {
+ return unloadingChunk.getStatus();
+ }
+ // Paper end
+ // Paper start - async io
+ CompoundTag inProgressWrite = com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE
+ .getPendingWrite(this.level, x, z, false);
+
+ if (inProgressWrite != null) {
+ return ChunkSerializer.getStatus(inProgressWrite);
+ }
+ // Paper end
+ // variant of PlayerChunkMap#getChunkStatusOnDisk that does not load data off disk, but loads the region file
+ ChunkPos chunkPos = new ChunkPos(x, z);
+ synchronized (level.getChunkSource().chunkMap) {
+ RegionFile file;
+ try {
+ file = level.getChunkSource().chunkMap.regionFileCache.getFile(chunkPos, false);
+ } catch (IOException ex) {
+ throw new RuntimeException(ex);
+ }
+
+ return !file.hasChunk(chunkPos) ? ChunkStatus.EMPTY : file.getStatusIfCached(x, z);
+ }
+ }
+
boolean noPlayersCloseForSpawning(ChunkPos chunkcoordintpair) {
// Spigot start
return isOutsideOfRange(chunkcoordintpair, false);
@@ -1454,6 +1647,7 @@ public class ChunkMap extends ChunkStorage implements ChunkHolder.PlayerProvider

}

+ public PoiManager getVillagePlace() { return this.getPoiManager(); } // Paper - OBFHELPER
protected PoiManager getPoiManager() {
return this.poiManager;
}
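
The read/write overrides above make region IO appear synchronous to every caller except the IO thread itself: off-thread reads schedule a prioritized load and join its future, writes are queued and return immediately, and the IO thread falls through to the direct region-file path. A minimal sketch of that routing; IoThread, loadAsync, and readDirect are assumed names, not Paper's API:

    import java.util.concurrent.CompletableFuture;

    final class IoRoutingSketch {
        interface IoThread {
            Thread asThread();
            CompletableFuture<byte[]> loadAsync(long chunkKey, int priority);
        }

        static byte[] read(IoThread io, long chunkKey) {
            if (Thread.currentThread() != io.asThread()) {
                // off the IO thread: schedule at the caller's priority and wait for the answer
                return io.loadAsync(chunkKey, 0).join();
            }
            // on the IO thread itself: go straight to the region file
            return readDirect(chunkKey);
        }

        static byte[] readDirect(long chunkKey) { return new byte[0]; } // placeholder for the real read
    }
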
|
||
|
diff --git a/src/main/java/net/minecraft/server/level/ServerChunkCache.java b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
|
||
|
index c1aa40c01a80a8870478193b8cd7354b0d71045c..120b604d91643248ab375969f95f62a74cbf6be7 100644
|
||
|
--- a/src/main/java/net/minecraft/server/level/ServerChunkCache.java
|
||
|
+++ b/src/main/java/net/minecraft/server/level/ServerChunkCache.java
|
||
|
@@ -37,6 +37,7 @@ import net.minecraft.world.level.chunk.ChunkAccess;
|
||
|
import net.minecraft.world.level.chunk.ChunkGenerator;
|
||
|
import net.minecraft.world.level.chunk.ChunkSource;
|
||
|
import net.minecraft.world.level.chunk.ChunkStatus;
|
||
|
+import net.minecraft.world.level.chunk.ImposterProtoChunk;
|
||
|
import net.minecraft.world.level.chunk.LevelChunk;
|
||
|
import net.minecraft.world.level.levelgen.structure.templatesystem.StructureManager;
|
||
|
import net.minecraft.world.level.storage.DimensionDataStorage;
|
||
|
@@ -332,11 +333,138 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
return playerChunk.getAvailableChunkNow();
|
||
|
|
||
|
}
|
||
|
+
|
||
|
+ private long asyncLoadSeqCounter;
|
||
|
+
|
||
|
+ public CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getChunkAtAsynchronously(int x, int z, boolean gen, boolean isUrgent) {
|
||
|
+ if (Thread.currentThread() != this.mainThread) {
|
||
|
+ CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> future = new CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>>();
|
||
|
+ this.mainThreadProcessor.execute(() -> {
|
||
|
+ this.getChunkAtAsynchronously(x, z, gen, isUrgent).whenComplete((chunk, ex) -> {
|
||
|
+ if (ex != null) {
|
||
|
+ future.completeExceptionally(ex);
|
||
|
+ } else {
|
||
|
+ future.complete(chunk);
|
||
|
+ }
|
||
|
+ });
|
||
|
+ });
|
||
|
+ return future;
|
||
|
+ }
|
||
|
+
|
||
|
+ if (!com.destroystokyo.paper.PaperConfig.asyncChunks) {
|
||
|
+ level.getWorld().loadChunk(x, z, gen);
|
||
|
+ LevelChunk chunk = getChunkAtIfLoadedMainThread(x, z);
|
||
|
+ return CompletableFuture.completedFuture(chunk != null ? Either.left(chunk) : ChunkHolder.UNLOADED_CHUNK);
|
||
|
+ }
|
||
|
+
|
||
|
+ long k = ChunkPos.asLong(x, z);
|
||
|
+ ChunkPos chunkPos = new ChunkPos(x, z);
|
||
|
+
|
||
|
+ ChunkAccess ichunkaccess;
|
||
|
+
|
||
|
+ // try cache
|
||
|
+ for (int l = 0; l < 4; ++l) {
|
||
|
+ if (k == this.lastChunkPos[l] && ChunkStatus.FULL == this.lastChunkStatus[l]) {
|
||
|
+ ichunkaccess = this.lastChunk[l];
|
||
|
+ if (ichunkaccess != null) { // CraftBukkit - the chunk can become accessible in the meantime TODO for non-null chunks it might also make sense to check that the chunk's state hasn't changed in the meantime
|
||
|
+
|
||
|
+ // move to first in cache
|
||
|
+
|
||
|
+ for (int i1 = 3; i1 > 0; --i1) {
|
||
|
+ this.lastChunkPos[i1] = this.lastChunkPos[i1 - 1];
|
||
|
+ this.lastChunkStatus[i1] = this.lastChunkStatus[i1 - 1];
|
||
|
+ this.lastChunk[i1] = this.lastChunk[i1 - 1];
|
||
|
+ }
|
||
|
+
|
||
|
+ this.lastChunkPos[0] = k;
|
||
|
+ this.lastChunkStatus[0] = ChunkStatus.FULL;
|
||
|
+ this.lastChunk[0] = ichunkaccess;
|
||
|
+
|
||
|
+ return CompletableFuture.completedFuture(Either.left(ichunkaccess));
|
||
|
+ }
|
||
|
+ }
|
||
|
+ }
|
||
|
+
|
||
|
+ if (gen) {
|
||
|
+ return this.bringToFullStatusAsync(x, z, chunkPos, isUrgent);
|
||
|
+ }
|
||
|
+
|
||
|
+ ChunkAccess current = this.getChunkAtImmediately(x, z); // we want to bypass ticket restrictions
|
||
|
+ if (current != null) {
|
||
|
+ if (!(current instanceof ImposterProtoChunk) && !(current instanceof LevelChunk)) {
|
||
|
+ return CompletableFuture.completedFuture(ChunkHolder.UNLOADED_CHUNK);
|
||
|
+ }
|
||
|
+ // we know the chunk is at full status here (either in read-only mode or the real thing)
|
||
|
+ return this.bringToFullStatusAsync(x, z, chunkPos, isUrgent);
|
||
|
+ }
|
||
|
+
|
||
|
+ ChunkStatus status = level.getChunkSource().chunkMap.getStatusOnDiskNoLoad(x, z);
|
||
|
+
|
||
|
+ if (status != null && status != ChunkStatus.FULL) {
|
||
|
+ // does not exist on disk
|
||
|
+ return CompletableFuture.completedFuture(ChunkHolder.UNLOADED_CHUNK);
|
||
|
+ }
|
||
|
+
|
||
|
+ if (status == ChunkStatus.FULL) {
|
||
|
+ return this.bringToFullStatusAsync(x, z, chunkPos, isUrgent);
|
||
|
+ }
|
||
|
+
|
||
|
+ // status is null here
|
||
|
+
|
||
|
+ // here we don't know what status it is and we're not supposed to generate
|
||
|
+ // so we asynchronously load empty status
|
||
|
+ return this.bringToStatusAsync(x, z, chunkPos, ChunkStatus.EMPTY, isUrgent).thenCompose((either) -> {
|
||
|
+ ChunkAccess chunk = either.left().orElse(null);
|
||
|
+ if (!(chunk instanceof ImposterProtoChunk) && !(chunk instanceof LevelChunk)) {
|
||
|
+ // the chunk on disk was not a full status chunk
|
||
|
+ return CompletableFuture.completedFuture(ChunkHolder.UNLOADED_CHUNK);
|
||
|
+ }
|
||
|
+ ; // bring to full status if required
|
||
|
+ return this.bringToFullStatusAsync(x, z, chunkPos, isUrgent);
|
||
|
+ });
|
||
|
+ }
|
||
|
+
|
||
|
+ private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> bringToFullStatusAsync(int x, int z, ChunkPos chunkPos, boolean isUrgent) {
|
||
|
+ return this.bringToStatusAsync(x, z, chunkPos, ChunkStatus.FULL, isUrgent);
|
||
|
+ }
|
||
|
+
|
||
|
+ private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> bringToStatusAsync(int x, int z, ChunkPos chunkPos, ChunkStatus status, boolean isUrgent) {
|
||
|
+ CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> future = this.getChunkFutureMainThread(x, z, status, true, isUrgent);
|
||
|
+ Long identifier = Long.valueOf(this.asyncLoadSeqCounter++);
|
||
|
+ int ticketLevel = MCUtil.getTicketLevelFor(status);
|
||
|
+ this.addTicketAtLevel(TicketType.ASYNC_LOAD, chunkPos, ticketLevel, identifier);
|
||
|
+
|
||
|
+ return future.thenComposeAsync((Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure> either) -> {
|
||
|
+ // either left -> success
|
||
|
+ // either right -> failure
|
||
|
+
|
||
|
+ this.removeTicketAtLevel(TicketType.ASYNC_LOAD, chunkPos, ticketLevel, identifier);
|
||
|
+ this.addTicketAtLevel(TicketType.UNKNOWN, chunkPos, ticketLevel, chunkPos); // allow unloading
|
||
|
+
|
||
|
+ Optional<ChunkHolder.ChunkLoadingFailure> failure = either.right();
|
||
|
+
|
||
|
+ if (failure.isPresent()) {
|
||
|
+ // failure
|
||
|
+ throw new IllegalStateException("Chunk failed to load: " + failure.get().toString());
|
||
|
+ }
|
||
|
+
|
||
|
+ return CompletableFuture.completedFuture(either);
|
||
|
+ }, this.mainThreadProcessor);
|
||
|
+ }
|
||
|
+
|
||
|
+ public <T> void addTicketAtLevel(TicketType<T> ticketType, ChunkPos chunkPos, int ticketLevel, T identifier) {
|
||
|
+ this.distanceManager.addTicketAtLevel(ticketType, chunkPos, ticketLevel, identifier);
|
||
|
+ }
|
||
|
+
|
||
|
+ public <T> void removeTicketAtLevel(TicketType<T> ticketType, ChunkPos chunkPos, int ticketLevel, T identifier) {
|
||
|
+ this.distanceManager.removeTicketAtLevel(ticketType, chunkPos, ticketLevel, identifier);
|
||
|
+ }
|
||
|
// Paper end
|
||
|
|
||
|
@Nullable
|
||
|
@Override
|
||
|
public ChunkAccess getChunk(int x, int z, ChunkStatus leastStatus, boolean create) {
|
||
|
+ final int x1 = x; final int z1 = z; // Paper - conflict on variable change
|
||
|
if (Thread.currentThread() != this.mainThread) {
|
||
|
return (ChunkAccess) CompletableFuture.supplyAsync(() -> {
|
||
|
return this.getChunk(x, z, leastStatus, create);
|
||
|
@@ -359,11 +487,16 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
}
|
||
|
|
||
|
gameprofilerfiller.incrementCounter("getChunkCacheMiss");
|
||
|
- CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> completablefuture = this.getChunkFutureMainThread(x, z, leastStatus, create);
|
||
|
+ CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> completablefuture = this.getChunkFutureMainThread(x, z, leastStatus, create, true); // Paper
|
||
|
|
||
|
if (!completablefuture.isDone()) { // Paper
|
||
|
+ // Paper start - async chunk io/loading
|
||
|
+ this.level.asyncChunkTaskManager.raisePriority(x1, z1, com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGHEST_PRIORITY);
|
||
|
+ com.destroystokyo.paper.io.chunk.ChunkTaskManager.pushChunkWait(this.level, x1, z1);
|
||
|
+ // Paper end
|
||
|
this.level.timings.syncChunkLoad.startTiming(); // Paper
|
||
|
this.mainThreadProcessor.managedBlock(completablefuture::isDone);
|
||
|
+ com.destroystokyo.paper.io.chunk.ChunkTaskManager.popChunkWait(); // Paper - async chunk debug
|
||
|
this.level.timings.syncChunkLoad.stopTiming(); // Paper
|
||
|
} // Paper
|
||
|
ichunkaccess = (ChunkAccess) ((Either) completablefuture.join()).map((ichunkaccess1) -> {
|
||
|
@@ -429,9 +562,14 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
}
|
||
|
|
||
|
private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getChunkFutureMainThread(int chunkX, int chunkZ, ChunkStatus leastStatus, boolean create) {
|
||
|
- ChunkPos chunkcoordintpair = new ChunkPos(chunkX, chunkZ);
|
||
|
+ // Paper start - add isUrgent - old sig left in place for dirty nms plugins
|
||
|
+ return getChunkFutureMainThread(chunkX, chunkZ, leastStatus, create, false);
|
||
|
+ }
|
||
|
+ private CompletableFuture<Either<ChunkAccess, ChunkHolder.ChunkLoadingFailure>> getChunkFutureMainThread(int i, int j, ChunkStatus chunkstatus, boolean flag, boolean isUrgent) {
|
||
|
+ // Paper end
|
||
|
+ ChunkPos chunkcoordintpair = new ChunkPos(i, j);
|
||
|
long k = chunkcoordintpair.toLong();
|
||
|
- int l = 33 + ChunkStatus.getDistance(leastStatus);
|
||
|
+ int l = 33 + ChunkStatus.getDistance(chunkstatus);
|
||
|
ChunkHolder playerchunk = this.getVisibleChunkIfPresent(k);
|
||
|
|
||
|
// CraftBukkit start - don't add new ticket for currently unloading chunk
|
||
|
@@ -441,7 +579,7 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
ChunkHolder.FullChunkStatus currentChunkState = ChunkHolder.getFullChunkStatus(playerchunk.getTicketLevel());
|
||
|
currentlyUnloading = (oldChunkState.isOrAfter(ChunkHolder.FullChunkStatus.BORDER) && !currentChunkState.isOrAfter(ChunkHolder.FullChunkStatus.BORDER));
|
||
|
}
|
||
|
- if (create && !currentlyUnloading) {
|
||
|
+ if (flag && !currentlyUnloading) {
|
||
|
// CraftBukkit end
|
||
|
this.distanceManager.addTicket(TicketType.UNKNOWN, chunkcoordintpair, l, chunkcoordintpair);
|
||
|
if (this.chunkAbsent(playerchunk, l)) {
|
||
|
@@ -457,7 +595,7 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
}
|
||
|
}
|
||
|
|
||
|
- return this.chunkAbsent(playerchunk, l) ? ChunkHolder.UNLOADED_CHUNK_FUTURE : playerchunk.getOrScheduleFuture(leastStatus, this.chunkMap);
|
||
|
+ return this.chunkAbsent(playerchunk, l) ? ChunkHolder.UNLOADED_CHUNK_FUTURE : playerchunk.getOrScheduleFuture(chunkstatus, this.chunkMap);
|
||
|
}
|
||
|
|
||
|
private boolean chunkAbsent(@Nullable ChunkHolder holder, int maxLevel) {
|
||
|
@@ -831,11 +969,12 @@ public class ServerChunkCache extends ChunkSource {
|
||
|
protected boolean pollTask() {
|
||
|
// CraftBukkit start - process pending Chunk loadCallback() and unloadCallback() after each run task
|
||
|
try {
|
||
|
+ boolean execChunkTask = com.destroystokyo.paper.io.chunk.ChunkTaskManager.pollChunkWaitQueue() || ServerChunkCache.this.level.asyncChunkTaskManager.pollNextChunkTask(); // Paper
|
||
|
if (ServerChunkCache.this.runDistanceManagerUpdates()) {
|
||
|
return true;
|
||
|
} else {
|
||
|
ServerChunkCache.this.lightEngine.tryScheduleUpdate();
|
||
|
- return super.pollTask();
|
||
|
+ return super.pollTask() || execChunkTask; // Paper
|
||
|
}
|
||
|
} finally {
|
||
|
chunkMap.callbackExecutor.run();
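
From a caller's perspective, getChunkAtAsynchronously is safe to invoke off-main (it trampolines through the main-thread processor) and resolves to an Either of chunk or loading failure. An illustrative, hypothetical call site, assuming a ServerChunkCache reference named chunkSource and coordinates x, z already in scope:

    chunkSource.getChunkAtAsynchronously(x, z, /* gen */ true, /* isUrgent */ false)
            .thenAccept(either -> either.left().ifPresent(chunk ->
                    System.out.println("async load finished: " + chunk.getPos())));
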
diff --git a/src/main/java/net/minecraft/server/level/ServerLevel.java b/src/main/java/net/minecraft/server/level/ServerLevel.java
index a811ced17721b70bb51837f47e466c2261db2466..95eff4f6165024d21e5c4268a9ae1b7a4268de4b 100644
--- a/src/main/java/net/minecraft/server/level/ServerLevel.java
+++ b/src/main/java/net/minecraft/server/level/ServerLevel.java
@@ -51,6 +51,7 @@ import net.minecraft.core.RegistryAccess;
import net.minecraft.core.SectionPos;
import net.minecraft.core.Vec3i;
import net.minecraft.core.particles.ParticleOptions;
+import net.minecraft.nbt.CompoundTag;
import net.minecraft.network.chat.Component;
import net.minecraft.network.chat.TranslatableComponent;
import net.minecraft.network.protocol.Packet;
@@ -122,6 +123,7 @@ import net.minecraft.world.level.chunk.ChunkGenerator;
import net.minecraft.world.level.chunk.ChunkStatus;
import net.minecraft.world.level.chunk.LevelChunk;
import net.minecraft.world.level.chunk.LevelChunkSection;
+import net.minecraft.world.level.chunk.storage.RegionFile;
import net.minecraft.world.level.dimension.DimensionType;
import net.minecraft.world.level.dimension.end.EndDragonFight;
import net.minecraft.world.level.levelgen.Heightmap;
@@ -202,6 +204,79 @@ public class ServerLevel extends net.minecraft.world.level.Level implements Worl
return this.chunkSource.getChunk(x, z, false);
}

+ // Paper start - Asynchronous IO
+ public final com.destroystokyo.paper.io.PaperFileIOThread.ChunkDataController poiDataController = new com.destroystokyo.paper.io.PaperFileIOThread.ChunkDataController() {
+ @Override
+ public void writeData(int x, int z, CompoundTag compound) throws java.io.IOException {
+ ServerLevel.this.getChunkSource().chunkMap.getVillagePlace().write(new ChunkPos(x, z), compound);
+ }
+
+ @Override
+ public CompoundTag readData(int x, int z) throws java.io.IOException {
+ return ServerLevel.this.getChunkSource().chunkMap.getVillagePlace().read(new ChunkPos(x, z));
+ }
+
+ @Override
+ public <T> T computeForRegionFile(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
+ synchronized (ServerLevel.this.getChunkSource().chunkMap.getVillagePlace()) {
+ RegionFile file;
+
+ try {
+ file = ServerLevel.this.getChunkSource().chunkMap.getVillagePlace().getFile(new ChunkPos(chunkX, chunkZ), false);
+ } catch (java.io.IOException ex) {
+ throw new RuntimeException(ex);
+ }
+
+ return function.apply(file);
+ }
+ }
+
+ @Override
+ public <T> T computeForRegionFileIfLoaded(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
+ synchronized (ServerLevel.this.getChunkSource().chunkMap.getVillagePlace()) {
+ RegionFile file = ServerLevel.this.getChunkSource().chunkMap.getVillagePlace().getRegionFileIfLoaded(new ChunkPos(chunkX, chunkZ));
+ return function.apply(file);
+ }
+ }
+ };
+
+ public final com.destroystokyo.paper.io.PaperFileIOThread.ChunkDataController chunkDataController = new com.destroystokyo.paper.io.PaperFileIOThread.ChunkDataController() {
+ @Override
+ public void writeData(int x, int z, CompoundTag compound) throws java.io.IOException {
+ ServerLevel.this.getChunkSource().chunkMap.write(new ChunkPos(x, z), compound);
+ }
+
+ @Override
+ public CompoundTag readData(int x, int z) throws java.io.IOException {
+ return ServerLevel.this.getChunkSource().chunkMap.read(new ChunkPos(x, z));
+ }
+
+ @Override
+ public <T> T computeForRegionFile(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
+ synchronized (ServerLevel.this.getChunkSource().chunkMap) {
+ RegionFile file;
+
+ try {
+ file = ServerLevel.this.getChunkSource().chunkMap.regionFileCache.getFile(new ChunkPos(chunkX, chunkZ), false);
+ } catch (java.io.IOException ex) {
+ throw new RuntimeException(ex);
+ }
+
+ return function.apply(file);
+ }
+ }
+
+ @Override
+ public <T> T computeForRegionFileIfLoaded(int chunkX, int chunkZ, java.util.function.Function<RegionFile, T> function) {
+ synchronized (ServerLevel.this.getChunkSource().chunkMap) {
+ RegionFile file = ServerLevel.this.getChunkSource().chunkMap.regionFileCache.getRegionFileIfLoaded(new ChunkPos(chunkX, chunkZ));
+ return function.apply(file);
+ }
+ }
+ };
+ public final com.destroystokyo.paper.io.chunk.ChunkTaskManager asyncChunkTaskManager;
+ // Paper end
+
// Add env and gen to constructor, WorldData -> WorldDataServer
public ServerLevel(MinecraftServer minecraftserver, Executor executor, LevelStorageSource.LevelStorageAccess convertable_conversionsession, ServerLevelData iworlddataserver, ResourceKey<net.minecraft.world.level.Level> resourcekey, DimensionType dimensionmanager, ChunkProgressListener worldloadlistener, ChunkGenerator chunkgenerator, boolean flag, long i, List<CustomSpawner> list, boolean flag1, org.bukkit.World.Environment env, org.bukkit.generator.ChunkGenerator gen) {
super(iworlddataserver, resourcekey, dimensionmanager, minecraftserver::getProfiler, false, flag, i, gen, env, executor); // Paper pass executor
@@ -249,6 +324,8 @@ public class ServerLevel extends net.minecraft.world.level.Level implements Worl
this.dragonFight = null;
}
this.getCraftServer().addWorld(this.getWorld()); // CraftBukkit
+
+ this.asyncChunkTaskManager = new com.destroystokyo.paper.io.chunk.ChunkTaskManager(this); // Paper
}

// CraftBukkit start
@@ -1737,7 +1814,10 @@ public class ServerLevel extends net.minecraft.world.level.Level implements Worl
}

MCUtil.getSpiralOutChunks(spawn, radiusInBlocks >> 4).forEach(pair -> {
- getChunkSource().getChunkAtMainThread(pair.x, pair.z);
+ getChunkSource().getChunkAtAsynchronously(pair.x, pair.z, true, false).exceptionally((ex) -> {
+ ex.printStackTrace();
+ return null;
+ });
});
}
public void removeTicketsForSpawn(int radiusInBlocks, BlockPos spawn) {
diff --git a/src/main/java/net/minecraft/server/level/TicketType.java b/src/main/java/net/minecraft/server/level/TicketType.java
index cf3ced15c9a87e7a4dbccba17c57a7b32b77566c..d09e4857b6c40410d134fa81b48e95919a7373bd 100644
--- a/src/main/java/net/minecraft/server/level/TicketType.java
+++ b/src/main/java/net/minecraft/server/level/TicketType.java
@@ -26,6 +26,7 @@ public class TicketType<T> {
public static final TicketType<Unit> PLUGIN = create("plugin", (a, b) -> 0); // CraftBukkit
public static final TicketType<org.bukkit.plugin.Plugin> PLUGIN_TICKET = create("plugin_ticket", (plugin1, plugin2) -> plugin1.getClass().getName().compareTo(plugin2.getClass().getName())); // CraftBukkit
public static final TicketType<Long> FUTURE_AWAIT = create("future_await", Long::compareTo); // Paper
+ public static final TicketType<Long> ASYNC_LOAD = create("async_load", Long::compareTo); // Paper

public static <T> TicketType<T> create(String name, Comparator<T> comparator) {
return new TicketType<>(name, comparator, 0L);
diff --git a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
index 4f99c3d06e3b994708c699395adf481a6828e097..5dd99709d6b0ed15bbcee184fe33a28bc1c19dac 100644
--- a/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
+++ b/src/main/java/net/minecraft/server/network/ServerGamePacketListenerImpl.java
@@ -728,6 +728,13 @@ public class ServerGamePacketListenerImpl implements ServerGamePacketListener {
server.scheduleOnMain(() -> this.disconnect(new TranslatableComponent("disconnect.spam", new Object[0]))); // Paper
return;
}
+ // Paper start
+ String str = packet.getCommand(); int index = -1;
+ if (str.length() > 64 && ((index = str.indexOf(' ')) == -1 || index >= 64)) {
+ server.scheduleOnMain(() -> this.disconnect(new TranslatableComponent("disconnect.spam", new Object[0]))); // Paper
+ return;
+ }
+ // Paper end
// CraftBukkit end
StringReader stringreader = new StringReader(packet.getCommand());
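
The added guard drops the packet when the command string's first token alone exceeds 64 characters. Extracted as a standalone predicate it reads as follows (the method name is illustrative):

    // mirrors the condition in the hunk above: a first token longer than 64 chars is treated as spam
    static boolean isOversizedCommand(String str) {
        int index = str.indexOf(' ');
        return str.length() > 64 && (index == -1 || index >= 64);
    }
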

diff --git a/src/main/java/net/minecraft/util/thread/BlockableEventLoop.java b/src/main/java/net/minecraft/util/thread/BlockableEventLoop.java
index e48fcfe2e4ff151258ae1d84cc0995d2cd54e9a6..a5ce61be7d6e85ac289730d9671e66a7190529f9 100644
--- a/src/main/java/net/minecraft/util/thread/BlockableEventLoop.java
+++ b/src/main/java/net/minecraft/util/thread/BlockableEventLoop.java
@@ -91,7 +91,7 @@ public abstract class BlockableEventLoop<R extends Runnable> implements Processo

}

- protected void runAllTasks() {
+ public void runAllTasks() { // Paper - protected -> public
while (this.pollTask()) {
;
}
diff --git a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
index 33a8604fa6c6431ccc5f61e484c163e09f1625a0..d082af8cf4c0c7ca434598aa370712c62e05bb24 100644
--- a/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
+++ b/src/main/java/net/minecraft/world/entity/ai/village/poi/PoiManager.java
@@ -22,7 +22,9 @@ import java.util.stream.Stream;
import net.minecraft.Util;
import net.minecraft.core.BlockPos;
import net.minecraft.core.SectionPos;
+import net.minecraft.nbt.CompoundTag;
import net.minecraft.server.level.SectionTracker;
+import net.minecraft.server.level.ServerLevel;
import net.minecraft.util.datafix.DataFixTypes;
import net.minecraft.world.level.ChunkPos;
import net.minecraft.world.level.LevelReader;
@@ -36,8 +38,16 @@ public class PoiManager extends SectionStorage<PoiSection> {
private final PoiManager.DistanceTracker distanceTracker = new PoiManager.DistanceTracker();
private final LongSet loadedChunks = new LongOpenHashSet();

+ private final ServerLevel world; // Paper
+
public PoiManager(File directory, DataFixer datafixer, boolean flag) {
- super(directory, PoiSection::codec, PoiSection::new, datafixer, DataFixTypes.POI_CHUNK, flag);
+ // Paper start - add world parameter
+ this(directory, datafixer, flag, null);
+ }
+ public PoiManager(File file, DataFixer datafixer, boolean flag, ServerLevel world) {
+ super(file, PoiSection::codec, PoiSection::new, datafixer, DataFixTypes.POI_CHUNK, flag);
+ this.world = world;
+ // Paper end - add world parameter
}

public void add(BlockPos pos, PoiType type) {
@@ -155,7 +165,23 @@ public class PoiManager extends SectionStorage<PoiSection> {

@Override
public void tick(BooleanSupplier shouldKeepTicking) {
- super.tick(shouldKeepTicking);
+ // Paper start - async chunk io
+ if (this.world == null) {
+ super.tick(shouldKeepTicking);
+ } else {
+ //super.a(booleansupplier); // re-implement below
+ while (!((SectionStorage)this).dirty.isEmpty() && shouldKeepTicking.getAsBoolean()) {
+ ChunkPos chunkcoordintpair = SectionPos.of(((SectionStorage)this).dirty.firstLong()).chunk();
+
+ CompoundTag data;
+ try (co.aikar.timings.Timing ignored1 = this.world.timings.poiSaveDataSerialization.startTiming()) {
+ data = this.getData(chunkcoordintpair);
+ }
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.scheduleSave(this.world,
+ chunkcoordintpair.x, chunkcoordintpair.z, data, null, com.destroystokyo.paper.io.PrioritizedTaskQueue.LOW_PRIORITY);
+ }
+ }
+ // Paper end
this.distanceTracker.runAllUpdates();
}
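
The re-implemented tick drains the dirty-section set only while the tick budget holds, serializing each section on the current thread and handing the NBT to the IO queue at low priority. A compact sketch of a budget-bounded flush loop; dirty and ioWrite are stand-ins for the fields used above:

    import it.unimi.dsi.fastutil.longs.LongLinkedOpenHashSet;
    import java.util.function.BooleanSupplier;
    import java.util.function.LongConsumer;

    final class BudgetedFlushSketch {
        static void flush(LongLinkedOpenHashSet dirty, BooleanSupplier budgetRemaining, LongConsumer ioWrite) {
            while (!dirty.isEmpty() && budgetRemaining.getAsBoolean()) {
                long pos = dirty.firstLong();
                dirty.remove(pos);       // one section per iteration: serialize, then queue the write
                ioWrite.accept(pos);
            }
        }
    }
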

@@ -255,6 +281,35 @@ public class PoiManager extends SectionStorage<PoiSection> {
}
}

+ // Paper start - Asynchronous chunk io
+ @javax.annotation.Nullable
+ @Override
+ public CompoundTag read(ChunkPos chunkcoordintpair) throws java.io.IOException {
+ if (this.world != null && Thread.currentThread() != com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE) {
+ CompoundTag ret = com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE
+ .loadChunkDataAsyncFuture(this.world, chunkcoordintpair.x, chunkcoordintpair.z, com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread(),
+ true, false, true).join().poiData;
+
+ if (ret == com.destroystokyo.paper.io.PaperFileIOThread.FAILURE_VALUE) {
+ throw new java.io.IOException("See logs for further detail");
+ }
+ return ret;
+ }
+ return super.read(chunkcoordintpair);
+ }
+
+ @Override
+ public void write(ChunkPos chunkcoordintpair, CompoundTag nbttagcompound) throws java.io.IOException {
+ if (this.world != null && Thread.currentThread() != com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE) {
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.scheduleSave(
+ this.world, chunkcoordintpair.x, chunkcoordintpair.z, nbttagcompound, null,
+ com.destroystokyo.paper.io.IOUtil.getPriorityForCurrentThread());
+ return;
+ }
+ super.write(chunkcoordintpair, nbttagcompound);
+ }
+ // Paper end
+
public static enum Occupancy {

HAS_SPACE(PoiRecord::hasSpace), IS_OCCUPIED(PoiRecord::isOccupied), ANY((villageplacerecord) -> {
diff --git a/src/main/java/net/minecraft/world/level/TickNextTickData.java b/src/main/java/net/minecraft/world/level/TickNextTickData.java
index d97e266b83bb331fcd4031046a5843d29ce53164..90833389022d7412bdda8868a356b84f62a00e03 100644
--- a/src/main/java/net/minecraft/world/level/TickNextTickData.java
+++ b/src/main/java/net/minecraft/world/level/TickNextTickData.java
@@ -5,7 +5,7 @@ import net.minecraft.core.BlockPos;

public class TickNextTickData<T> {

- private static long counter;
+ private static final java.util.concurrent.atomic.AtomicLong COUNTER = new java.util.concurrent.atomic.AtomicLong(); // Paper - async chunk loading
private final T type;
public final BlockPos pos;
public final long triggerTick;
@@ -17,7 +17,7 @@ public class TickNextTickData<T> {
}

public TickNextTickData(BlockPos pos, T t, long time, TickPriority priority) {
- this.c = (long) (TickNextTickData.counter++);
+ this.c = (long) (TickNextTickData.COUNTER.getAndIncrement()); // Paper - async chunk loading
this.pos = pos.immutable();
this.type = t;
this.triggerTick = time;
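
Replacing the plain static counter with an AtomicLong matters because TickNextTickData is now also constructed from worker threads during async chunk deserialization; counter++ is a non-atomic read-modify-write, so two threads could observe the same ordinal. The thread-safe form in isolation:

    import java.util.concurrent.atomic.AtomicLong;

    final class IdSourceSketch {
        private static final AtomicLong COUNTER = new AtomicLong();
        static long next() {
            return COUNTER.getAndIncrement(); // single atomic step, no duplicate ids under contention
        }
    }
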
|
||
|
diff --git a/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java b/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java
|
||
|
index 46d5a24332c1fd3c164b760ec2a2d5bf859b1ab6..3c85b0d39a3fc5c8ec073d92f48b360c0b0be245 100644
|
||
|
--- a/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java
|
||
|
+++ b/src/main/java/net/minecraft/world/level/chunk/ChunkStatus.java
|
||
|
@@ -170,6 +170,7 @@ public class ChunkStatus {
|
||
|
return ChunkStatus.STATUS_BY_RANGE.size();
|
||
|
}
|
||
|
|
||
|
+ public static int getTicketLevelOffset(ChunkStatus status) { return ChunkStatus.getDistance(status); } // Paper - OBFHELPER
|
||
|
public static int getDistance(ChunkStatus status) {
|
||
|
return ChunkStatus.RANGE_BY_STATUS.getInt(status.getIndex());
|
||
|
}
|
||
|
@@ -185,6 +186,7 @@ public class ChunkStatus {
|
||
|
this.index = previous == null ? 0 : previous.getIndex() + 1;
|
||
|
}
|
||
|
|
||
|
+ public final int getStatusIndex() { return getIndex(); } // Paper - OBFHELPER
|
||
|
public int getIndex() {
|
||
|
return this.index;
|
||
|
}
|
||
|
@@ -193,7 +195,7 @@ public class ChunkStatus {
|
||
|
return this.name;
|
||
|
}
|
||
|
|
||
|
- public ChunkStatus getPreviousStatus() { return this.getParent(); } // Paper - OBFHELPER
|
||
|
+ public final ChunkStatus getPreviousStatus() { return this.getParent(); } // Paper - OBFHELPER
|
||
|
public ChunkStatus getParent() {
|
||
|
return this.parent;
|
||
|
}
|
||
|
@@ -206,6 +208,7 @@ public class ChunkStatus {
|
||
|
return this.loadingTask.doWork(this, world, structureManager, lightingProvider, function, chunk);
|
||
|
}
|
||
|
|
||
|
+ public final int getNeighborRadius() { return this.getRange(); } // Paper - OBFHELPER
|
||
|
public int getRange() {
|
||
|
return this.range;
|
||
|
}
|
||
|
@@ -233,6 +236,7 @@ public class ChunkStatus {
|
||
|
return this.heightmapsAfter;
|
||
|
}
|
||
|
|
||
|
+ public final boolean isAtLeastStatus(ChunkStatus chunkstatus) { return isOrAfter(chunkstatus); } // Paper - OBFHELPER
|
||
|
public boolean isOrAfter(ChunkStatus chunk) {
|
||
|
return this.getIndex() >= chunk.getIndex();
|
||
|
}
|
||
|
diff --git a/src/main/java/net/minecraft/world/level/chunk/DataLayer.java b/src/main/java/net/minecraft/world/level/chunk/DataLayer.java
|
||
|
index 808f69a10589a4a7d6c238c05f6d3e0f272681d3..2b798f4e556302f6f79d54182a309f4716a84f04 100644
|
||
|
--- a/src/main/java/net/minecraft/world/level/chunk/DataLayer.java
|
||
|
+++ b/src/main/java/net/minecraft/world/level/chunk/DataLayer.java
|
||
|
@@ -73,6 +73,7 @@ public class DataLayer {
|
||
|
return this.data;
|
||
|
}
|
||
|
|
||
|
+ public DataLayer copy() { return this.copy(); } // Paper - OBFHELPER
|
||
|
public DataLayer copy() {
|
||
|
return this.data == null ? new DataLayer() : new DataLayer((byte[]) this.data.clone());
|
||
|
}
|
||
|
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
|
||
|
index 8dbd1dc2de400ad0c6c2be49ba09dfc03216ffd2..be67dc16bf70e4517efd213ca9002f116f60b57c 100644
|
||
|
--- a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
|
||
|
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkSerializer.java
|
||
|
@@ -6,6 +6,7 @@ import it.unimi.dsi.fastutil.longs.LongOpenHashSet;
|
||
|
import it.unimi.dsi.fastutil.longs.LongSet;
|
||
|
import it.unimi.dsi.fastutil.shorts.ShortList;
|
||
|
import it.unimi.dsi.fastutil.shorts.ShortListIterator;
|
||
|
+import java.util.ArrayDeque; // Paper
|
||
|
import java.util.Arrays;
|
||
|
import java.util.BitSet;
|
||
|
import java.util.EnumSet;
|
||
|
@@ -66,34 +67,58 @@ public class ChunkSerializer {
|
||
|
|
||
|
private static final Logger LOGGER = LogManager.getLogger();
|
||
|
|
||
|
+ // Paper start
|
||
|
+ public static final class InProgressChunkHolder {
|
||
|
+
|
||
|
+ public final ProtoChunk protoChunk;
|
||
|
+ public final ArrayDeque<Runnable> tasks;
|
||
|
+
|
||
|
+ public CompoundTag poiData;
|
||
|
+
|
||
|
+ public InProgressChunkHolder(final ProtoChunk protoChunk, final ArrayDeque<Runnable> tasks) {
|
||
|
+ this.protoChunk = protoChunk;
|
||
|
+ this.tasks = tasks;
|
||
|
+ }
|
||
|
+ }
|
||
|
+
|
||
|
public static ProtoChunk read(ServerLevel world, StructureManager structureManager, PoiManager poiStorage, ChunkPos pos, CompoundTag tag) {
|
||
|
- ChunkGenerator chunkgenerator = world.getChunkSource().getGenerator();
|
||
|
+ InProgressChunkHolder holder = loadChunk(world, structureManager, poiStorage, pos, tag, true);
|
||
|
+ holder.tasks.forEach(Runnable::run);
|
||
|
+ return holder.protoChunk;
|
||
|
+ }
|
||
|
+
|
||
|
+ public static InProgressChunkHolder loadChunk(ServerLevel worldserver, StructureManager definedstructuremanager, PoiManager villageplace, ChunkPos chunkcoordintpair, CompoundTag nbttagcompound, boolean distinguish) {
|
||
|
+ ArrayDeque<Runnable> tasksToExecuteOnMain = new ArrayDeque<>();
|
||
|
+ // Paper end
|
||
|
+ ChunkGenerator chunkgenerator = worldserver.getChunkSource().getGenerator();
|
||
|
BiomeSource worldchunkmanager = chunkgenerator.getBiomeSource();
|
||
|
- CompoundTag nbttagcompound1 = tag.getCompound("Level");
|
||
|
+ CompoundTag nbttagcompound1 = nbttagcompound.getCompound("Level");
|
||
|
ChunkPos chunkcoordintpair1 = new ChunkPos(nbttagcompound1.getInt("xPos"), nbttagcompound1.getInt("zPos"));
|
||
|
|
||
|
- if (!Objects.equals(pos, chunkcoordintpair1)) {
|
||
|
- ChunkSerializer.LOGGER.error("Chunk file at {} is in the wrong location; relocating. (Expected {}, got {})", pos, pos, chunkcoordintpair1);
|
||
|
+ if (!Objects.equals(chunkcoordintpair, chunkcoordintpair1)) {
|
||
|
+ ChunkSerializer.LOGGER.error("Chunk file at {} is in the wrong location; relocating. (Expected {}, got {})", chunkcoordintpair, chunkcoordintpair, chunkcoordintpair1);
|
||
|
}
|
||
|
|
||
|
- ChunkBiomeContainer biomestorage = new ChunkBiomeContainer(world.registryAccess().registryOrThrow(Registry.BIOME_REGISTRY), pos, worldchunkmanager, nbttagcompound1.contains("Biomes", 11) ? nbttagcompound1.getIntArray("Biomes") : null);
|
||
|
+ ChunkBiomeContainer biomestorage = new ChunkBiomeContainer(worldserver.registryAccess().registryOrThrow(Registry.BIOME_REGISTRY), chunkcoordintpair, worldchunkmanager, nbttagcompound1.contains("Biomes", 11) ? nbttagcompound1.getIntArray("Biomes") : null);
|
||
|
UpgradeData chunkconverter = nbttagcompound1.contains("UpgradeData", 10) ? new UpgradeData(nbttagcompound1.getCompound("UpgradeData")) : UpgradeData.EMPTY;
|
||
|
ProtoTickList<Block> protochunkticklist = new ProtoTickList<>((block) -> {
|
||
|
return block == null || block.defaultBlockState().isAir();
|
||
|
- }, pos, nbttagcompound1.getList("ToBeTicked", 9));
|
||
|
+ }, chunkcoordintpair, nbttagcompound1.getList("ToBeTicked", 9));
|
||
|
ProtoTickList<Fluid> protochunkticklist1 = new ProtoTickList<>((fluidtype) -> {
|
||
|
return fluidtype == null || fluidtype == Fluids.EMPTY;
|
||
|
- }, pos, nbttagcompound1.getList("LiquidsToBeTicked", 9));
|
||
|
+ }, chunkcoordintpair, nbttagcompound1.getList("LiquidsToBeTicked", 9));
|
||
|
boolean flag = nbttagcompound1.getBoolean("isLightOn");
|
||
|
ListTag nbttaglist = nbttagcompound1.getList("Sections", 10);
|
||
|
boolean flag1 = true;
|
||
|
LevelChunkSection[] achunksection = new LevelChunkSection[16];
|
||
|
- boolean flag2 = world.dimensionType().hasSkyLight();
|
||
|
- ServerChunkCache chunkproviderserver = world.getChunkSource();
|
||
|
+ boolean flag2 = worldserver.dimensionType().hasSkyLight();
|
||
|
+ ServerChunkCache chunkproviderserver = worldserver.getChunkSource();
|
||
|
LevelLightEngine lightengine = chunkproviderserver.getLightEngine();
|
||
|
|
||
|
if (flag) {
|
||
|
- lightengine.retainData(pos, true);
|
||
|
+ tasksToExecuteOnMain.add(() -> { // Paper - delay this task since we're executing off-main
|
||
|
+ lightengine.retainData(chunkcoordintpair, true);
|
||
|
+ }); // Paper - delay this task since we're executing off-main
|
||
|
}
|
||
|
|
||
|
for (int i = 0; i < nbttaglist.size(); ++i) {
|
||
|
@@ -101,7 +126,7 @@ public class ChunkSerializer {
|
||
|
byte b0 = nbttagcompound2.getByte("Y");
|
||
|
|
||
|
if (nbttagcompound2.contains("Palette", 9) && nbttagcompound2.contains("BlockStates", 12)) {
|
||
|
- LevelChunkSection chunksection = new LevelChunkSection(b0 << 4, null, world, false); // Paper - Anti-Xray - Add parameters
|
||
|
+ LevelChunkSection chunksection = new LevelChunkSection(b0 << 4, null, worldserver, false); // Paper - Anti-Xray - Add parameters
|
||
|
|
||
|
chunksection.getStates().read(nbttagcompound2.getList("Palette", 10), nbttagcompound2.getLongArray("BlockStates"));
|
||
|
chunksection.recalcBlockCounts();
|
||
|
@@ -109,22 +134,34 @@ public class ChunkSerializer {
|
||
|
achunksection[b0] = chunksection;
|
||
|
}
|
||
|
|
||
|
- poiStorage.checkConsistencyWithBlocks(pos, chunksection);
|
||
|
+ tasksToExecuteOnMain.add(() -> { // Paper - delay this task since we're executing off-main
|
||
|
+ villageplace.checkConsistencyWithBlocks(chunkcoordintpair, chunksection);
|
||
|
+ }); // Paper - delay this task since we're executing off-main
|
||
|
}
|
||
|
|
||
|
if (flag) {
|
||
|
if (nbttagcompound2.contains("BlockLight", 7)) {
|
||
|
- lightengine.queueSectionData(LightLayer.BLOCK, SectionPos.of(pos, b0), new DataLayer(nbttagcompound2.getByteArray("BlockLight")), true);
|
||
|
+ // Paper start - delay this task since we're executing off-main
|
||
|
+ DataLayer blockLight = new DataLayer(nbttagcompound2.getByteArray("BlockLight"));
|
||
|
+ tasksToExecuteOnMain.add(() -> {
|
||
|
+ lightengine.queueSectionData(LightLayer.BLOCK, SectionPos.of(chunkcoordintpair, b0), blockLight, true);
|
||
|
+ });
|
||
|
+ // Paper end - delay this task since we're executing off-main
|
||
|
}
|
||
|
|
||
|
if (flag2 && nbttagcompound2.contains("SkyLight", 7)) {
|
||
|
- lightengine.queueSectionData(LightLayer.SKY, SectionPos.of(pos, b0), new DataLayer(nbttagcompound2.getByteArray("SkyLight")), true);
|
||
|
+ // Paper start - delay this task since we're executing off-main
|
||
|
+ DataLayer skyLight = new DataLayer(nbttagcompound2.getByteArray("SkyLight"));
|
||
|
+ tasksToExecuteOnMain.add(() -> {
|
||
|
+ lightengine.queueSectionData(LightLayer.SKY, SectionPos.of(chunkcoordintpair, b0), skyLight, true);
|
||
|
+ });
|
||
|
+ // Paper end - delay this task since we're executing off-main
|
||
|
}
|
||
|
}
|
||
|
}
|
||
|
|
||
|
long j = nbttagcompound1.getLong("InhabitedTime");
|
||
|
- ChunkStatus.ChunkType chunkstatus_type = getChunkTypeFromTag(tag);
|
||
|
+ ChunkStatus.ChunkType chunkstatus_type = getChunkTypeFromTag(nbttagcompound);
|
||
|
Object object;
|
||
|
|
||
|
if (chunkstatus_type == ChunkStatus.ChunkType.LEVELCHUNK) {
|
||
|
@@ -155,7 +192,7 @@ public class ChunkSerializer {
|
||
|
object2 = protochunkticklist1;
|
||
|
}
|
||
|
|
||
|
- object = new LevelChunk(world.getLevel(), pos, biomestorage, chunkconverter, (TickList) object1, (TickList) object2, j, achunksection, (chunk) -> {
|
||
|
+ object = new LevelChunk(worldserver.getLevel(), chunkcoordintpair, biomestorage, chunkconverter, (TickList) object1, (TickList) object2, j, achunksection, (chunk) -> {
postLoadChunk(nbttagcompound1, chunk);
// CraftBukkit start - load chunk persistent data from nbt
net.minecraft.nbt.Tag persistentBase = nbttagcompound1.get("ChunkBukkitValues");
@@ -165,7 +202,7 @@ public class ChunkSerializer {
// CraftBukkit end
});
} else {
- ProtoChunk protochunk = new ProtoChunk(pos, chunkconverter, achunksection, protochunkticklist, protochunkticklist1, world); // Paper - Anti-Xray - Add parameter
+ ProtoChunk protochunk = new ProtoChunk(chunkcoordintpair, chunkconverter, achunksection, protochunkticklist, protochunkticklist1, worldserver); // Paper - Anti-Xray - Add parameter

protochunk.setBiomes(biomestorage);
object = protochunk;
@@ -176,7 +213,7 @@ public class ChunkSerializer {
}

if (!flag && protochunk.getStatus().isOrAfter(ChunkStatus.LIGHT)) {
- Iterator iterator = BlockPos.betweenClosed(pos.getMinBlockX(), 0, pos.getMinBlockZ(), pos.getMaxBlockX(), 255, pos.getMaxBlockZ()).iterator();
+ Iterator iterator = BlockPos.betweenClosed(chunkcoordintpair.getMinBlockX(), 0, chunkcoordintpair.getMinBlockZ(), chunkcoordintpair.getMaxBlockX(), 255, chunkcoordintpair.getMaxBlockZ()).iterator();

while (iterator.hasNext()) {
BlockPos blockposition = (BlockPos) iterator.next();
@@ -207,8 +244,8 @@ public class ChunkSerializer {
Heightmap.primeHeightmaps((ChunkAccess) object, enumset);
CompoundTag nbttagcompound4 = nbttagcompound1.getCompound("Structures");

- ((ChunkAccess) object).setAllStarts(unpackStructureStart(structureManager, nbttagcompound4, world.getSeed()));
- ((ChunkAccess) object).setAllReferences(unpackStructureReferences(pos, nbttagcompound4));
+ ((ChunkAccess) object).setAllStarts(unpackStructureStart(definedstructuremanager, nbttagcompound4, worldserver.getSeed()));
+ ((ChunkAccess) object).setAllReferences(unpackStructureReferences(chunkcoordintpair, nbttagcompound4));
if (nbttagcompound1.getBoolean("shouldSave")) {
((ChunkAccess) object).setUnsaved(true);
}
@@ -227,7 +264,7 @@ public class ChunkSerializer {
}

if (chunkstatus_type == ChunkStatus.ChunkType.LEVELCHUNK) {
- return new ImposterProtoChunk((LevelChunk) object);
+ return new InProgressChunkHolder(new ImposterProtoChunk((LevelChunk) object), tasksToExecuteOnMain); // Paper - Async chunk loading
} else {
ProtoChunk protochunk1 = (ProtoChunk) object;

@@ -266,12 +303,84 @@ public class ChunkSerializer {
protochunk1.setCarvingMask(worldgenstage_features, BitSet.valueOf(nbttagcompound5.getByteArray(s1)));
}

- return protochunk1;
+ return new InProgressChunkHolder(protochunk1, tasksToExecuteOnMain); // Paper - Async chunk loading
}
}

+ // Paper start - async chunk save for unload
+ public static final class AsyncSaveData {
+ public final DataLayer[] blockLight; // null or size of 17 (for indices -1 through 15)
+ public final DataLayer[] skyLight;
+
+ public final ListTag blockTickList; // non-null if we had to go to the server's tick list
+ public final ListTag fluidTickList; // non-null if we had to go to the server's tick list
+
+ public final long worldTime;
+
+ public AsyncSaveData(DataLayer[] blockLight, DataLayer[] skyLight, ListTag blockTickList, ListTag fluidTickList,
+ long worldTime) {
+ this.blockLight = blockLight;
+ this.skyLight = skyLight;
+ this.blockTickList = blockTickList;
+ this.fluidTickList = fluidTickList;
+ this.worldTime = worldTime;
+ }
+ }
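+ // The class above is a main-thread snapshot of the state that saveChunk(...) would otherwise
+ // have to read live (light arrays, server tick lists, world time), so the NBT serialization
+ // for an unloading chunk can run on a worker thread without racing the main thread.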
+
+ // must be called sync
+ public static AsyncSaveData getAsyncSaveData(ServerLevel world, ChunkAccess chunk) {
+ org.spigotmc.AsyncCatcher.catchOp("preparation of chunk data for async save");
+ ChunkPos chunkPos = chunk.getPos();
+
+ ThreadedLevelLightEngine lightenginethreaded = world.getChunkSource().getLightEngine();
+
+ DataLayer[] blockLight = new DataLayer[17 - (-1)];
+ DataLayer[] skyLight = new DataLayer[17 - (-1)];
+
+ for (int i = -1; i < 17; ++i) {
+ DataLayer blockArray = lightenginethreaded.getLayerListener(LightLayer.BLOCK).getDataLayerData(SectionPos.of(chunkPos, i));
+ DataLayer skyArray = lightenginethreaded.getLayerListener(LightLayer.SKY).getDataLayerData(SectionPos.of(chunkPos, i));
+
+ // copy data for safety
+ if (blockArray != null) {
+ blockArray = blockArray.copy();
+ }
+ if (skyArray != null) {
+ skyArray = skyArray.copy();
+ }
+
+ // apply offset of 1 for -1 starting index
+ blockLight[i + 1] = blockArray;
+ skyLight[i + 1] = skyArray;
+ }
+
+ TickList<Block> blockTickList = chunk.getBlockTicks();
+
+ ListTag blockTickListSerialized;
+ if (blockTickList instanceof ProtoTickList || blockTickList instanceof ChunkTickList) {
+ blockTickListSerialized = null;
+ } else {
+ blockTickListSerialized = world.getBlockTicks().save(chunkPos);
+ }
+
+ TickList<Fluid> fluidTickList = chunk.getLiquidTicks();
+
+ ListTag fluidTickListSerialized;
+ if (fluidTickList instanceof ProtoTickList || fluidTickList instanceof ChunkTickList) {
+ fluidTickListSerialized = null;
+ } else {
+ fluidTickListSerialized = world.getLiquidTicks().save(chunkPos);
+ }
+
+ return new AsyncSaveData(blockLight, skyLight, blockTickListSerialized, fluidTickListSerialized, world.getGameTime());
+ }
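+ // A minimal usage sketch (hypothetical scheduling code - the real callers live in the chunk
+ // unload path): capture the snapshot synchronously, then serialize off-thread:
+ //   AsyncSaveData asyncSaveData = ChunkSerializer.getAsyncSaveData(world, chunk); // main thread
+ //   workerExecutor.execute(() -> {
+ //       CompoundTag data = ChunkSerializer.saveChunk(world, chunk, asyncSaveData);
+ //       // hand "data" to the file IO thread, which performs the actual region file write
+ //   });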
+
public static CompoundTag write(ServerLevel world, ChunkAccess chunk) {
- ChunkPos chunkcoordintpair = chunk.getPos();
+ return saveChunk(world, chunk, null);
+ }
+ public static CompoundTag saveChunk(ServerLevel worldserver, ChunkAccess ichunkaccess, AsyncSaveData asyncsavedata) {
+ // Paper end
+ ChunkPos chunkcoordintpair = ichunkaccess.getPos();
CompoundTag nbttagcompound = new CompoundTag();
CompoundTag nbttagcompound1 = new CompoundTag();

@@ -279,30 +388,38 @@ public class ChunkSerializer {
nbttagcompound.put("Level", nbttagcompound1);
nbttagcompound1.putInt("xPos", chunkcoordintpair.x);
nbttagcompound1.putInt("zPos", chunkcoordintpair.z);
- nbttagcompound1.putLong("LastUpdate", world.getGameTime());
- nbttagcompound1.putLong("InhabitedTime", chunk.getInhabitedTime());
- nbttagcompound1.putString("Status", chunk.getStatus().getName());
- UpgradeData chunkconverter = chunk.getUpgradeData();
+ nbttagcompound1.putLong("LastUpdate", asyncsavedata != null ? asyncsavedata.worldTime : worldserver.getGameTime()); // Paper - async chunk unloading
+ nbttagcompound1.putLong("InhabitedTime", ichunkaccess.getInhabitedTime());
+ nbttagcompound1.putString("Status", ichunkaccess.getStatus().getName());
+ UpgradeData chunkconverter = ichunkaccess.getUpgradeData();

if (!chunkconverter.isEmpty()) {
nbttagcompound1.put("UpgradeData", chunkconverter.write());
}

- LevelChunkSection[] achunksection = chunk.getSections();
+ LevelChunkSection[] achunksection = ichunkaccess.getSections();
ListTag nbttaglist = new ListTag();
- ThreadedLevelLightEngine lightenginethreaded = world.getChunkSource().getLightEngine();
- boolean flag = chunk.isLightCorrect();
+ ThreadedLevelLightEngine lightenginethreaded = worldserver.getChunkSource().getLightEngine();
+ boolean flag = ichunkaccess.isLightCorrect();

CompoundTag nbttagcompound2;

- for (int i = -1; i < 17; ++i) {
+ for (int i = -1; i < 17; ++i) { // Paper - conflict on loop parameter change
int finalI = i; // CraftBukkit - decompile errors
LevelChunkSection chunksection = (LevelChunkSection) Arrays.stream(achunksection).filter((chunksection1) -> {
return chunksection1 != null && chunksection1.bottomBlockY() >> 4 == finalI; // CraftBukkit - decompile errors
}).findFirst().orElse(LevelChunk.EMPTY_SECTION);
- DataLayer nibblearray = lightenginethreaded.getLayerListener(LightLayer.BLOCK).getDataLayerData(SectionPos.of(chunkcoordintpair, i));
- DataLayer nibblearray1 = lightenginethreaded.getLayerListener(LightLayer.SKY).getDataLayerData(SectionPos.of(chunkcoordintpair, i));
-
+ // Paper start - async chunk save for unload
+ DataLayer nibblearray; // block light
+ DataLayer nibblearray1; // sky light
+ if (asyncsavedata == null) {
+ nibblearray = lightenginethreaded.getLayerListener(LightLayer.BLOCK).getDataLayerData(SectionPos.of(chunkcoordintpair, i)); // Paper - diff on method change (see getAsyncSaveData)
+ nibblearray1 = lightenginethreaded.getLayerListener(LightLayer.SKY).getDataLayerData(SectionPos.of(chunkcoordintpair, i)); // Paper - diff on method change (see getAsyncSaveData)
+ } else {
+ nibblearray = asyncsavedata.blockLight[i + 1]; // +1 to offset the -1 starting index
+ nibblearray1 = asyncsavedata.skyLight[i + 1]; // +1 to offset the -1 starting index
+ }
+ // Paper end
if (chunksection != LevelChunk.EMPTY_SECTION || nibblearray != null || nibblearray1 != null) {
nbttagcompound2 = new CompoundTag();
nbttagcompound2.putByte("Y", (byte) (i & 255));
@@ -327,21 +444,21 @@ public class ChunkSerializer {
nbttagcompound1.putBoolean("isLightOn", true);
}

- ChunkBiomeContainer biomestorage = chunk.getBiomes();
+ ChunkBiomeContainer biomestorage = ichunkaccess.getBiomes();

if (biomestorage != null) {
nbttagcompound1.putIntArray("Biomes", biomestorage.writeBiomes());
}

ListTag nbttaglist1 = new ListTag();
- Iterator iterator = chunk.getBlockEntitiesPos().iterator();
+ Iterator iterator = ichunkaccess.getBlockEntitiesPos().iterator();

CompoundTag nbttagcompound3;

while (iterator.hasNext()) {
BlockPos blockposition = (BlockPos) iterator.next();

- nbttagcompound3 = chunk.getBlockEntityNbtForSaving(blockposition);
+ nbttagcompound3 = ichunkaccess.getBlockEntityNbtForSaving(blockposition);
if (nbttagcompound3 != null) {
nbttaglist1.add(nbttagcompound3);
}
@@ -351,25 +468,25 @@ public class ChunkSerializer {
ListTag nbttaglist2 = new ListTag();

java.util.List<Entity> toUpdate = new java.util.ArrayList<>(); // Paper
- if (chunk.getStatus().getChunkType() == ChunkStatus.ChunkType.LEVELCHUNK) {
- LevelChunk chunk1 = (LevelChunk) chunk;
+ if (ichunkaccess.getStatus().getChunkType() == ChunkStatus.ChunkType.LEVELCHUNK) {
+ LevelChunk chunk = (LevelChunk) ichunkaccess;

// CraftBukkit start - store chunk persistent data in nbt
- if (!chunk1.persistentDataContainer.isEmpty()) {
- nbttagcompound1.put("ChunkBukkitValues", chunk1.persistentDataContainer.toTagCompound());
+ if (!chunk.persistentDataContainer.isEmpty()) {
+ nbttagcompound1.put("ChunkBukkitValues", chunk.persistentDataContainer.toTagCompound());
}
// CraftBukkit end

- chunk1.setLastSaveHadEntities(false);
+ chunk.setLastSaveHadEntities(false);

- for (int j = 0; j < chunk1.getEntitySlices().length; ++j) {
- Iterator iterator1 = chunk1.getEntitySlices()[j].iterator();
+ for (int j = 0; j < chunk.getEntitySlices().length; ++j) {
+ Iterator iterator1 = chunk.getEntitySlices()[j].iterator();

while (iterator1.hasNext()) {
Entity entity = (Entity) iterator1.next();
CompoundTag nbttagcompound4 = new CompoundTag();
// Paper start
- if ((int) Math.floor(entity.getX()) >> 4 != chunk1.getPos().x || (int) Math.floor(entity.getZ()) >> 4 != chunk1.getPos().z) {
+ if (asyncsavedata == null && !entity.removed && ((int) Math.floor(entity.getX()) >> 4 != chunk.getPos().x || (int) Math.floor(entity.getZ()) >> 4 != chunk.getPos().z)) { // note: coordinate checks grouped so the async/removed guard applies to both axes
toUpdate.add(entity);
continue;
}
@@ -378,7 +495,7 @@ public class ChunkSerializer {
}
// Paper end
if (entity.save(nbttagcompound4)) {
- chunk1.setLastSaveHadEntities(true);
+ chunk.setLastSaveHadEntities(true);
nbttaglist2.add(nbttagcompound4);
}
}
@@ -386,12 +503,12 @@ public class ChunkSerializer {

// Paper start - move entities to the correct chunk
for (Entity entity : toUpdate) {
- world.updateChunkPos(entity);
+ worldserver.updateChunkPos(entity);
}
// Paper end

} else {
- ProtoChunk protochunk = (ProtoChunk) chunk;
+ ProtoChunk protochunk = (ProtoChunk) ichunkaccess;

nbttaglist2.addAll(protochunk.getEntities());
nbttagcompound1.put("Lights", packOffsets(protochunk.getPackedLights()));
@@ -412,40 +529,48 @@ public class ChunkSerializer {
}

nbttagcompound1.put("Entities", nbttaglist2);
- TickList<Block> ticklist = chunk.getBlockTicks();
+ TickList<Block> ticklist = ichunkaccess.getBlockTicks(); // Paper - diff on method change (see getAsyncSaveData)

if (ticklist instanceof ProtoTickList) {
nbttagcompound1.put("ToBeTicked", ((ProtoTickList) ticklist).save());
} else if (ticklist instanceof ChunkTickList) {
nbttagcompound1.put("TileTicks", ((ChunkTickList) ticklist).save());
+ // Paper start - async chunk save for unload
+ } else if (asyncsavedata != null) {
+ nbttagcompound1.put("TileTicks", asyncsavedata.blockTickList);
+ // Paper end
} else {
- nbttagcompound1.put("TileTicks", world.getBlockTicks().save(chunkcoordintpair));
+ nbttagcompound1.put("TileTicks", worldserver.getBlockTicks().save(chunkcoordintpair)); // Paper - diff on method change (see getAsyncSaveData)
}

- TickList<Fluid> ticklist1 = chunk.getLiquidTicks();
+ TickList<Fluid> ticklist1 = ichunkaccess.getLiquidTicks(); // Paper - diff on method change (see getAsyncSaveData)

if (ticklist1 instanceof ProtoTickList) {
nbttagcompound1.put("LiquidsToBeTicked", ((ProtoTickList) ticklist1).save());
} else if (ticklist1 instanceof ChunkTickList) {
nbttagcompound1.put("LiquidTicks", ((ChunkTickList) ticklist1).save());
+ // Paper start - async chunk save for unload
+ } else if (asyncsavedata != null) {
+ nbttagcompound1.put("LiquidTicks", asyncsavedata.fluidTickList);
+ // Paper end
} else {
- nbttagcompound1.put("LiquidTicks", world.getLiquidTicks().save(chunkcoordintpair));
+ nbttagcompound1.put("LiquidTicks", worldserver.getLiquidTicks().save(chunkcoordintpair)); // Paper - diff on method change (see getAsyncSaveData)
}

- nbttagcompound1.put("PostProcessing", packOffsets(chunk.getPostProcessing()));
+ nbttagcompound1.put("PostProcessing", packOffsets(ichunkaccess.getPostProcessing()));
nbttagcompound2 = new CompoundTag();
- Iterator iterator2 = chunk.getHeightmaps().iterator();
+ Iterator iterator2 = ichunkaccess.getHeightmaps().iterator();

while (iterator2.hasNext()) {
Entry<Heightmap.Types, Heightmap> entry = (Entry) iterator2.next();

- if (chunk.getStatus().heightmapsAfter().contains(entry.getKey())) {
+ if (ichunkaccess.getStatus().heightmapsAfter().contains(entry.getKey())) {
nbttagcompound2.put(((Heightmap.Types) entry.getKey()).getSerializationKey(), new LongArrayTag(((Heightmap) entry.getValue()).getRawData()));
}
}

nbttagcompound1.put("Heightmaps", nbttagcompound2);
- nbttagcompound1.put("Structures", packStructureData(chunkcoordintpair, chunk.getAllStarts(), chunk.getAllReferences()));
+ nbttagcompound1.put("Structures", packStructureData(chunkcoordintpair, ichunkaccess.getAllStarts(), ichunkaccess.getAllReferences()));
return nbttagcompound;
}
// Paper start - this is saved with the player
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
index 9cffef2098fbfba89ddd88a45bde33c07660497a..684442b7175e30b6d4cafb2f7d2d4c10517cc33d 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/ChunkStorage.java
@@ -3,6 +3,10 @@ package net.minecraft.world.level.chunk.storage;
import com.mojang.datafixers.DataFixer;
import java.io.File;
import java.io.IOException;
+// Paper start
+import java.util.concurrent.CompletableFuture;
+import java.util.concurrent.CompletionException;
+// Paper end
import java.util.function.Supplier;
import javax.annotation.Nullable;
import net.minecraft.SharedConstants;
@@ -21,32 +25,41 @@ import net.minecraft.world.level.storage.DimensionDataStorage;

public class ChunkStorage implements AutoCloseable {

- private final IOWorker worker; public IOWorker getIOWorker() { return worker; } // Paper - OBFHELPER
+ // Paper - OBFHELPER - nuke IOWorker
protected final DataFixer fixerUpper;
@Nullable
- private LegacyStructureDataHandler legacyStructureHandler;
+ private volatile LegacyStructureDataHandler legacyStructureHandler; // Paper - async chunk loading
+
+ private final Object persistentDataLock = new Object(); // Paper
+ public final RegionFileStorage regionFileCache;
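+ // persistentDataLock guards lazy creation and use of legacyStructureHandler, which is now
+ // reachable from both the main thread and async chunk loading threads; regionFileCache is the
+ // direct (lock-protected) region file access that replaces the vanilla IOWorker queue.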

public ChunkStorage(File file, DataFixer datafixer, boolean flag) {
+ this.regionFileCache = new RegionFileStorage(file, flag); // Paper - nuke IOWorker
this.fixerUpper = datafixer;
- this.worker = new IOWorker(file, flag, "chunk");
+ // Paper - nuke IOWorker
}

// CraftBukkit start
private boolean check(ServerChunkCache cps, int x, int z) throws IOException {
ChunkPos pos = new ChunkPos(x, z);
if (cps != null) {
- com.google.common.base.Preconditions.checkState(org.bukkit.Bukkit.isPrimaryThread(), "primary thread");
- if (cps.hasChunk(x, z)) {
+ //com.google.common.base.Preconditions.checkState(org.bukkit.Bukkit.isPrimaryThread(), "primary thread"); // Paper - this function is now MT-Safe
+ if (cps.getChunkAtIfCachedImmediately(x, z) != null) { // Paper - isLoaded is a ticket level check, not a chunk loaded check!
return true;
}
}

- CompoundTag nbt = read(pos);
- if (nbt != null) {
- CompoundTag level = nbt.getCompound("Level");
- if (level.getBoolean("TerrainPopulated")) {
- return true;
- }
+
+ // Paper start - prioritize
+ CompoundTag nbt = cps == null ? read(pos) :
+ com.destroystokyo.paper.io.PaperFileIOThread.Holder.INSTANCE.loadChunkData((ServerLevel)cps.getLevel(), x, z,
+ com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGHER_PRIORITY, false, true).chunkData;
+ // Paper end
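+ // Routing the read through the file IO thread (rather than the raw region file) means an
+ // in-flight or queued write for this chunk is returned as the pending data instead of stale
+ // disk contents, and HIGHER_PRIORITY moves the read ahead of queued background IO.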
+ if (nbt != null) {
+ CompoundTag level = nbt.getCompound("Level");
+ if (level.getBoolean("TerrainPopulated")) {
+ return true;
+ }

ChunkStatus status = ChunkStatus.byName(level.getString("Status"));
if (status != null && status.isOrAfter(ChunkStatus.FEATURES)) {
@@ -77,11 +90,13 @@ public class ChunkStorage implements AutoCloseable {
if (i < 1493) {
nbttagcompound = NbtUtils.update(this.fixerUpper, DataFixTypes.CHUNK, nbttagcompound, i, 1493);
if (nbttagcompound.getCompound("Level").getBoolean("hasLegacyStructureData")) {
+ synchronized (this.persistentDataLock) { // Paper - Async chunk loading
if (this.legacyStructureHandler == null) {
this.legacyStructureHandler = LegacyStructureDataHandler.getLegacyStructureHandler(resourcekey, (DimensionDataStorage) supplier.get());
}

nbttagcompound = this.legacyStructureHandler.updateFromLegacy(nbttagcompound);
+ } // Paper - Async chunk loading
}
}

@@ -99,22 +114,20 @@ public class ChunkStorage implements AutoCloseable {

@Nullable
public CompoundTag read(ChunkPos chunkcoordintpair) throws IOException {
- return this.worker.load(chunkcoordintpair);
+ return this.regionFileCache.read(chunkcoordintpair);
}

- public void write(ChunkPos chunkcoordintpair, CompoundTag nbttagcompound) {
- this.worker.store(chunkcoordintpair, nbttagcompound);
+ public void write(ChunkPos chunkcoordintpair, CompoundTag nbttagcompound) throws IOException { // Paper - OBFHELPER (the obfuscated wrapper and its target collapsed to this one signature after remapping)
+ this.regionFileCache.write(chunkcoordintpair, nbttagcompound);
if (this.legacyStructureHandler != null) {
+ synchronized (this.persistentDataLock) { // Paper - Async chunk loading
this.legacyStructureHandler.removeIndex(chunkcoordintpair.toLong());
+ } // Paper - Async chunk loading
}
-
- }
-
- public void flushWorker() {
- this.worker.synchronize().join();
}

public void close() throws IOException {
- this.worker.close();
+ this.regionFileCache.close();
}
}
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
index 4d96e5ed28c910387c0a4238c9036c7a12458f57..7ecde2cb15fa0b1b5195fc560c559f2c367e336f 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
@@ -45,6 +45,8 @@ public class RegionFile implements AutoCloseable {
protected final RegionBitmap usedSectors;
public final File file; // Paper

+ public final java.util.concurrent.locks.ReentrantLock fileLock = new java.util.concurrent.locks.ReentrantLock(true); // Paper
+
// Paper start - Cache chunk status
private final ChunkStatus[] statuses = new ChunkStatus[32 * 32];

@@ -251,7 +253,7 @@ public class RegionFile implements AutoCloseable {
return (byteCount + 4096 - 1) / 4096;
}

- public boolean doesChunkExist(ChunkPos pos) {
+ public synchronized boolean doesChunkExist(ChunkPos pos) { // Paper - synchronized
int i = this.getOffset(pos);

if (i == 0) {
@@ -411,6 +413,11 @@ public class RegionFile implements AutoCloseable {
}

public void close() throws IOException {
+ // Paper start - Prevent regionfiles from being closed during use
+ this.fileLock.lock();
+ synchronized (this) {
+ try {
+ // Paper end
this.closed = true; // Paper
try {
this.padToFullSector();
@@ -421,6 +428,10 @@ public class RegionFile implements AutoCloseable {
this.file.close();
}
}
+ } finally { // Paper start - Prevent regionfiles from being closed during use
+ this.fileLock.unlock();
+ }
+ } // Paper end
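+ // close() holds both locks: fileLock orders the close against any IO that acquired the lock
+ // via RegionFileStorage.getFile(pos, existingOnly, true), while the instance monitor covers
+ // the synchronized header accessors (for example doesChunkExist above).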

}

diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
index 6f1c96e4325caf6b4762700ad2286d9ea41515c9..0498982ac14f20145d68dbf64a46bcaacf5516ef 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
@@ -17,7 +17,7 @@ import net.minecraft.server.MinecraftServer;
import net.minecraft.util.ExceptionCollector;
import net.minecraft.world.level.ChunkPos;

-public final class RegionFileStorage implements AutoCloseable {
+public class RegionFileStorage implements AutoCloseable { // Paper - no final

public final Long2ObjectLinkedOpenHashMap<RegionFile> regionCache = new Long2ObjectLinkedOpenHashMap();
private final File folder;
@@ -30,16 +30,27 @@ public final class RegionFileStorage implements AutoCloseable {


// Paper start
- public RegionFile getRegionFileIfLoaded(ChunkPos chunkcoordintpair) {
+ public synchronized RegionFile getRegionFileIfLoaded(ChunkPos chunkcoordintpair) { // Paper - synchronize for async io
return this.regionCache.getAndMoveToFirst(ChunkPos.asLong(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ()));
}

// Paper end
- public RegionFile getFile(ChunkPos chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit // Paper - private > public
+ public synchronized RegionFile getFile(ChunkPos chunkcoordintpair, boolean existingOnly) throws IOException { // CraftBukkit // Paper - private > public, synchronize
+ // Paper start - add lock parameter
+ return this.getFile(chunkcoordintpair, existingOnly, false);
+ }
+ public synchronized RegionFile getFile(ChunkPos chunkcoordintpair, boolean existingOnly, boolean lock) throws IOException {
+ // Paper end
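+ // Contract of the lock parameter: when lock is true, the returned RegionFile is handed back
+ // with regionfile.fileLock already held. The lock must be taken while still inside this
+ // synchronized method so a concurrent close() cannot run between lookup and lock; callers
+ // (read/write below) release it in a finally block.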
long i = ChunkPos.asLong(chunkcoordintpair.getRegionX(), chunkcoordintpair.getRegionZ());
RegionFile regionfile = (RegionFile) this.regionCache.getAndMoveToFirst(i);

if (regionfile != null) {
+ // Paper start
+ if (lock) {
+ // must be in this synchronized block
+ regionfile.fileLock.lock();
+ }
+ // Paper end
return regionfile;
} else {
if (this.regionCache.size() >= com.destroystokyo.paper.PaperConfig.regionFileCacheSize) { // Paper - configurable
@@ -55,6 +66,12 @@ public final class RegionFileStorage implements AutoCloseable {
RegionFile regionfile1 = new RegionFile(file, this.folder, this.sync);

this.regionCache.putAndMoveToFirst(i, regionfile1);
+ // Paper start
+ if (lock) {
+ // must be in this synchronized block
+ regionfile1.fileLock.lock();
+ }
+ // Paper end
return regionfile1;
}
}
@@ -130,11 +147,12 @@ public final class RegionFileStorage implements AutoCloseable {
@Nullable
public CompoundTag read(ChunkPos pos) throws IOException {
// CraftBukkit start - SPIGOT-5680: There's no good reason to preemptively create files on read, save that for writing
- RegionFile regionfile = this.getFile(pos, true);
+ RegionFile regionfile = this.getFile(pos, true, true); // Paper
if (regionfile == null) {
return null;
}
// CraftBukkit end
+ try { // Paper
DataInputStream datainputstream = regionfile.getChunkDataInputStream(pos);
// Paper start
if (regionfile.isOversized(pos.x, pos.z)) {
@@ -172,10 +190,14 @@ public final class RegionFileStorage implements AutoCloseable {
}

return nbttagcompound;
+ } finally { // Paper start
+ regionfile.fileLock.unlock();
+ } // Paper end
}

protected void write(ChunkPos pos, CompoundTag tag) throws IOException {
- RegionFile regionfile = this.getFile(pos, false); // CraftBukkit
+ RegionFile regionfile = this.getFile(pos, false, true); // CraftBukkit // Paper
+ try { // Paper
int attempts = 0; Exception laste = null; while (attempts++ < 5) { try { // Paper
DataOutputStream dataoutputstream = regionfile.getChunkDataOutputStream(pos);
Throwable throwable = null;
@@ -214,9 +236,12 @@ public final class RegionFileStorage implements AutoCloseable {
MinecraftServer.LOGGER.error("Failed to save chunk", laste);
}
// Paper end
+ } finally { // Paper start
+ regionfile.fileLock.unlock();
+ } // Paper end
}

- public void close() throws IOException {
+ public synchronized void close() throws IOException { // Paper -> synchronized
ExceptionCollector<IOException> exceptionsuppressor = new ExceptionCollector<>();
ObjectIterator objectiterator = this.regionCache.values().iterator();

@@ -243,4 +268,12 @@ public final class RegionFileStorage implements AutoCloseable {
}

}
+
+ // CraftBukkit start
+ public synchronized boolean chunkExists(ChunkPos pos) throws IOException { // Paper - synchronize
+ RegionFile regionfile = getFile(pos, true);
+
+ return regionfile != null ? regionfile.hasChunk(pos) : false;
+ }
+ // CraftBukkit end
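+ // chunkExists takes the storage monitor but not fileLock, presumably because hasChunk only
+ // consults the in-memory region header (doesChunkExist was made synchronized for that reason).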
}
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
index 059a658aa87d19025daa66d98f78112d5f5be4e3..bb30fb085a6c5edb717ad006c0ab481723ca1b6b 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/SectionStorage.java
@@ -30,28 +30,29 @@ import net.minecraft.world.level.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

-public class SectionStorage<R> implements AutoCloseable {
+public class SectionStorage<R> extends RegionFileStorage implements AutoCloseable { // Paper - nuke IOWorker

private static final Logger LOGGER = LogManager.getLogger();
- private final IOWorker worker;
+ // Paper - nuke IOWorker
private final Long2ObjectMap<Optional<R>> storage = new Long2ObjectOpenHashMap();
- private final LongLinkedOpenHashSet dirty = new LongLinkedOpenHashSet();
+ public final LongLinkedOpenHashSet dirty = new LongLinkedOpenHashSet(); // Paper - private -> public
private final Function<Runnable, Codec<R>> codec;
private final Function<Runnable, R> factory;
private final DataFixer fixerUpper;
private final DataFixTypes type;

public SectionStorage(File directory, Function<Runnable, Codec<R>> codecFactory, Function<Runnable, R> factory, DataFixer datafixer, DataFixTypes datafixtypes, boolean flag) {
+ super(directory, flag); // Paper - nuke IOWorker
this.codec = codecFactory;
this.factory = factory;
this.fixerUpper = datafixer;
this.type = datafixtypes;
- this.worker = new IOWorker(directory, flag, directory.getName());
+ //this.b = new IOWorker(file, flag, file.getName()); // Paper - nuke IOWorker
}

protected void tick(BooleanSupplier shouldKeepTicking) {
while (!this.dirty.isEmpty() && shouldKeepTicking.getAsBoolean()) {
- ChunkPos chunkcoordintpair = SectionPos.of(this.dirty.firstLong()).chunk();
+ ChunkPos chunkcoordintpair = SectionPos.of(this.dirty.firstLong()).chunk(); // Paper - conflict here to avoid obfhelpers

this.writeColumn(chunkcoordintpair);
}
@@ -105,13 +106,18 @@ public class SectionStorage<R> implements AutoCloseable {
}

private void readColumn(ChunkPos chunkcoordintpair) {
- this.readColumn(chunkcoordintpair, NbtOps.INSTANCE, this.tryRead(chunkcoordintpair));
+ // Paper start - load data in function
+ this.loadInData(chunkcoordintpair, this.tryRead(chunkcoordintpair));
+ }
+ public void loadInData(ChunkPos chunkPos, CompoundTag compound) {
+ this.readColumn(chunkPos, NbtOps.INSTANCE, compound);
+ // Paper end
}

@Nullable
private CompoundTag tryRead(ChunkPos pos) {
try {
- return this.worker.load(pos);
+ return this.read(pos); // Paper - nuke IOWorker
} catch (IOException ioexception) {
SectionStorage.LOGGER.error("Error reading chunk {} data from disk", pos, ioexception);
return null;
@@ -157,17 +163,31 @@ public class SectionStorage<R> implements AutoCloseable {
}

private void writeColumn(ChunkPos chunkcoordintpair) {
- Dynamic<Tag> dynamic = this.writeColumn(chunkcoordintpair, NbtOps.INSTANCE);
+ Dynamic<Tag> dynamic = this.writeColumn(chunkcoordintpair, NbtOps.INSTANCE); // Paper - conflict here to avoid adding obfhelpers :)
Tag nbtbase = (Tag) dynamic.getValue();

if (nbtbase instanceof CompoundTag) {
- this.worker.store(chunkcoordintpair, (CompoundTag) nbtbase);
+ try { this.write(chunkcoordintpair, (CompoundTag) nbtbase); } catch (IOException ioexception) { SectionStorage.LOGGER.error("Error writing data to disk", ioexception); } // Paper - nuke IOWorker // TODO make this write async
} else {
SectionStorage.LOGGER.error("Expected compound tag, got {}", nbtbase);
}

}

+ // Paper start - internal get data function, copied from above
+ private CompoundTag getDataInternal(ChunkPos chunkcoordintpair) {
+ Dynamic<Tag> dynamic = this.writeColumn(chunkcoordintpair, NbtOps.INSTANCE);
+ Tag nbtbase = (Tag) dynamic.getValue();
+
+ if (nbtbase instanceof CompoundTag) {
+ return (CompoundTag)nbtbase;
+ } else {
+ SectionStorage.LOGGER.error("Expected compound tag, got {}", nbtbase);
+ }
+ return null;
+ }
+ // Paper end
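+ // getDataInternal runs the same serialization as writeColumn above but returns the tag
+ // instead of writing it, letting the async chunk system snapshot still-dirty section data.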
+
private <T> Dynamic<T> writeColumn(ChunkPos chunkcoordintpair, DynamicOps<T> dynamicops) {
Map<T, T> map = Maps.newHashMap();

@@ -213,9 +233,9 @@ public class SectionStorage<R> implements AutoCloseable {
public void flush(ChunkPos chunkcoordintpair) {
if (!this.dirty.isEmpty()) {
for (int i = 0; i < 16; ++i) {
- long j = SectionPos.of(chunkcoordintpair, i).asLong();
+ long j = SectionPos.of(chunkcoordintpair, i).asLong(); // Paper - conflict here to avoid obfhelpers

- if (this.dirty.contains(j)) {
+ if (this.dirty.contains(j)) { // Paper - conflict here to avoid obfhelpers
this.writeColumn(chunkcoordintpair);
return;
}
@@ -224,7 +244,26 @@ public class SectionStorage<R> implements AutoCloseable {

}

- public void close() throws IOException {
- this.worker.close();
+// Paper start - nuke IOWorker
+// public void close() throws IOException {
+// this.b.close();
+// }
+// Paper end
+
+ // Paper start - get data function
+ public CompoundTag getData(ChunkPos chunkcoordintpair) {
+ // Note: Copied from above
+ // This is checking if the data exists, then it builds it later in getDataInternal(ChunkCoordIntPair)
+ if (!this.dirty.isEmpty()) {
+ for (int i = 0; i < 16; ++i) {
+ long j = SectionPos.of(chunkcoordintpair, i).asLong();
+
+ if (this.dirty.contains(j)) {
+ return this.getDataInternal(chunkcoordintpair);
+ }
+ }
+ }
+ return null;
}
+ // Paper end
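+ // Note: getData returns null both when nothing in this column is dirty and when serialization
+ // fails, so callers must treat null as "no pending in-memory data" and fall back to disk.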
}
diff --git a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
index a0615e4ba015cca4fe074de63b87d0bff84b1a14..52444619a4bae80a12bf296fbe07fa811adf806e 100644
--- a/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
+++ b/src/main/java/org/bukkit/craftbukkit/CraftWorld.java
@@ -545,22 +545,23 @@ public class CraftWorld implements World {
return true;
}

- net.minecraft.world.level.chunk.storage.RegionFile file;
- try {
- file = world.getChunkSource().chunkMap.getIOWorker().getRegionFileCache().getFile(chunkPos, false);
- } catch (IOException ex) {
- throw new RuntimeException(ex);
- }
+ ChunkStatus status = world.getChunkSource().chunkMap.getStatusOnDiskNoLoad(x, z); // Paper - async io - move to own method

- ChunkStatus status = file.getStatusIfCached(x, z);
- if (!file.hasChunk(chunkPos) || (status != null && status != ChunkStatus.FULL)) {
+ // Paper start - async io
+ if (status == ChunkStatus.EMPTY) {
+ // does not exist on disk
return false;
}

+ if (status == null) { // at this stage we don't know what it is on disk
ChunkAccess chunk = world.getChunkSource().getChunk(x, z, ChunkStatus.EMPTY, true);
if (!(chunk instanceof ImposterProtoChunk) && !(chunk instanceof net.minecraft.world.level.chunk.LevelChunk)) {
return false;
}
+ } else if (status != ChunkStatus.FULL) {
+ return false; // not full status on disk
+ }
+ // Paper end

// fall through to load
// we do this so we do not re-read the chunk data on disk
@@ -2483,6 +2484,34 @@ public class CraftWorld implements World {
public DragonBattle getEnderDragonBattle() {
return (getHandle().dragonFight() == null) ? null : new CraftDragonBattle(getHandle().dragonFight());
}
+ // Paper start
+ @Override
+ public CompletableFuture<Chunk> getChunkAtAsync(int x, int z, boolean gen, boolean urgent) {
+ if (Bukkit.isPrimaryThread()) {
+ net.minecraft.world.level.chunk.LevelChunk immediate = this.world.getChunkSource().getChunkAtIfLoadedImmediately(x, z);
+ if (immediate != null) {
+ return CompletableFuture.completedFuture(immediate.getBukkitChunk());
+ }
+ } else {
+ CompletableFuture<Chunk> future = new CompletableFuture<Chunk>();
+ world.getServer().execute(() -> {
+ getChunkAtAsync(x, z, gen, urgent).whenComplete((chunk, err) -> {
+ if (err != null) {
+ future.completeExceptionally(err);
+ } else {
+ future.complete(chunk);
+ }
+ });
+ });
+ return future;
+ }
+
+ return this.world.getChunkSource().getChunkAtAsynchronously(x, z, gen, urgent).thenComposeAsync((either) -> {
+ net.minecraft.world.level.chunk.LevelChunk chunk = (net.minecraft.world.level.chunk.LevelChunk) either.left().orElse(null);
+ return CompletableFuture.completedFuture(chunk == null ? null : chunk.getBukkitChunk());
+ }, net.minecraft.server.MinecraftServer.getServer());
+ }
+ // Paper end
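+ // Minimal plugin-side sketch (the public World#getChunkAtAsync overloads that funnel into the
+ // method above are declared in Paper's API patches, not in this diff):
+ //   world.getChunkAtAsync(x, z).thenAccept(chunk -> {
+ //       // completes on the main thread; the future composes on MinecraftServer.getServer()
+ //   });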

// Spigot start
@Override
diff --git a/src/main/java/org/bukkit/craftbukkit/entity/CraftEntity.java b/src/main/java/org/bukkit/craftbukkit/entity/CraftEntity.java
index 7ad4fb57af32cc1b8278688381e1b058ed8437db..76d652386806fd11961611486a1d0a12fe9616a4 100644
--- a/src/main/java/org/bukkit/craftbukkit/entity/CraftEntity.java
+++ b/src/main/java/org/bukkit/craftbukkit/entity/CraftEntity.java
@@ -11,7 +11,9 @@ import net.minecraft.core.BlockPos;
import net.minecraft.nbt.CompoundTag;
import net.minecraft.nbt.Tag;
import net.minecraft.network.chat.Component;
+import net.minecraft.server.level.ChunkMap;
import net.minecraft.server.level.ServerPlayer;
+import net.minecraft.server.level.TicketType;
import net.minecraft.world.damagesource.DamageSource;
import net.minecraft.world.entity.AreaEffectCloud;
import net.minecraft.world.entity.Entity;
@@ -508,6 +510,28 @@ public abstract class CraftEntity implements org.bukkit.entity.Entity {
entity.setYHeadRot(yaw);
}

+ @Override // Paper start
+ public java.util.concurrent.CompletableFuture<Boolean> teleportAsync(Location loc, @javax.annotation.Nonnull org.bukkit.event.player.PlayerTeleportEvent.TeleportCause cause) {
+ ChunkMap playerChunkMap = ((CraftWorld) loc.getWorld()).getHandle().getChunkSource().chunkMap;
+ java.util.concurrent.CompletableFuture<Boolean> future = new java.util.concurrent.CompletableFuture<>();
+
+ loc.getWorld().getChunkAtAsyncUrgently(loc).thenCompose(chunk -> {
+ ChunkPos pair = new ChunkPos(chunk.getX(), chunk.getZ());
+ ((CraftWorld) loc.getWorld()).getHandle().getChunkSource().addTicketAtLevel(TicketType.POST_TELEPORT, pair, 31, 0);
+ net.minecraft.server.level.ChunkHolder updatingChunk = playerChunkMap.getUpdatingChunk(pair.toLong());
+ if (updatingChunk != null) {
+ return updatingChunk.getEntityTickingFuture();
+ } else {
+ return java.util.concurrent.CompletableFuture.completedFuture(com.mojang.datafixers.util.Either.left(((org.bukkit.craftbukkit.CraftChunk)chunk).getHandle()));
+ }
+ }).thenAccept((chunk) -> future.complete(teleport(loc, cause))).exceptionally(ex -> {
+ future.completeExceptionally(ex);
+ return null;
+ });
+ return future;
+ }
+ // Paper end
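+ // The POST_TELEPORT ticket at level 31 holds the destination chunk at entity-ticking level
+ // while the teleport completes, and the entity-ticking future is awaited so the entity is
+ // never placed into a chunk that is loaded but not yet ticking entities.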
+
@Override
public boolean teleport(Location location) {
return teleport(location, TeleportCause.PLUGIN);
diff --git a/src/main/java/org/spigotmc/WatchdogThread.java b/src/main/java/org/spigotmc/WatchdogThread.java
index 16f6163bb53e73aa4ab6e22365342613b6b38118..33a66322d253c7562ae5acbdbc6cc87f7d72a9af 100644
--- a/src/main/java/org/spigotmc/WatchdogThread.java
+++ b/src/main/java/org/spigotmc/WatchdogThread.java
@@ -6,6 +6,7 @@ import java.lang.management.ThreadInfo;
import java.util.logging.Level;
import java.util.logging.Logger;
import com.destroystokyo.paper.PaperConfig;
+import com.destroystokyo.paper.io.chunk.ChunkTaskManager; // Paper
import net.minecraft.server.MinecraftServer;
import org.bukkit.Bukkit;

@@ -116,6 +117,7 @@ public class WatchdogThread extends Thread
// Paper end - Different message for short timeout
log.log( Level.SEVERE, "------------------------------" );
log.log( Level.SEVERE, "Server thread dump (Look for plugins here before reporting to Paper!):" ); // Paper
+ ChunkTaskManager.dumpAllChunkLoadInfo(); // Paper
dumpThread( ManagementFactory.getThreadMXBean().getThreadInfo( MinecraftServer.getServer().serverThread.getId(), Integer.MAX_VALUE ), log );
log.log( Level.SEVERE, "------------------------------" );
//