Paper/Spigot-Server-Patches/0489-Allow-multiple-callbacks-to-schedule-for-Callback-Ex.patch


From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Aikar <aikar@aikar.co>
Date: Tue, 21 Apr 2020 03:51:53 -0400
Subject: [PATCH] Allow multiple callbacks to schedule for Callback Executor

ChunkMapDistance polls multiple entries for pendingChunkUpdates

Each of these has the potential to move a chunk in and out of
"Loaded" state, which will result in multiple callbacks being
needed within a single tick of ChunkMapDistance

Use an ArrayDeque to store this queue
We also make sure to implement a pattern that is recursion safe.

diff --git a/src/main/java/net/minecraft/server/PlayerChunkMap.java b/src/main/java/net/minecraft/server/PlayerChunkMap.java
index 030c980b522c4cada800e5d8ca47f0b8733bf5b6..4d591d620262e8c4ed0508b01e26ef7355e75e88 100644
--- a/src/main/java/net/minecraft/server/PlayerChunkMap.java
+++ b/src/main/java/net/minecraft/server/PlayerChunkMap.java
@@ -110,24 +110,32 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
 
     public final CallbackExecutor callbackExecutor = new CallbackExecutor();
     public static final class CallbackExecutor implements java.util.concurrent.Executor, Runnable {
 
-        private Runnable queued;
+        // Paper start - replace impl with recursive safe multi entry queue
+        // it's possible to schedule multiple tasks currently, so it's vital we change this impl
+        // If we recurse into the executor again, we will append to another queue, ensuring task order consistency
+        private java.util.ArrayDeque<Runnable> queued = new java.util.ArrayDeque<>();
 
         @Override
         public void execute(Runnable runnable) {
-            if (queued != null) {
-                throw new IllegalStateException("Already queued");
+            if (queued == null) {
+                queued = new java.util.ArrayDeque<>();
             }
-            queued = runnable;
+            queued.add(runnable);
         }
 
         @Override
         public void run() {
-            Runnable task = queued;
+            if (queued == null) {
+                return;
+            }
+            java.util.ArrayDeque<Runnable> queue = queued;
             queued = null;
-            if (task != null) {
+            Runnable task;
+            while ((task = queue.pollFirst()) != null) {
                 task.run();
             }
         }
+        // Paper end
     };
     // CraftBukkit end
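
For reference, here is a minimal, self-contained sketch of the recursion-safe queue pattern the hunk above applies. The class name RecursionSafeExecutor and the main() demo are illustrative only and are not part of the Paper or CraftBukkit codebase; the real implementation is the CallbackExecutor field shown in the diff.

import java.util.ArrayDeque;
import java.util.concurrent.Executor;

// Illustrative sketch only (not Paper code) of the pattern used by CallbackExecutor:
// run() detaches the current queue before draining it, so tasks scheduled from inside
// a running task land in a fresh queue (one queue per "depth level") and task order
// stays consistent.
final class RecursionSafeExecutor implements Executor, Runnable {

    private ArrayDeque<Runnable> queued = new ArrayDeque<>();

    @Override
    public void execute(Runnable runnable) {
        if (queued == null) {
            // run() nulled the field before draining; start a fresh queue for
            // anything scheduled while tasks are executing.
            queued = new ArrayDeque<>();
        }
        queued.add(runnable);
    }

    @Override
    public void run() {
        if (queued == null) {
            return; // nothing scheduled since the last drain
        }
        ArrayDeque<Runnable> queue = queued;
        queued = null; // detach so recursive execute() calls go into a new queue
        Runnable task;
        while ((task = queue.pollFirst()) != null) {
            task.run();
        }
    }

    public static void main(String[] args) {
        RecursionSafeExecutor executor = new RecursionSafeExecutor();
        executor.execute(() -> {
            System.out.println("task A");
            // Scheduled mid-drain: goes into a new queue, runs on the next run() call.
            executor.execute(() -> System.out.println("task C"));
        });
        executor.execute(() -> System.out.println("task B"));
        executor.run(); // prints "task A", then "task B"
        executor.run(); // prints "task C"
    }
}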