Paper-Old/Spigot-Server-Patches/0537-Optimize-Light-Engine.patch

From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Aikar <aikar@aikar.co>
Date: Thu, 4 Jun 2020 22:43:29 -0400
Subject: [PATCH] Optimize Light Engine
Massive update to the light engine to improve performance and chunk loading/generation.
1) Massive bit packing/unpacking optimizations and inlining.
A lot of the cost comes from constantly packing and unpacking position bits.
We now inline most bit operations and re-use the unpacked base x/y/z values
in many places (a sketch of the packed layout follows this list).
This lets the CPU do all the math at once instead of jumping in and out of
function calls, and this much logic is likely over the JVM's inlining limit
for the JIT anyway.
2) Applied a few of JellySquid's Phosphor mod optimizations, such as:
- ensuring we don't notify neighbor chunks when the neighbor chunk doesn't need to be notified
- reducing hasLight checks when initializing light, and probably a few more; changes are tagged JellySquid where Phosphor influence was used.
3) Optimize hot-path access to the updating chunk to reduce branching.
4) Optimize getBlock accesses to reduce branching and unpacking.
5) Add a separate urgent bucket for chunk light tasks. These tasks always cut in line ahead of non-blocking light tasks.
6) Retain chunk priority while light tasks are enqueued. Previously, if a task came in at
high priority while the queue was already full of lower-priority tasks, it was simply added
to the end. Now it can cut in line to the front. This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even if queueUpdate is called multiple times, to improve efficiency.
8) Fix an NPE risk that crashes the server when getting nibble data.
Fixes #3489
Fixes #3363
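The inline math that recurs throughout the hunks below decodes the packed block-position long; judging by the shifts used, x sits in the top 26 bits, z in the middle 26, and y in the low 12. As a minimal sketch of that layout (class and method names here are illustrative, not the actual NMS API), a matching pack/unpack pair looks like this:

// Illustrative pack/unpack for the block-position layout implied by the
// (i >> 38), ((i << 52) >> 52) and ((i << 26) >> 38) expressions in this patch.
final class PackedBlockPos {
    static long pack(int x, int y, int z) {
        return ((long) x & 0x3FFFFFFL) << 38  // x: bits 38..63 (26 bits)
             | ((long) z & 0x3FFFFFFL) << 12  // z: bits 12..37 (26 bits)
             | ((long) y & 0xFFFL);           // y: bits 0..11  (12 bits)
    }
    static int unpackX(long packed) { return (int) (packed >> 38); }         // sign-extends 26-bit x
    static int unpackY(long packed) { return (int) ((packed << 52) >> 52); } // sign-extends 12-bit y
    static int unpackZ(long packed) { return (int) ((packed << 26) >> 38); } // sign-extends 26-bit z
}

Inlining these shifts at the call sites and keeping the unpacked x/y/z in locals is what the "// Paper - inline math" hunks do, so neighbor offsets become plain integer additions instead of repeated decode/encode round trips.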
diff --git a/src/main/java/net/minecraft/server/Block.java b/src/main/java/net/minecraft/server/Block.java
index d2207a2c95690de586ab2d181b64955a6d2ea70d..66244a9d0e253b3709df4ae2adcd21e44ebbfc90 100644
--- a/src/main/java/net/minecraft/server/Block.java
+++ b/src/main/java/net/minecraft/server/Block.java
@@ -310,7 +310,7 @@ public class Block implements IMaterial {
return false;
}
- @Deprecated
+ public final boolean canOcclude(IBlockData blockData) { return n(blockData); } @Deprecated // Paper - OBFHELPER
public final boolean n(IBlockData iblockdata) {
return this.j;
}
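The single-line change above is the Paper OBFHELPER idiom: a final, readably named delegate is added beside the obfuscated method so later patches can call it by name, and the delegate is trivial enough for the JIT to inline away. A minimal sketch of the idiom with hypothetical names:

class ExampleBlock {
    private final boolean occluding = true;

    // readable alias added by the patch; simply delegates to the obfuscated method
    public final boolean canOcclude() { return n(); } // Paper - OBFHELPER
    @Deprecated
    public final boolean n() { return this.occluding; } // obfuscated original, kept for vanilla call sites
}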
diff --git a/src/main/java/net/minecraft/server/ChunkProviderServer.java b/src/main/java/net/minecraft/server/ChunkProviderServer.java
index 81a200ea4c533744890b6e19dd0d83f57351906c..46d62aa414b340a14302e52bb882be08b58fa9a9 100644
--- a/src/main/java/net/minecraft/server/ChunkProviderServer.java
+++ b/src/main/java/net/minecraft/server/ChunkProviderServer.java
@@ -1090,7 +1090,7 @@ public class ChunkProviderServer extends IChunkProvider {
if (ChunkProviderServer.this.tickDistanceManager()) {
return true;
} else {
- ChunkProviderServer.this.lightEngine.queueUpdate();
+ //ChunkProviderServer.this.lightEngine.queueUpdate(); // Paper - not needed
return super.executeNext() || execChunkTask; // Paper
}
} finally {
diff --git a/src/main/java/net/minecraft/server/IBlockData.java b/src/main/java/net/minecraft/server/IBlockData.java
index 296b41bf36ee1ace5bd9db2b810bf926b5f5278f..b39554faf235b6f81c86542673dfb4508d4c3e5a 100644
--- a/src/main/java/net/minecraft/server/IBlockData.java
+++ b/src/main/java/net/minecraft/server/IBlockData.java
@@ -24,6 +24,7 @@ public class IBlockData extends BlockDataAbstract<Block, IBlockData> implements
private final boolean e;
private final boolean isAir; // Paper
private final boolean isTicking; // Paper
+ private final boolean canOcclude; // Paper
public IBlockData(Block block, ImmutableMap<IBlockState<?>, Comparable<?>> immutablemap) {
super(block, immutablemap);
@@ -31,6 +32,7 @@ public class IBlockData extends BlockDataAbstract<Block, IBlockData> implements
this.e = block.o(this);
this.isAir = this.getBlock().isAir(this); // Paper
this.isTicking = this.getBlock().isTicking(this); // Paper
+ this.canOcclude = this.getBlock().canOcclude(this); // Paper
}
public void c() {
@@ -83,7 +85,7 @@ public class IBlockData extends BlockDataAbstract<Block, IBlockData> implements
return this.c == null || this.c.h;
}
- public boolean g() {
+ public final boolean g() { // Paper
return this.e;
}
@@ -151,8 +153,8 @@ public class IBlockData extends BlockDataAbstract<Block, IBlockData> implements
return this.c != null ? this.c.c : this.getBlock().k(this, iblockaccess, blockposition);
}
- public boolean o() {
- return this.c != null ? this.c.b : this.getBlock().n(this);
+ public final boolean o() { // Paper
+ return canOcclude; // Paper
}
public VoxelShape getShape(IBlockAccess iblockaccess, BlockPosition blockposition) {
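The IBlockData change above mirrors the existing isAir/isTicking fields: the occlusion predicate is evaluated once when the immutable state is constructed, and o() then returns a final field, dropping the cache-null branch and the call into the Block on the hot lighting path. A minimal sketch of that caching pattern, using hypothetical names:

class ExampleBlockState {
    private final boolean canOcclude; // cached once, like the Paper field above

    ExampleBlockState(ExampleBlock block) {
        this.canOcclude = block.computeCanOcclude(this); // evaluated at construction only
    }

    // hot path: a final-field read, no null check and no virtual dispatch
    final boolean canOcclude() { return this.canOcclude; }
}

interface ExampleBlock {
    boolean computeCanOcclude(ExampleBlockState state);
}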
diff --git a/src/main/java/net/minecraft/server/LightEngineBlock.java b/src/main/java/net/minecraft/server/LightEngineBlock.java
index 07fadc21ee12138b52cc77c50da536fec5b032f5..a61d0a27e9525505eedaec8cde44216e807eb9a8 100644
--- a/src/main/java/net/minecraft/server/LightEngineBlock.java
+++ b/src/main/java/net/minecraft/server/LightEngineBlock.java
@@ -13,9 +13,11 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
}
private int d(long i) {
- int j = BlockPosition.b(i);
- int k = BlockPosition.c(i);
- int l = BlockPosition.d(i);
+ // Paper start - inline math
+ int j = (int) (i >> 38);
+ int k = (int) ((i << 52) >> 52);
+ int l = (int) ((i << 26) >> 38);
+ // Paper end
IBlockAccess iblockaccess = this.a.c(j >> 4, l >> 4);
return iblockaccess != null ? iblockaccess.h(this.f.d(j, k, l)) : 0;
@@ -30,25 +32,33 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
} else if (k >= 15) {
return k;
} else {
- int l = Integer.signum(BlockPosition.b(j) - BlockPosition.b(i));
- int i1 = Integer.signum(BlockPosition.c(j) - BlockPosition.c(i));
- int j1 = Integer.signum(BlockPosition.d(j) - BlockPosition.d(i));
+ // Paper start - reuse math - credit to JellySquid for idea
+ int jx = (int) (j >> 38);
+ int jy = (int) ((j << 52) >> 52);
+ int jz = (int) ((j << 26) >> 38);
+ int ix = (int) (i >> 38);
+ int iy = (int) ((i << 52) >> 52);
+ int iz = (int) ((i << 26) >> 38);
+ int l = Integer.signum(jx - ix);
+ int i1 = Integer.signum(jy - iy);
+ int j1 = Integer.signum(jz - iz);
+ // Paper end
EnumDirection enumdirection = EnumDirection.a(l, i1, j1);
if (enumdirection == null) {
return 15;
} else {
//MutableInt mutableint = new MutableInt(); // Paper - share mutableint, single threaded
- IBlockData iblockdata = this.a(j, mutableint);
-
- if (mutableint.getValue() >= 15) {
+ IBlockData iblockdata = this.getBlockOptimized(jx, jy, jz, mutableint); // Paper
+ int blockedLight = mutableint.getValue(); // Paper
+ if (blockedLight >= 15) { // Paper
return 15;
} else {
- IBlockData iblockdata1 = this.a(i, (MutableInt) null);
+ IBlockData iblockdata1 = this.getBlockOptimized(ix, iy, iz); // Paper
VoxelShape voxelshape = this.a(iblockdata1, i, enumdirection);
VoxelShape voxelshape1 = this.a(iblockdata, j, enumdirection.opposite());
- return VoxelShapes.b(voxelshape, voxelshape1) ? 15 : k + Math.max(1, mutableint.getValue());
+ return VoxelShapes.b(voxelshape, voxelshape1) ? 15 : k + Math.max(1, blockedLight); // Paper
}
}
}
@@ -56,14 +66,19 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
@Override
protected void a(long i, int j, boolean flag) {
- long k = SectionPosition.e(i);
+ // Paper start - reuse unpacking, credit to JellySquid (Didn't do full optimization though)
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(x, y, z);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineBlock.e;
int l = aenumdirection.length;
for (int i1 = 0; i1 < l; ++i1) {
EnumDirection enumdirection = aenumdirection[i1];
- long j1 = BlockPosition.a(i, enumdirection);
- long k1 = SectionPosition.e(j1);
+ long j1 = BlockPosition.getAdjacent(x, y, z, enumdirection); // Paper
+ long k1 = SectionPosition.getAdjacentFromBlockPos(x, y, z, enumdirection); // Paper
if (k == k1 || ((LightEngineStorageBlock) this.c).g(k1)) {
this.b(i, j1, j, flag);
@@ -88,27 +103,37 @@ public final class LightEngineBlock extends LightEngineLayer<LightEngineStorageB
}
}
- long j1 = SectionPosition.e(i);
- NibbleArray nibblearray = ((LightEngineStorageBlock) this.c).a(j1, true);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j1 = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ NibbleArray nibblearray = this.c.updating.getUpdatingOptimized(j1);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineBlock.e;
int k1 = aenumdirection.length;
for (int l1 = 0; l1 < k1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- long i2 = BlockPosition.a(i, enumdirection);
+ // Paper start
+ int newX = baseX + enumdirection.getAdjacentX();
+ int newY = baseY + enumdirection.getAdjacentY();
+ int newZ = baseZ + enumdirection.getAdjacentZ();
+ long i2 = BlockPosition.asLong(newX, newY, newZ);
if (i2 != j) {
- long j2 = SectionPosition.e(i2);
+ long j2 = SectionPosition.blockPosAsSectionLong(newX, newY, newZ);
+ // Paper end
NibbleArray nibblearray1;
if (j1 == j2) {
nibblearray1 = nibblearray;
} else {
- nibblearray1 = ((LightEngineStorageBlock) this.c).a(j2, true);
+ nibblearray1 = ((LightEngineStorageBlock) this.c).updating.getUpdatingOptimized(j2); // Paper
}
if (nibblearray1 != null) {
- int k2 = this.b(i2, i, this.a(nibblearray1, i2));
+ int k2 = this.b(i2, i, this.getNibbleLightInverse(nibblearray1, newX, newY, newZ)); // Paper
if (l > k2) {
l = k2;
diff --git a/src/main/java/net/minecraft/server/LightEngineGraphSection.java b/src/main/java/net/minecraft/server/LightEngineGraphSection.java
index 2eb37fb5796d423a70fa7321899a19b6625bbecc..13d067f48647dea63ef1bf3a2a3e0868074ba75f 100644
--- a/src/main/java/net/minecraft/server/LightEngineGraphSection.java
+++ b/src/main/java/net/minecraft/server/LightEngineGraphSection.java
@@ -13,14 +13,20 @@ public abstract class LightEngineGraphSection extends LightEngineGraph {
@Override
protected void a(long i, int j, boolean flag) {
+ // Paper start
+ int x = (int) (i >> 42);
+ int y = (int) (i << 44 >> 44);
+ int z = (int) (i << 22 >> 42);
+ // Paper end
for (int k = -1; k <= 1; ++k) {
for (int l = -1; l <= 1; ++l) {
for (int i1 = -1; i1 <= 1; ++i1) {
- long j1 = SectionPosition.a(i, k, l, i1);
+ if (k == 0 && l == 0 && i1 == 0) continue; // Paper
+ long j1 = (((long) (x + k) & 4194303L) << 42) | (((long) (y + l) & 1048575L)) | (((long) (z + i1) & 4194303L) << 20); // Paper
- if (j1 != i) {
+ //if (j1 != i) { // Paper - checked above
this.b(i, j1, j, flag);
- }
+ //} // Paper
}
}
}
@@ -31,10 +37,15 @@ public abstract class LightEngineGraphSection extends LightEngineGraph {
protected int a(long i, long j, int k) {
int l = k;
+ // Paper start
+ int x = (int) (i >> 42);
+ int y = (int) (i << 44 >> 44);
+ int z = (int) (i << 22 >> 42);
+ // Paper end
for (int i1 = -1; i1 <= 1; ++i1) {
for (int j1 = -1; j1 <= 1; ++j1) {
for (int k1 = -1; k1 <= 1; ++k1) {
- long l1 = SectionPosition.a(i, i1, j1, k1);
+ long l1 = (((long) (x + i1) & 4194303L) << 42) | (((long) (y + j1) & 1048575L)) | (((long) (z + k1) & 4194303L) << 20); // Paper
if (l1 == i) {
l1 = Long.MAX_VALUE;
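Section positions use a different packing than block positions; judging by the shifts above, x sits in the top 22 bits, z in the next 22, and y in the low 20. The hunks unpack the base section once, build each neighbor with plain additions, and skip the zero offset up front instead of repacking and comparing against the input. A hedged sketch with illustrative names:

// Illustrative section-position packing matching the (i >> 42), (i << 44 >> 44)
// and (i << 22 >> 42) expressions above; not the actual SectionPosition API.
final class PackedSectionPos {
    static long pack(int x, int y, int z) {
        return ((long) x & 0x3FFFFFL) << 42  // x: bits 42..63 (22 bits)
             | ((long) z & 0x3FFFFFL) << 20  // z: bits 20..41 (22 bits)
             | ((long) y & 0xFFFFFL);        // y: bits 0..19  (20 bits)
    }
    static int unpackX(long packed) { return (int) (packed >> 42); }
    static int unpackY(long packed) { return (int) (packed << 44 >> 44); }
    static int unpackZ(long packed) { return (int) (packed << 22 >> 42); }

    // Visit the 26 neighboring sections the way the loops above do:
    // unpack once, offset with additions, and skip the center explicitly
    // (which replaces the original j1 != i comparison).
    static void visitNeighbors(long section, java.util.function.LongConsumer visitor) {
        int x = unpackX(section), y = unpackY(section), z = unpackZ(section);
        for (int dx = -1; dx <= 1; ++dx) {
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dz = -1; dz <= 1; ++dz) {
                    if (dx == 0 && dy == 0 && dz == 0) continue;
                    visitor.accept(pack(x + dx, y + dy, z + dz));
                }
            }
        }
    }
}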
diff --git a/src/main/java/net/minecraft/server/LightEngineLayer.java b/src/main/java/net/minecraft/server/LightEngineLayer.java
index f72ff8495bcf704c15040676b95c51fecb72b73a..faa432779ec70c2c6b5fe7fe3523f171e26d3a5f 100644
--- a/src/main/java/net/minecraft/server/LightEngineLayer.java
+++ b/src/main/java/net/minecraft/server/LightEngineLayer.java
@@ -11,9 +11,9 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
protected final EnumSkyBlock b;
protected final S c;
private boolean f;
- protected final BlockPosition.MutableBlockPosition d = new BlockPosition.MutableBlockPosition();
+ protected final BlockPosition.MutableBlockPosition d = new BlockPosition.MutableBlockPosition(); protected final BlockPosition.MutableBlockPosition pos = d; // Paper
private final long[] g = new long[2];
- private final IBlockAccess[] h = new IBlockAccess[2];
+ private final IChunkAccess[] h = new IChunkAccess[2]; // Paper
public LightEngineLayer(ILightAccess ilightaccess, EnumSkyBlock enumskyblock, S s0) {
super(16, 256, 8192);
@@ -33,7 +33,7 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
}
@Nullable
- private IBlockAccess a(int i, int j) {
+ private IChunkAccess a(int i, int j) { // Paper
long k = ChunkCoordIntPair.pair(i, j);
for (int l = 0; l < 2; ++l) {
@@ -42,7 +42,7 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
}
}
- IBlockAccess iblockaccess = this.a.c(i, j);
+ IChunkAccess iblockaccess = (IChunkAccess) this.a.c(i, j); // Paper
for (int i1 = 1; i1 > 0; --i1) {
this.g[i1] = this.g[i1 - 1];
@@ -59,37 +59,64 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
Arrays.fill(this.h, (Object) null);
}
- protected IBlockData a(long i, @Nullable MutableInt mutableint) {
- if (i == Long.MAX_VALUE) {
- if (mutableint != null) {
- mutableint.setValue(0);
- }
-
- return Blocks.AIR.getBlockData();
+ // Paper start - unused, optimized versions below, comment out to detect changes
+// protected IBlockData a(long i, @Nullable MutableInt mutableint) {
+// if (i == Long.MAX_VALUE) {
+// if (mutableint != null) {
+// mutableint.setValue(0);
+// }
+//
+// return Blocks.AIR.getBlockData();
+// } else {
+// int j = SectionPosition.a(BlockPosition.b(i));
+// int k = SectionPosition.a(BlockPosition.d(i));
+// IBlockAccess iblockaccess = this.a(j, k);
+//
+// if (iblockaccess == null) {
+// if (mutableint != null) {
+// mutableint.setValue(16);
+// }
+//
+// return Blocks.BEDROCK.getBlockData();
+// } else {
+// this.d.g(i);
+// IBlockData iblockdata = iblockaccess.getType(this.d);
+// boolean flag = iblockdata.o() && iblockdata.g();
+//
+// if (mutableint != null) {
+// mutableint.setValue(iblockdata.b(this.a.getWorld(), (BlockPosition) this.d));
+// }
+//
+// return flag ? iblockdata : Blocks.AIR.getBlockData();
+// }
+// }
+// }
+ // optimized method with less branching for when scenarios arent needed.
+ // avoid using mutable version if can
+ protected final IBlockData getBlockOptimized(int x, int y, int z, MutableInt mutableint) {
+ IChunkAccess iblockaccess = this.a(x >> 4, z >> 4);
+
+ if (iblockaccess == null) {
+ mutableint.setValue(16);
+ return Blocks.BEDROCK.getBlockData();
} else {
- int j = SectionPosition.a(BlockPosition.b(i));
- int k = SectionPosition.a(BlockPosition.d(i));
- IBlockAccess iblockaccess = this.a(j, k);
-
- if (iblockaccess == null) {
- if (mutableint != null) {
- mutableint.setValue(16);
- }
-
- return Blocks.BEDROCK.getBlockData();
- } else {
- this.d.g(i);
- IBlockData iblockdata = iblockaccess.getType(this.d);
- boolean flag = iblockdata.o() && iblockdata.g();
-
- if (mutableint != null) {
- mutableint.setValue(iblockdata.b(this.a.getWorld(), (BlockPosition) this.d));
- }
+ this.pos.setValues(x, y, z);
+ IBlockData iblockdata = iblockaccess.getType(x, y, z);
+ mutableint.setValue(iblockdata.b(this.a.getWorld(), this.pos));
+ return iblockdata.o() && iblockdata.g() ? iblockdata : Blocks.AIR.getBlockData();
+ }
+ }
+ protected final IBlockData getBlockOptimized(int x, int y, int z) {
+ IChunkAccess iblockaccess = this.a(x >> 4, z >> 4);
- return flag ? iblockdata : Blocks.AIR.getBlockData();
- }
+ if (iblockaccess == null) {
+ return Blocks.BEDROCK.getBlockData();
+ } else {
+ IBlockData iblockdata = iblockaccess.getType(x, y, z);
+ return iblockdata.o() && iblockdata.g() ? iblockdata : Blocks.AIR.getBlockData();
}
}
+ // Paper end
protected VoxelShape a(IBlockData iblockdata, long i, EnumDirection enumdirection) {
return iblockdata.o() ? iblockdata.a(this.a.getWorld(), this.d.g(i), enumdirection) : VoxelShapes.a();
@@ -124,8 +151,9 @@ public abstract class LightEngineLayer<M extends LightEngineStorageArray<M>, S e
return i == Long.MAX_VALUE ? 0 : 15 - this.c.i(i);
}
+ protected int getNibbleLightInverse(NibbleArray nibblearray, int x, int y, int z) { return 15 - nibblearray.a(x & 15, y & 15, z & 15); } // Paper - x/y/z version of below
protected int a(NibbleArray nibblearray, long i) {
- return 15 - nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return 15 - nibblearray.a((int) (i >> 38) & 15, (int) ((i << 52) >> 52) & 15, (int) ((i << 26) >> 38) & 15); // Paper
}
@Override
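The getBlockOptimized overloads above lean on the small most-recently-used chunk cache this class already keeps (the two-entry g/h arrays), so consecutive lookups in the same chunk or a neighboring one become an array scan rather than a call back into the light access. A hedged sketch of that cache shape, with illustrative names and a loader standing in for this.a.c(chunkX, chunkZ):

// Two-slot MRU cache keyed by packed chunk coordinates; a miss loads the
// chunk (possibly null) and demotes the previous entry to the second slot.
final class TwoSlotChunkCache<C> {
    private final long[] keys = { Long.MAX_VALUE, Long.MAX_VALUE };
    private final Object[] chunks = new Object[2];

    @SuppressWarnings("unchecked")
    C get(int chunkX, int chunkZ, java.util.function.LongFunction<C> loader) {
        long key = ((long) chunkX & 0xFFFFFFFFL) | ((long) chunkZ << 32);
        for (int slot = 0; slot < 2; ++slot) {
            if (keys[slot] == key) {
                return (C) chunks[slot]; // hit; the cached value may legitimately be null
            }
        }
        C loaded = loader.apply(key);                 // miss: fetch from the light access
        keys[1] = keys[0];   chunks[1] = chunks[0];   // demote the previous entry
        keys[0] = key;       chunks[0] = loaded;      // newest entry goes in slot 0
        return loaded;
    }
}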
diff --git a/src/main/java/net/minecraft/server/LightEngineSky.java b/src/main/java/net/minecraft/server/LightEngineSky.java
index f0b57784006752e031800a12a1a3c1a5945c636b..32b52ca2462fa206b1184025cb3837d6c326db2d 100644
--- a/src/main/java/net/minecraft/server/LightEngineSky.java
+++ b/src/main/java/net/minecraft/server/LightEngineSky.java
@@ -29,21 +29,25 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
return k;
} else {
//MutableInt mutableint = new MutableInt(); // Paper - share mutableint, single threaded
- IBlockData iblockdata = this.a(j, mutableint);
-
- if (mutableint.getValue() >= 15) {
+ // Paper start - use x/y/z and optimized block lookup
+ int jx = (int) (j >> 38);
+ int jy = (int) ((j << 52) >> 52);
+ int jz = (int) ((j << 26) >> 38);
+ IBlockData iblockdata = this.getBlockOptimized(jx, jy, jz, mutableint);
+ int blockedLight = mutableint.getValue();
+ if (blockedLight >= 15) {
+ // Paper end
return 15;
} else {
- int l = BlockPosition.b(i);
- int i1 = BlockPosition.c(i);
- int j1 = BlockPosition.d(i);
- int k1 = BlockPosition.b(j);
- int l1 = BlockPosition.c(j);
- int i2 = BlockPosition.d(j);
- boolean flag = l == k1 && j1 == i2;
- int j2 = Integer.signum(k1 - l);
- int k2 = Integer.signum(l1 - i1);
- int l2 = Integer.signum(i2 - j1);
+ // Paper start - inline math
+ int ix = (int) (i >> 38);
+ int iy = (int) ((i << 52) >> 52);
+ int iz = (int) ((i << 26) >> 38);
+ boolean flag = ix == jx && iz == jz;
+ int j2 = Integer.signum(jx - ix);
+ int k2 = Integer.signum(jy - iy);
+ int l2 = Integer.signum(jz - iz);
+ // Paper end
EnumDirection enumdirection;
if (i == Long.MAX_VALUE) {
@@ -52,7 +56,7 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
enumdirection = EnumDirection.a(j2, k2, l2);
}
- IBlockData iblockdata1 = this.a(i, (MutableInt) null);
+ IBlockData iblockdata1 = i == Long.MAX_VALUE ? Blocks.AIR.getBlockData() : this.getBlockOptimized(ix, iy, iz); // Paper
VoxelShape voxelshape;
if (enumdirection != null) {
@@ -82,9 +86,9 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
}
}
- boolean flag1 = i == Long.MAX_VALUE || flag && i1 > l1;
+ boolean flag1 = i == Long.MAX_VALUE || flag && iy > jy; // Paper rename vars to iy > jy
- return flag1 && k == 0 && mutableint.getValue() == 0 ? 0 : k + Math.max(1, mutableint.getValue());
+ return flag1 && k == 0 && blockedLight == 0 ? 0 : k + Math.max(1, blockedLight); // Paper
}
}
}
@@ -92,10 +96,14 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
@Override
protected void a(long i, int j, boolean flag) {
- long k = SectionPosition.e(i);
- int l = BlockPosition.c(i);
- int i1 = SectionPosition.b(l);
- int j1 = SectionPosition.a(l);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ int i1 = baseY & 15;
+ int j1 = baseY >> 4;
+ // Paper end
int k1;
if (i1 != 0) {
@@ -110,15 +118,16 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
k1 = l1;
}
- long i2 = BlockPosition.a(i, 0, -1 - k1 * 16, 0);
- long j2 = SectionPosition.e(i2);
+ int newBaseY = baseY + (-1 - k1 * 16); // Paper
+ long i2 = BlockPosition.asLong(baseX, newBaseY, baseZ); // Paper
+ long j2 = SectionPosition.blockPosAsSectionLong(baseX, newBaseY, baseZ); // Paper
if (k == j2 || ((LightEngineStorageSky) this.c).g(j2)) {
this.b(i, i2, j, flag);
}
- long k2 = BlockPosition.a(i, EnumDirection.UP);
- long l2 = SectionPosition.e(k2);
+ long k2 = BlockPosition.asLong(baseX, baseY + 1, baseZ); // Paper
+ long l2 = SectionPosition.blockPosAsSectionLong(baseX, baseY + 1, baseZ); // Paper
if (k == l2 || ((LightEngineStorageSky) this.c).g(l2)) {
this.b(i, k2, j, flag);
@@ -133,8 +142,8 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
int k3 = 0;
while (true) {
- long l3 = BlockPosition.a(i, enumdirection.getAdjacentX(), -k3, enumdirection.getAdjacentZ());
- long i4 = SectionPosition.e(l3);
+ long l3 = BlockPosition.asLong(baseX + enumdirection.getAdjacentX(), baseY - k3, baseZ + enumdirection.getAdjacentZ()); // Paper
+ long i4 = SectionPosition.blockPosAsSectionLong(baseX + enumdirection.getAdjacentX(), baseY - k3, baseZ + enumdirection.getAdjacentZ()); // Paper
if (k == i4) {
this.b(i, l3, j, flag);
@@ -172,26 +181,36 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
}
}
- long j1 = SectionPosition.e(i);
- NibbleArray nibblearray = ((LightEngineStorageSky) this.c).a(j1, true);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j1 = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ NibbleArray nibblearray = this.c.updating.getUpdatingOptimized(j1);
+ // Paper end
EnumDirection[] aenumdirection = LightEngineSky.e;
int k1 = aenumdirection.length;
for (int l1 = 0; l1 < k1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- long i2 = BlockPosition.a(i, enumdirection);
- long j2 = SectionPosition.e(i2);
+ // Paper start
+ int newX = baseX + enumdirection.getAdjacentX();
+ int newY = baseY + enumdirection.getAdjacentY();
+ int newZ = baseZ + enumdirection.getAdjacentZ();
+ long i2 = BlockPosition.asLong(newX, newY, newZ);
+ long j2 = SectionPosition.blockPosAsSectionLong(newX, newY, newZ);
+ // Paper end
NibbleArray nibblearray1;
if (j1 == j2) {
nibblearray1 = nibblearray;
} else {
- nibblearray1 = ((LightEngineStorageSky) this.c).a(j2, true);
+ nibblearray1 = ((LightEngineStorageSky) this.c).updating.getUpdatingOptimized(j2); // Paper
}
if (nibblearray1 != null) {
if (i2 != j) {
- int k2 = this.b(i2, i, this.a(nibblearray1, i2));
+ int k2 = this.b(i2, i, this.getNibbleLightInverse(nibblearray1, newX, newY, newZ)); // Paper
if (l > k2) {
l = k2;
@@ -206,7 +225,7 @@ public final class LightEngineSky extends LightEngineLayer<LightEngineStorageSky
j2 = SectionPosition.a(j2, EnumDirection.UP);
}
- NibbleArray nibblearray2 = ((LightEngineStorageSky) this.c).a(j2, true);
+ NibbleArray nibblearray2 = this.c.updating.getUpdatingOptimized(j2); // Paper
if (i2 != j) {
int l2;
diff --git a/src/main/java/net/minecraft/server/LightEngineStorage.java b/src/main/java/net/minecraft/server/LightEngineStorage.java
index b0b7544592981bcff22c8acee7230a211918ef28..f2575fb69714397bb8e1f9d402689c7816d73b6c 100644
--- a/src/main/java/net/minecraft/server/LightEngineStorage.java
+++ b/src/main/java/net/minecraft/server/LightEngineStorage.java
@@ -20,42 +20,42 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected final LongSet c = new LongOpenHashSet();
protected final LongSet d = new LongOpenHashSet();
protected volatile M e_visible; protected final Object visibleUpdateLock = new Object(); // Paper - diff on change, should be "visible" - force compile fail on usage change
- protected final M f; // Paper - diff on change, should be "updating"
+ protected final M f; protected final M updating; // Paper - diff on change, should be "updating"
protected final LongSet g = new LongOpenHashSet();
- protected final LongSet h = new LongOpenHashSet();
+ protected final LongSet h = new LongOpenHashSet(); LongSet dirty = h; // Paper - OBFHELPER
protected final Long2ObjectMap<NibbleArray> i = Long2ObjectMaps.synchronize(new Long2ObjectOpenHashMap());
private final LongSet n = new LongOpenHashSet();
private final LongSet o = new LongOpenHashSet();
protected volatile boolean j;
protected LightEngineStorage(EnumSkyBlock enumskyblock, ILightAccess ilightaccess, M m0) {
- super(3, 16, 256);
+ super(3, 256, 256); // Paper - bump expected size of level sets to improve collisions and reduce rehashing (seen a lot of it)
this.l = enumskyblock;
this.m = ilightaccess;
- this.f = m0;
+ this.f = m0; updating = m0; // Paper
this.e_visible = m0.b(); // Paper - avoid copying light data
this.e_visible.d(); // Paper - avoid copying light data
}
- protected boolean g(long i) {
- return this.a(i, true) != null;
+ protected final boolean g(long i) { // Paper - final to help inlining
+ return this.updating.getUpdatingOptimized(i) != null; // Paper - inline to avoid branching
}
@Nullable
protected NibbleArray a(long i, boolean flag) {
// Paper start - avoid copying light data
if (flag) {
- return this.a(this.f, i);
+ return this.updating.getUpdatingOptimized(i);
} else {
synchronized (this.visibleUpdateLock) {
- return this.a(this.e_visible, i);
+ return this.e_visible.lookup.apply(i);
}
}
// Paper end - avoid copying light data
}
@Nullable
- protected NibbleArray a(M m0, long i) {
+ protected final NibbleArray a(M m0, long i) { // Paper
return m0.c(i);
}
@@ -69,27 +69,57 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
protected abstract int d(long i);
protected int i(long i) {
- long j = SectionPosition.e(i);
- NibbleArray nibblearray = this.a(j, true);
+ // Paper start - reuse and inline math, use Optimized Updating path
+ final int x = (int) (i >> 38);
+ final int y = (int) ((i << 52) >> 52);
+ final int z = (int) ((i << 26) >> 38);
+ long j = SectionPosition.blockPosAsSectionLong(x, y, z);
+ NibbleArray nibblearray = this.updating.getUpdatingOptimized(j);
+ // BUG: Sometimes returns null and crashes, try to recover, but to prevent crash just return no light.
+ if (nibblearray == null) {
+ nibblearray = this.e_visible.lookup.apply(j);
+ }
+ if (nibblearray == null) {
+ System.err.println("Null nibble, preventing crash " + BlockPosition.fromLong(i));
+ return 0;
+ }
- return nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return nibblearray.a(x & 15, y & 15, z & 15); // Paper - inline operations
+ // Paper end
}
protected void b(long i, int j) {
- long k = SectionPosition.e(i);
+ // Paper start - cache part of the math done in loop below
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+ long k = SectionPosition.blockPosAsSectionLong(x, y, z);
+ // Paper end
if (this.g.add(k)) {
this.f.a(k);
}
NibbleArray nibblearray = this.a(k, true);
-
- nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)), j);
-
- for (int l = -1; l <= 1; ++l) {
- for (int i1 = -1; i1 <= 1; ++i1) {
- for (int j1 = -1; j1 <= 1; ++j1) {
- this.h.add(SectionPosition.e(BlockPosition.a(i, i1, j1, l)));
+ nibblearray.a(x & 15, y & 15, z & 15, j); // Paper - use already calculated x/y/z
+
+ // Paper start - credit to JellySquid for a major optimization here:
+ /*
+ * An extremely important optimization is made here in regards to adding items to the pending notification set. The
+ * original implementation attempts to add the coordinate of every chunk which contains a neighboring block position
+ * even though a huge number of loop iterations will simply map to block positions within the same updating chunk.
+ *
+ * Our implementation here avoids this by pre-calculating the min/max chunk coordinates so we can iterate over only
+ * the relevant chunk positions once. This reduces what would always be 27 iterations to just 1-8 iterations.
+ *
+ * @reason Use faster implementation
+ * @author JellySquid
+ */
+ for (int z2 = (z - 1) >> 4; z2 <= (z + 1) >> 4; ++z2) {
+ for (int x2 = (x - 1) >> 4; x2 <= (x + 1) >> 4; ++x2) {
+ for (int y2 = (y - 1) >> 4; y2 <= (y + 1) >> 4; ++y2) {
+ this.dirty.add(SectionPosition.asLong(x2, y2, z2));
+ // Paper end
}
}
}
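The block comment above credits JellySquid for replacing 27 neighbor-block iterations with a direct walk over the 1-8 sections that can actually contain a neighbor. A tiny self-contained illustration (assuming 16-block sections, not Paper code) of why the section ranges collapse to a single value unless the block sits on a section boundary:

    // Counts how many sections the (x-1..x+1, y-1..y+1, z-1..z+1) neighborhood touches.
    public final class NeighborSectionCount {
        static int count(int x, int y, int z) {
            int n = 0;
            for (int sz = (z - 1) >> 4; sz <= (z + 1) >> 4; ++sz)
                for (int sx = (x - 1) >> 4; sx <= (x + 1) >> 4; ++sx)
                    for (int sy = (y - 1) >> 4; sy <= (y + 1) >> 4; ++sy)
                        ++n;
            return n;
        }
        public static void main(String[] args) {
            System.out.println(count(8, 70, 8)); // interior block -> 1 section (the old loop always did 27 adds)
            System.out.println(count(0, 70, 8)); // block on one section face -> 2 sections
            System.out.println(count(0, 64, 0)); // block on a section corner -> 8 sections
        }
    }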
@@ -121,17 +151,23 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
}
if (k >= 2 && j != 2) {
- if (this.o.contains(i)) {
- this.o.remove(i);
- } else {
+ if (!this.o.remove(i)) { // Paper - remove useless contains - credit to JellySquid
+ //this.o.remove(i); // Paper
+ //} else { // Paper
this.f.a(i, this.j(i));
this.g.add(i);
this.k(i);
- for (int l = -1; l <= 1; ++l) {
- for (int i1 = -1; i1 <= 1; ++i1) {
- for (int j1 = -1; j1 <= 1; ++j1) {
- this.h.add(SectionPosition.e(BlockPosition.a(i, i1, j1, l)));
+ // Paper start - reuse x/y/z and only notify valid chunks - Credit to JellySquid (See above method for notes)
+ int x = (int) (i >> 38);
+ int y = (int) ((i << 52) >> 52);
+ int z = (int) ((i << 26) >> 38);
+
+ for (int z2 = (z - 1) >> 4; z2 <= (z + 1) >> 4; ++z2) {
+ for (int x2 = (x - 1) >> 4; x2 <= (x + 1) >> 4; ++x2) {
+ for (int y2 = (y - 1) >> 4; y2 <= (y + 1) >> 4; ++y2) {
+ this.dirty.add(SectionPosition.asLong(x2, y2, z2));
+ // Paper end
}
}
}
@@ -157,9 +193,9 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
return SectionPosition.e(j) == i;
});
} else {
- int j = SectionPosition.c(SectionPosition.b(i));
- int k = SectionPosition.c(SectionPosition.c(i));
- int l = SectionPosition.c(SectionPosition.d(i));
+ int j = (int) (i >> 42) << 4; // Paper - inline
+ int k = (int) (i << 44 >> 44) << 4; // Paper - inline
+ int l = (int) (i << 22 >> 42) << 4; // Paper - inline
for (int i1 = 0; i1 < 16; ++i1) {
for (int j1 = 0; j1 < 16; ++j1) {
@@ -186,7 +222,7 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
NibbleArray nibblearray;
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.a(lightenginelayer, i);
NibbleArray nibblearray1 = (NibbleArray) this.i.remove(i);
@@ -204,48 +240,56 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
longiterator = this.o.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
this.l(i);
}
this.o.clear();
this.j = false;
- ObjectIterator objectiterator = this.i.long2ObjectEntrySet().iterator();
+ ObjectIterator<Long2ObjectMap.Entry<NibbleArray>> objectiterator = Long2ObjectMaps.fastIterator(this.i); // Paper
Entry entry;
long j;
+ NibbleArray test = null; // Paper
+ LongSet propagating = new LongOpenHashSet(); // Paper - credit JellySquid for idea to move this up
while (objectiterator.hasNext()) {
entry = (Entry) objectiterator.next();
j = entry.getLongKey();
- if (this.g(j)) {
+ if ((test = this.updating.getUpdatingOptimized(j)) != null) { // Paper - dont look up nibble twice
nibblearray = (NibbleArray) entry.getValue();
- if (this.f.c(j) != nibblearray) {
+ if (test != nibblearray) { // Paper
this.a(lightenginelayer, j);
this.f.a(j, nibblearray);
this.g.add(j);
}
+ if (!flag1) propagating.add(j); // Paper
+ objectiterator.remove(); // Paper
}
}
this.f.c();
- if (!flag1) {
- longiterator = this.i.keySet().iterator();
+ if (!flag1) {// Paper - diff on change, change propagating add a few lines up
+ longiterator = propagating.iterator(); // Paper
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
- if (this.g(i)) {
- int k = SectionPosition.c(SectionPosition.b(i));
- int l = SectionPosition.c(SectionPosition.c(i));
- int i1 = SectionPosition.c(SectionPosition.d(i));
+ // Paper start
+ i = longiterator.nextLong();
+ if (true) { // don't check hasLight, this iterator is filtered already
+ int secX = (int) (i >> 42);
+ int secY = (int) (i << 44 >> 44);
+ int secZ = (int) (i << 22 >> 42);
+ int k = secX << 4; // baseX
+ int l = secY << 4; // baseY
+ int i1 = secZ << 4; // baseZ
+ // Paper end
EnumDirection[] aenumdirection = LightEngineStorage.k;
int j1 = aenumdirection.length;
for (int k1 = 0; k1 < j1; ++k1) {
EnumDirection enumdirection = aenumdirection[k1];
- long l1 = SectionPosition.a(i, enumdirection);
-
- if (!this.i.containsKey(l1) && this.g(l1)) {
+ long l1 = SectionPosition.getAdjacentFromSectionPos(secX, secY, secZ, enumdirection); // Paper - avoid extra unpacking
+ if (!propagating.contains(l1) && this.g(l1)) { // Paper - use propagating
for (int i2 = 0; i2 < 16; ++i2) {
for (int j2 = 0; j2 < 16; ++j2) {
long k2;
@@ -287,6 +331,8 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
}
}
+ // Paper start - moved above - Credit JellySquid for idea
+ /*
objectiterator = this.i.long2ObjectEntrySet().iterator();
while (objectiterator.hasNext()) {
@@ -295,7 +341,8 @@ public abstract class LightEngineStorage<M extends LightEngineStorageArray<M>> e
if (this.g(j)) {
objectiterator.remove();
}
- }
+ }*/
+ // Paper end
}
}
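The hunk above collapses what used to be three passes over the pending nibble map (the hasLight membership check, the updating/visible swap, and a later re-scan of the key set plus a separate cleanup loop) into a single pass: each section is looked up once, entries are removed as they are applied, and the keys that still need light propagation are collected into the local propagating set that the later loop filters on. A generic sketch of that single-pass pattern using plain JDK collections (the fastutil fast-iterator details are omitted):

    import java.util.HashMap;
    import java.util.HashSet;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.Set;

    // Generic single-pass flush: look up once, apply, remember follow-up work, remove as we go.
    final class SinglePassFlush<V> {
        Set<Long> flush(Map<Long, V> pending, Map<Long, V> live, boolean skipPropagation) {
            Set<Long> propagating = new HashSet<>();
            for (Iterator<Map.Entry<Long, V>> it = pending.entrySet().iterator(); it.hasNext(); ) {
                Map.Entry<Long, V> e = it.next();
                V current = live.get(e.getKey());       // single lookup, reused below (no second membership call)
                if (current == null) continue;          // section is not available for updates yet
                if (current != e.getValue()) {
                    live.put(e.getKey(), e.getValue()); // swap in the queued data
                }
                if (!skipPropagation) propagating.add(e.getKey());
                it.remove();                            // replaces the trailing cleanup loop
            }
            return propagating;
        }
    }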
diff --git a/src/main/java/net/minecraft/server/LightEngineStorageArray.java b/src/main/java/net/minecraft/server/LightEngineStorageArray.java
index 37c44a89f28c44915fcae5a7e2c4797b1c123723..549c2551c2b59730bf53a80f8706d388d96ad932 100644
--- a/src/main/java/net/minecraft/server/LightEngineStorageArray.java
+++ b/src/main/java/net/minecraft/server/LightEngineStorageArray.java
@@ -5,13 +5,18 @@ import javax.annotation.Nullable;
public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<M>> {
- private final long[] b = new long[2];
- private final NibbleArray[] c = new NibbleArray[2];
+ // private final long[] b = new long[2]; // Paper - unused
+ private final NibbleArray[] c = new NibbleArray[]{NibbleArray.EMPTY_NIBBLE_ARRAY, NibbleArray.EMPTY_NIBBLE_ARRAY}; private final NibbleArray[] cache = c; // Paper - OBFHELPER
private boolean d;
protected final com.destroystokyo.paper.util.map.QueuedChangesMapLong2Object<NibbleArray> data; // Paper - avoid copying light data
protected final boolean isVisible; // Paper - avoid copying light data
- java.util.function.Function<Long, NibbleArray> lookup; // Paper - faster branchless lookup
+ // Paper start - faster lookups with less branching, use interface to avoid boxing instead of Function
+ public final NibbleArrayAccess lookup;
+ public interface NibbleArrayAccess {
+ NibbleArray apply(long id);
+ }
+ // Paper end
// Paper start - avoid copying light data
protected LightEngineStorageArray(com.destroystokyo.paper.util.map.QueuedChangesMapLong2Object<NibbleArray> data, boolean isVisible) {
if (isVisible) {
@@ -19,12 +24,14 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
}
this.data = data;
this.isVisible = isVisible;
+ // Paper end - avoid copying light data
+ // Paper start - faster lookups with less branching
if (isVisible) {
lookup = data::getVisibleAsync;
} else {
- lookup = data::getUpdating;
+ lookup = data.getUpdatingMap()::get; // jump straight to the sub map
}
- // Paper end - avoid copying light data
+ // Paper end
this.c();
this.d = true;
}
@@ -34,7 +41,9 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
public void a(long i) {
if (this.isVisible) { throw new IllegalStateException("writing to visible data"); } // Paper - avoid copying light data
NibbleArray updating = this.data.getUpdating(i); // Paper - pool nibbles
- this.data.queueUpdate(i, new NibbleArray().markPoolSafe(updating.getCloneIfSet())); // Paper - avoid copying light data - pool safe clone
+ NibbleArray nibblearray = new NibbleArray().markPoolSafe(updating.getCloneIfSet()); // Paper
+ nibblearray.lightCacheKey = i; // Paper
+ this.data.queueUpdate(i, nibblearray); // Paper - avoid copying light data - pool safe clone
if (updating.cleaner != null) MCUtil.scheduleTask(2, updating.cleaner, "Light Engine Release"); // Paper - delay clean incase anything holding ref was still using it
this.c();
}
@@ -43,34 +52,34 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
return lookup.apply(i) != null; // Paper - avoid copying light data
}
- @Nullable
- public final NibbleArray c(long i) { // Paper - final
- if (this.d) {
- for (int j = 0; j < 2; ++j) {
- if (i == this.b[j]) {
- return this.c[j];
- }
- }
- }
-
- NibbleArray nibblearray = lookup.apply(i); // Paper - avoid copying light data
+ // Paper start - less branching as we know we are using cache and updating
+ public final NibbleArray getUpdatingOptimized(final long i) { // Paper - final
+ final NibbleArray[] cache = this.cache;
+ if (cache[0].lightCacheKey == i) return cache[0];
+ if (cache[1].lightCacheKey == i) return cache[1];
+ final NibbleArray nibblearray = this.lookup.apply(i); // Paper - avoid copying light data
if (nibblearray == null) {
return null;
} else {
- if (this.d) {
- for (int k = 1; k > 0; --k) {
- this.b[k] = this.b[k - 1];
- this.c[k] = this.c[k - 1];
- }
-
- this.b[0] = i;
- this.c[0] = nibblearray;
- }
-
+ cache[1] = cache[0];
+ cache[0] = nibblearray;
return nibblearray;
}
}
+ // Paper end
+
+ @Nullable
+ public final NibbleArray c(final long i) { // Paper - final
+ // Paper start - optimize visible case or missed updating cases
+ if (this.d) {
+ // short circuit to optimized
+ return getUpdatingOptimized(i);
+ }
+
+ return this.lookup.apply(i);
+ // Paper end
+ }
@Nullable
public NibbleArray d(long i) {
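getUpdatingOptimized above is a two-entry most-recently-used cache: the cache key lives on the NibbleArray itself (the lightCacheKey field stamped when the array is queued), and empty slots hold a shared EMPTY_NIBBLE_ARRAY sentinel rather than null, so the hot path is just two field comparisons with no separate key array and no null checks. A minimal sketch of that shape under assumed names (not the Paper classes):

    import java.util.function.LongFunction;

    // Two-slot MRU cache where the key is carried by the cached value itself.
    final class TwoEntryCache {
        static final class Entry { long key = Long.MIN_VALUE; /* nibble payload elided */ }
        private static final Entry EMPTY = new Entry();      // sentinel key assumed never to be looked up
        private final Entry[] cache = { EMPTY, EMPTY };

        Entry get(long key, LongFunction<Entry> backingLookup) {
            if (cache[0].key == key) return cache[0];         // most recent hit
            if (cache[1].key == key) return cache[1];
            Entry found = backingLookup.apply(key);           // fall back to the backing map
            if (found == null) return null;                   // caller is assumed to have set found.key already
            cache[1] = cache[0];                              // slot 0 stays the most recently used
            cache[0] = found;
            return found;
        }
    }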
@@ -80,13 +89,14 @@ public abstract class LightEngineStorageArray<M extends LightEngineStorageArray<
public void a(long i, NibbleArray nibblearray) {
if (this.isVisible) { throw new IllegalStateException("writing to visible data"); } // Paper - avoid copying light data
+ nibblearray.lightCacheKey = i; // Paper
this.data.queueUpdate(i, nibblearray); // Paper - avoid copying light data
}
public void c() {
for (int i = 0; i < 2; ++i) {
- this.b[i] = Long.MAX_VALUE;
- this.c[i] = null;
+ // this.b[i] = Long.MAX_VALUE; // Paper - Unused
+ this.c[i] = NibbleArray.EMPTY_NIBBLE_ARRAY; // Paper
}
}
diff --git a/src/main/java/net/minecraft/server/LightEngineStorageBlock.java b/src/main/java/net/minecraft/server/LightEngineStorageBlock.java
index 292d8c742d3be41ba8ad7fb7f1251dc7f790b62b..5b7b7506f5d1a7578fb54a578891324dfcec3f03 100644
--- a/src/main/java/net/minecraft/server/LightEngineStorageBlock.java
+++ b/src/main/java/net/minecraft/server/LightEngineStorageBlock.java
@@ -10,10 +10,14 @@ public class LightEngineStorageBlock extends LightEngineStorage<LightEngineStora
@Override
protected int d(long i) {
- long j = SectionPosition.e(i);
- NibbleArray nibblearray = this.a(j, false);
-
- return nibblearray == null ? 0 : nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j = (((long) (baseX >> 4) & 4194303L) << 42) | (((long) (baseY >> 4) & 1048575L)) | (((long) (baseZ >> 4) & 4194303L) << 20);
+ NibbleArray nibblearray = this.e_visible.lookup.apply(j);
+ return nibblearray == null ? 0 : nibblearray.a(baseX & 15, baseY & 15, baseZ & 15);
+ // Paper end
}
public static final class a extends LightEngineStorageArray<LightEngineStorageBlock.a> {
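The inline computation of j above is the packed section-position layout written out by hand: section x in the top 22 bits, z in the next 22, y in the low 20, with 4194303L (0x3FFFFF) and 1048575L (0xFFFFF) as the corresponding masks. It is the same layout the (i >> 42) / (i << 44 >> 44) / (i << 22 >> 42) unpacking earlier in this patch assumes. A hypothetical helper restating that math, for reference only:

    // Hypothetical helper, not Paper/vanilla code: packed section-position layout.
    final class PackedSectionPos {
        static long pack(int sx, int sy, int sz) {
            return (((long) sx & 0x3FFFFFL) << 42) | (((long) sz & 0x3FFFFFL) << 20) | ((long) sy & 0xFFFFFL);
        }
        static int unpackX(long p) { return (int) (p >> 42); }          // top 22 bits, sign-extended
        static int unpackY(long p) { return (int) ((p << 44) >> 44); }  // low 20 bits, sign-extended
        static int unpackZ(long p) { return (int) ((p << 22) >> 42); }  // middle 22 bits, sign-extended
        // Block position straight to its containing section long, as the hunk above does inline:
        static long ofBlock(int x, int y, int z) { return pack(x >> 4, y >> 4, z >> 4); }
    }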
diff --git a/src/main/java/net/minecraft/server/LightEngineStorageSky.java b/src/main/java/net/minecraft/server/LightEngineStorageSky.java
index 097f58e9ac3f4096d3b9dad75b6ebe76021fa92c..f744f62c93370d096c113f92ee81a8232c35501d 100644
--- a/src/main/java/net/minecraft/server/LightEngineStorageSky.java
+++ b/src/main/java/net/minecraft/server/LightEngineStorageSky.java
@@ -22,7 +22,12 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
@Override
protected int d(long i) {
- long j = SectionPosition.e(i);
+ // Paper start
+ int baseX = (int) (i >> 38);
+ int baseY = (int) ((i << 52) >> 52);
+ int baseZ = (int) ((i << 26) >> 38);
+ long j = SectionPosition.blockPosAsSectionLong(baseX, baseY, baseZ);
+ // Paper end
int k = SectionPosition.c(j);
synchronized (this.visibleUpdateLock) { // Paper - avoid copying light data
LightEngineStorageSky.a lightenginestoragesky_a = (LightEngineStorageSky.a) this.e_visible; // Paper - avoid copying light data - must be after lock acquire
@@ -43,7 +48,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
}
}
- return nibblearray.a(SectionPosition.b(BlockPosition.b(i)), SectionPosition.b(BlockPosition.c(i)), SectionPosition.b(BlockPosition.d(i)));
+ return nibblearray.a(baseX & 15, (int) ((i << 52) >> 52) & 15, (int) baseZ & 15); // Paper - y changed above
} else {
return 15;
}
@@ -162,7 +167,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
if (k != ((LightEngineStorageSky.a) this.f).b && SectionPosition.c(j) < k) {
NibbleArray nibblearray1;
- while ((nibblearray1 = this.a(j, true)) == null) {
+ while ((nibblearray1 = this.updating.getUpdatingOptimized(j)) == null) { // Paper
j = SectionPosition.a(j, EnumDirection.UP);
}
@@ -186,7 +191,10 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
longiterator = this.m.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
+ int baseX = (int) (i >> 42) << 4; // Paper
+ int baseY = (int) (i << 44 >> 44) << 4; // Paper
+ int baseZ = (int) (i << 22 >> 42) << 4; // Paper
j = this.c(i);
if (j != 2 && !this.n.contains(i) && this.l.add(i)) {
int l;
@@ -197,10 +205,10 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
((LightEngineStorageSky.a) this.f).a(i);
}
- Arrays.fill(this.a(i, true).asBytesPoolSafe(), (byte) -1); // Paper
- k = SectionPosition.c(SectionPosition.b(i));
- l = SectionPosition.c(SectionPosition.c(i));
- int i1 = SectionPosition.c(SectionPosition.d(i));
+ Arrays.fill(this.updating.getUpdatingOptimized(i).asBytesPoolSafe(), (byte) -1); // Paper - use optimized
+ k = baseX; // Paper
+ l = baseY; // Paper
+ int i1 = baseZ; // Paper
EnumDirection[] aenumdirection = LightEngineStorageSky.k;
int j1 = aenumdirection.length;
@@ -209,7 +217,7 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
for (int l1 = 0; l1 < j1; ++l1) {
EnumDirection enumdirection = aenumdirection[l1];
- k1 = SectionPosition.a(i, enumdirection);
+ k1 = SectionPosition.getAdjacentFromBlockPos(baseX, baseY, baseZ, enumdirection); // Paper
if ((this.n.contains(k1) || !this.l.contains(k1) && !this.m.contains(k1)) && this.g(k1)) {
for (int i2 = 0; i2 < 16; ++i2) {
for (int j2 = 0; j2 < 16; ++j2) {
@@ -242,16 +250,16 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
for (int i3 = 0; i3 < 16; ++i3) {
for (j1 = 0; j1 < 16; ++j1) {
- long j3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + i3, SectionPosition.c(SectionPosition.c(i)), SectionPosition.c(SectionPosition.d(i)) + j1);
+ long j3 = BlockPosition.a(baseX + i3, baseY, baseZ + j1); // Paper
- k1 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + i3, SectionPosition.c(SectionPosition.c(i)) - 1, SectionPosition.c(SectionPosition.d(i)) + j1);
+ k1 = BlockPosition.a(baseX + i3, baseY - 1, baseZ + j1); // Paper
lightenginelayer.a(j3, k1, lightenginelayer.b(j3, k1, 0), true);
}
}
} else {
for (k = 0; k < 16; ++k) {
for (l = 0; l < 16; ++l) {
- long k3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + k, SectionPosition.c(SectionPosition.c(i)) + 16 - 1, SectionPosition.c(SectionPosition.d(i)) + l);
+ long k3 = BlockPosition.a(baseX + k, baseY + 16 - 1, baseZ + l); // Paper
lightenginelayer.a(Long.MAX_VALUE, k3, 0, true);
}
@@ -266,11 +274,14 @@ public class LightEngineStorageSky extends LightEngineStorage<LightEngineStorage
longiterator = this.n.iterator();
while (longiterator.hasNext()) {
- i = (Long) longiterator.next();
+ i = longiterator.nextLong(); // Paper
+ int baseX = (int) (i >> 42) << 4; // Paper
+ int baseY = (int) (i << 44 >> 44) << 4; // Paper
+ int baseZ = (int) (i << 22 >> 42) << 4; // Paper
if (this.l.remove(i) && this.g(i)) {
for (j = 0; j < 16; ++j) {
for (k = 0; k < 16; ++k) {
- long l3 = BlockPosition.a(SectionPosition.c(SectionPosition.b(i)) + j, SectionPosition.c(SectionPosition.c(i)) + 16 - 1, SectionPosition.c(SectionPosition.d(i)) + k);
+ long l3 = BlockPosition.a(baseX + j, baseY + 16 - 1, baseZ + k); // Paper
lightenginelayer.a(Long.MAX_VALUE, l3, 15, false);
}
diff --git a/src/main/java/net/minecraft/server/LightEngineThreaded.java b/src/main/java/net/minecraft/server/LightEngineThreaded.java
index 8776799de033f02b0f87e9ea7e4a4ce912e94dd4..604fe85313969bf1ad5b5ef75b46f9a23f79764a 100644
--- a/src/main/java/net/minecraft/server/LightEngineThreaded.java
+++ b/src/main/java/net/minecraft/server/LightEngineThreaded.java
@@ -14,8 +14,98 @@ import org.apache.logging.log4j.Logger;
public class LightEngineThreaded extends LightEngine implements AutoCloseable {
private static final Logger LOGGER = LogManager.getLogger();
- private final ThreadedMailbox<Runnable> b;
- private final ObjectList<Pair<LightEngineThreaded.Update, Runnable>> c = new ObjectArrayList();
+ private final ThreadedMailbox<Runnable> b; ThreadedMailbox<Runnable> mailbox; // Paper
+ // Paper start
+ private static final int MAX_PRIORITIES = PlayerChunkMap.GOLDEN_TICKET + 2;
+
+ public void changePriority(long pair, int currentPriority, int priority) {
+ this.mailbox.queue(() -> {
+ ChunkLightQueue remove = this.queue.buckets[currentPriority].remove(pair);
+ if (remove != null) {
+ ChunkLightQueue existing = this.queue.buckets[priority].put(pair, remove);
+ if (existing != null) {
+ remove.pre.addAll(existing.pre);
+ remove.post.addAll(existing.post);
+ }
+ }
+ });
+ }
+
+ static class ChunkLightQueue {
+ public boolean shouldFastUpdate;
+ java.util.ArrayDeque<Runnable> pre = new java.util.ArrayDeque<Runnable>();
+ java.util.ArrayDeque<Runnable> post = new java.util.ArrayDeque<Runnable>();
+
+ ChunkLightQueue(long chunk) {}
+ }
+
+
+ // Retain the chunks priority level for queued light tasks
+ private static class LightQueue {
+ private int size = 0;
+ private int lowestPriority = MAX_PRIORITIES;
+ private final it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<ChunkLightQueue>[] buckets = new it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap[MAX_PRIORITIES];
+
+ private LightQueue() {
+ for (int i = 0; i < buckets.length; i++) {
+ buckets[i] = new it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<>();
+ }
+ }
+
+ public final void add(long chunkId, int priority, LightEngineThreaded.Update type, Runnable run) {
+ add(chunkId, priority, type, run, false);
+ }
+ public final void add(long chunkId, int priority, LightEngineThreaded.Update type, Runnable run, boolean shouldFastUpdate) {
+ ChunkLightQueue lightQueue = this.buckets[priority].computeIfAbsent(chunkId, ChunkLightQueue::new);
+ this.size++;
+ if (type == Update.PRE_UPDATE) {
+ lightQueue.pre.add(run);
+ } else {
+ lightQueue.post.add(run);
+ }
+ if (shouldFastUpdate) {
+ lightQueue.shouldFastUpdate = true;
+ }
+
+ if (this.lowestPriority > priority) {
+ this.lowestPriority = priority;
+ }
+ }
+
+ public final boolean isEmpty() {
+ return this.size == 0;
+ }
+
+ public final int size() {
+ return this.size;
+ }
+
+ public boolean poll(java.util.List<Runnable> pre, java.util.List<Runnable> post) {
+ boolean hasWork = false;
+ it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<ChunkLightQueue>[] buckets = this.buckets;
+ while (lowestPriority < MAX_PRIORITIES && !isEmpty()) {
+ it.unimi.dsi.fastutil.longs.Long2ObjectLinkedOpenHashMap<ChunkLightQueue> bucket = buckets[lowestPriority];
+ if (bucket.isEmpty()) {
+ lowestPriority++;
+ continue;
+ }
+ ChunkLightQueue queue = bucket.removeFirst();
+ this.size -= queue.pre.size() + queue.post.size();
+ pre.addAll(queue.pre);
+ post.addAll(queue.post);
+ queue.pre.clear();
+ queue.post.clear();
+ hasWork = true;
+ if (queue.shouldFastUpdate) {
+ return true;
+ }
+ }
+ return hasWork;
+ }
+ }
+
+ private final LightQueue queue = new LightQueue();
+ // Paper end
private final PlayerChunkMap d;
private final Mailbox<ChunkTaskQueueSorter.a<Runnable>> e;
private volatile int f = 5;
@@ -25,7 +115,7 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
super(ilightaccess, true, flag);
this.d = playerchunkmap;
this.e = mailbox;
- this.b = threadedmailbox;
+ this.mailbox = this.b = threadedmailbox; // Paper
}
public void close() {}
@@ -111,8 +201,11 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
private void a(int i, int j, IntSupplier intsupplier, LightEngineThreaded.Update lightenginethreaded_update, Runnable runnable) {
this.e.a(ChunkTaskQueueSorter.a(() -> { // Paper - decompile error
- this.c.add(Pair.of(lightenginethreaded_update, runnable));
- if (this.c.size() >= this.f) {
+ // Paper start
+ int priority = intsupplier.getAsInt();
+ this.queue.add(ChunkCoordIntPair.pair(i, j), priority, lightenginethreaded_update, runnable); // Paper
+ if (priority <= 25) { // don't auto kick off unless priority
+ // Paper end
this.b();
}
@@ -134,7 +227,14 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
ChunkCoordIntPair chunkcoordintpair = ichunkaccess.getPos();
ichunkaccess.b(false);
- this.a(chunkcoordintpair.x, chunkcoordintpair.z, LightEngineThreaded.Update.PRE_UPDATE, SystemUtils.a(() -> {
+ // Paper start
+ long pair = chunkcoordintpair.pair();
+ CompletableFuture<IChunkAccess> future = new CompletableFuture<>();
+ IntSupplier prioritySupplier1 = d.getPrioritySupplier(pair);
+ IntSupplier prioritySupplier = flag ? () -> Math.max(1, prioritySupplier1.getAsInt() - 10) : prioritySupplier1;
+ this.e.a(ChunkTaskQueueSorter.a(() -> {
+ this.queue.add(pair, prioritySupplier.getAsInt(), LightEngineThreaded.Update.PRE_UPDATE, SystemUtils.a(() -> {
+ // Paper end
ChunkSection[] achunksection = ichunkaccess.getSections();
for (int i = 0; i < 16; ++i) {
@@ -155,52 +255,45 @@ public class LightEngineThreaded extends LightEngine implements AutoCloseable {
this.d.c(chunkcoordintpair);
}, () -> {
return "lightChunk " + chunkcoordintpair + " " + flag;
+ // Paper start - merge the 2 together
}));
- return CompletableFuture.supplyAsync(() -> {
+
+ this.queue.add(pair, prioritySupplier.getAsInt(), LightEngineThreaded.Update.POST_UPDATE, () -> {
ichunkaccess.b(true);
super.b(chunkcoordintpair, false);
- return ichunkaccess;
- }, (runnable) -> {
- this.a(chunkcoordintpair.x, chunkcoordintpair.z, LightEngineThreaded.Update.POST_UPDATE, runnable);
+ // Paper start
+ future.complete(ichunkaccess);
});
+ queueUpdate(); // run queue now
+ }, pair, prioritySupplier));
+ return future;
+ // Paper end
}
public void queueUpdate() {
- if ((!this.c.isEmpty() || super.a()) && this.g.compareAndSet(false, true)) {
+ if ((!this.queue.isEmpty() || super.a()) && this.g.compareAndSet(false, true)) { // Paper
this.b.a((() -> { // Paper - decompile error
this.b();
this.g.set(false);
+ queueUpdate(); // Paper - if we still have work to do, do it!
}));
}
}
+ // Paper start - replace impl
private void b() {
- int i = Math.min(this.c.size(), this.f);
- ObjectListIterator<Pair<LightEngineThreaded.Update, Runnable>> objectlistiterator = this.c.iterator();
-
- Pair pair;
- int j;
-
- for (j = 0; objectlistiterator.hasNext() && j < i; ++j) {
- pair = (Pair) objectlistiterator.next();
- if (pair.getFirst() == LightEngineThreaded.Update.PRE_UPDATE) {
- ((Runnable) pair.getSecond()).run();
- }
- }
-
- objectlistiterator.back(j);
- super.a(Integer.MAX_VALUE, true, true);
-
- for (j = 0; objectlistiterator.hasNext() && j < i; ++j) {
- pair = (Pair) objectlistiterator.next();
- if (pair.getFirst() == LightEngineThreaded.Update.POST_UPDATE) {
- ((Runnable) pair.getSecond()).run();
- }
-
- objectlistiterator.remove();
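+ // Paper - process up to four queued batches per run: pre-update tasks, then light propagation, then post-update tasks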
+ java.util.List<Runnable> pre = new java.util.ArrayList<>();
+ java.util.List<Runnable> post = new java.util.ArrayList<>();
+ int i = Math.min(queue.size(), 4);
+ while (i-- > 0 && queue.poll(pre, post)) {
+ pre.forEach(Runnable::run);
+ pre.clear();
+ super.a(Integer.MAX_VALUE, true, true);
+ post.forEach(Runnable::run);
+ post.clear();
}
-
+ // Paper end
}
public void a(int i) {
diff --git a/src/main/java/net/minecraft/server/NibbleArray.java b/src/main/java/net/minecraft/server/NibbleArray.java
index 8cedfdd820cc02a76607b53e0b054fc74654f907..a9795394c9b17f9f0ce4c4f9c8f51a48e950418e 100644
--- a/src/main/java/net/minecraft/server/NibbleArray.java
+++ b/src/main/java/net/minecraft/server/NibbleArray.java
@@ -8,6 +8,13 @@ import javax.annotation.Nullable;
public class NibbleArray {
// Paper start
+ static final NibbleArray EMPTY_NIBBLE_ARRAY = new NibbleArray() {
+ @Override
+ public byte[] asBytes() {
+ throw new IllegalStateException();
+ }
+ };
+ long lightCacheKey = Long.MIN_VALUE;
public static byte[] EMPTY_NIBBLE = new byte[2048];
private static final int nibbleBucketSizeMultiplier = Integer.getInteger("Paper.nibbleBucketSize", 3072);
private static final int maxPoolSize = Integer.getInteger("Paper.maxNibblePoolSize", (int) Math.min(6, Math.max(1, Runtime.getRuntime().maxMemory() / 1024 / 1024 / 1024)) * (nibbleBucketSizeMultiplier * 8));
diff --git a/src/main/java/net/minecraft/server/PlayerChunk.java b/src/main/java/net/minecraft/server/PlayerChunk.java
index d04c8cdcc16ea7f0a49dc5720ddc8e47be0b634c..9c1be2e2d42701a96e8c40b377d13f5c0e92bf06 100644
--- a/src/main/java/net/minecraft/server/PlayerChunk.java
+++ b/src/main/java/net/minecraft/server/PlayerChunk.java
@@ -725,6 +725,7 @@ public class PlayerChunk {
ioPriority = com.destroystokyo.paper.io.PrioritizedTaskQueue.HIGH_PRIORITY;
}
chunkMap.world.asyncChunkTaskManager.raisePriority(location.x, location.z, ioPriority);
+ chunkMap.world.getChunkProvider().getLightEngine().changePriority(location.pair(), getCurrentPriority(), priority);
}
if (getCurrentPriority() != priority) {
this.w.a(this.location, this::getCurrentPriority, priority, this::setPriority); // use preferred priority
diff --git a/src/main/java/net/minecraft/server/PlayerChunkMap.java b/src/main/java/net/minecraft/server/PlayerChunkMap.java
index 394cea57b3871c2e11d1f7e0e9546df64ff3bafe..8abf276a325cbc3a863fb89276526790c4de2692 100644
--- a/src/main/java/net/minecraft/server/PlayerChunkMap.java
+++ b/src/main/java/net/minecraft/server/PlayerChunkMap.java
@@ -629,6 +629,7 @@ public class PlayerChunkMap extends IChunkLoader implements PlayerChunk.d {
// Paper end
}
+ protected final IntSupplier getPrioritySupplier(long i) { return c(i); } // Paper - OBFHELPER
protected IntSupplier c(long i) {
return () -> {
PlayerChunk playerchunk = this.getVisibleChunk(i);
diff --git a/src/main/java/net/minecraft/server/ThreadedMailbox.java b/src/main/java/net/minecraft/server/ThreadedMailbox.java
index 8082569022384a3ba03fb4a6f1ae12b443598dcb..3db8073f5182fe4dd4c710b202afc323dfb9b2d2 100644
--- a/src/main/java/net/minecraft/server/ThreadedMailbox.java
+++ b/src/main/java/net/minecraft/server/ThreadedMailbox.java
@@ -93,7 +93,8 @@ public class ThreadedMailbox<T> implements Mailbox<T>, AutoCloseable, Runnable {
}
- @Override
+
+ public void queue(T t0) { a(t0); } @Override // Paper - OBFHELPER
public void a(T t0) {
this.a.a(t0);
this.f();
diff --git a/src/main/java/net/minecraft/server/WorldServer.java b/src/main/java/net/minecraft/server/WorldServer.java
index 49da7352dcee6c352904cabe8b5db0152c427029..46e261b6513b52f5562387b661de4a409ea515a3 100644
--- a/src/main/java/net/minecraft/server/WorldServer.java
+++ b/src/main/java/net/minecraft/server/WorldServer.java
@@ -685,6 +685,7 @@ public class WorldServer extends World {
}
gameprofilerfiller.exit();
timings.chunkTicksBlocks.stopTiming(); // Paper
+ getChunkProvider().getLightEngine().queueUpdate(); // Paper
// Paper end
}
}