Mirror of https://github.com/PaperMC/Paper.git, synchronized 2024-11-15 20:40:07 +01:00
Paper/Spigot-Server-Patches/0491-Optimize-Bit-Operations-by-inlining.patch

221 lines · 10 KiB

Latest commit: Optimize Light Engine (2020-06-05 07:25:11 +02:00)

Massive update to light to improve performance and chunk loading/generation.

1) Massive bit packing/unpacking optimizations and inlining. A lot of the cost comes from constantly packing and unpacking bits. We now inline most bit operations and re-use the base x/y/z bits in many places, so the CPU can do all the math at once instead of jumping in and out of function calls. This much logic is also likely over the JVM's inlining limit for JIT.
2) Applied a few of JellySquid's Phosphor mod optimizations, such as not notifying neighbor chunks when a neighbor chunk doesn't need to be notified, and reducing hasLight checks when initializing light, plus probably some more; they are tagged JellySquid where Phosphor influence was used.
3) Optimize hot-path accesses to the updating chunk to have less branching.
4) Optimize getBlock accesses to have less branching and less unpacking.
5) Add a separate urgent bucket for chunk light tasks. These tasks always cut in line over non-blocking light tasks.
6) Retain chunk priority while light tasks are enqueued. If a task comes in at high priority while the queue is full of lower-priority tasks, it previously was simply appended at the end; now it can cut in line to the front. This applies to both urgent and non-urgent tasks.
7) Buffer non-urgent tasks even if queueUpdate is called multiple times, to improve efficiency.
8) Fix an NPE risk that crashed the server when getting nibble data.

Fixes #3489
Fixes #3363
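The queueing changes in points 5 and 6 above amount to a small two-bucket scheduler: urgent tasks always drain before non-blocking ones, and within a bucket a task is ordered by its retained chunk priority instead of simply being appended. The following is a minimal conceptual sketch of that idea only; the names LightTaskQueue, enqueue and poll are made up for illustration and are not Paper's actual light-engine code.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Conceptual sketch: an "urgent" bucket that always drains first, plus
    // priority ordering inside each bucket so a high-priority task can cut
    // ahead of already-queued low-priority tasks.
    final class LightTaskQueue {
        static final class Task {
            final int priority;   // lower value = more important (mirrors chunk priority)
            final Runnable work;
            Task(int priority, Runnable work) { this.priority = priority; this.work = work; }
        }

        private final PriorityQueue<Task> urgent =
            new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.priority));
        private final PriorityQueue<Task> normal =
            new PriorityQueue<>(Comparator.comparingInt((Task t) -> t.priority));

        void enqueue(Task task, boolean isUrgent) {
            // Ordered insert instead of append-at-tail keeps the task's priority meaningful.
            (isUrgent ? urgent : normal).add(task);
        }

        Task poll() {
            Task next = urgent.poll();   // urgent bucket cuts in line over non-blocking tasks
            return next != null ? next : normal.poll();
        }
    }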
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Aikar <aikar@aikar.co>
Date: Thu, 4 Jun 2020 02:24:49 -0400
Subject: [PATCH] Optimize Bit Operations by inlining
Inline bit operations and reduce instruction count to make these hot
operations faster
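For context on the constants that appear in the hunks below: the packed BlockPosition long stores X in bits 38..63 (26 bits), Z in bits 12..37 (26 bits) and Y in bits 0..11 (12 bits), which is what the inlined shifts (38, 52, 26, 12) and masks (67108863, 4095) encode. The following standalone sketch of the pack/unpack round trip assumes that layout; PackedPos is a hypothetical helper written for illustration, not part of the server source.

    // Standalone sketch of the packed layout this patch inlines:
    // X in bits 38..63 (26 bits), Z in bits 12..37 (26 bits), Y in bits 0..11 (12 bits).
    final class PackedPos {
        static long pack(int x, int y, int z) {
            return (((long) x & 67108863L) << 38) | ((long) y & 4095L) | (((long) z & 67108863L) << 12);
        }
        static int unpackX(long packed) { return (int) (packed >> 38); }        // sign-extends the 26 X bits
        static int unpackY(long packed) { return (int) (packed << 52 >> 52); }  // sign-extends the 12 Y bits
        static int unpackZ(long packed) { return (int) (packed << 26 >> 38); }  // sign-extends the 26 Z bits

        public static void main(String[] args) {
            long key = pack(-30000000, 255, 29999999);
            System.out.println(unpackX(key) + " " + unpackY(key) + " " + unpackZ(key)); // -30000000 255 29999999
        }
    }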
diff --git a/src/main/java/net/minecraft/server/BlockPosition.java b/src/main/java/net/minecraft/server/BlockPosition.java
index 52ae74571b0d671a294900caedaa400b62d4ea34..2d887af902a33b0e28d8f0a6ac2e59c815a7856e 100644
--- a/src/main/java/net/minecraft/server/BlockPosition.java
+++ b/src/main/java/net/minecraft/server/BlockPosition.java
@@ -25,14 +25,16 @@ public class BlockPosition extends BaseBlockPosition {
}).stable();
private static final Logger LOGGER = LogManager.getLogger();
public static final BlockPosition ZERO = new BlockPosition(0, 0, 0);
- private static final int f = 1 + MathHelper.f(MathHelper.c(30000000));
- private static final int g = BlockPosition.f;
- private static final int h = 64 - BlockPosition.f - BlockPosition.g;
- private static final long i = (1L << BlockPosition.f) - 1L;
- private static final long j = (1L << BlockPosition.h) - 1L;
- private static final long k = (1L << BlockPosition.g) - 1L;
- private static final int l = BlockPosition.h;
- private static final int m = BlockPosition.h + BlockPosition.g;
+ // Paper start - static constants
+ private static final int f = 26;
+ private static final int g = 26;
+ private static final int h = 12;
+ private static final long i = 67108863;
+ private static final long j = 4095;
+ private static final long k = 67108863;
+ private static final int l = 12;
+ private static final int m = 38;
+ // Paper end
public BlockPosition(int i, int j, int k) {
super(i, j, k);
@@ -54,28 +56,29 @@ public class BlockPosition extends BaseBlockPosition {
this(baseblockposition.getX(), baseblockposition.getY(), baseblockposition.getZ());
}
+ public static long getAdjacent(int baseX, int baseY, int baseZ, EnumDirection enumdirection) { return asLong(baseX + enumdirection.getAdjacentX(), baseY + enumdirection.getAdjacentY(), baseZ + enumdirection.getAdjacentZ()); } // Paper
public static long a(long i, EnumDirection enumdirection) {
return a(i, enumdirection.getAdjacentX(), enumdirection.getAdjacentY(), enumdirection.getAdjacentZ());
}
public static long a(long i, int j, int k, int l) {
- return a(b(i) + j, c(i) + k, d(i) + l);
+ return a((int) (i >> 38) + j, (int) ((i << 52) >> 52) + k, (int) ((i << 26) >> 38) + l); // Paper - simplify/inline
}
public static int b(long i) {
- return (int) (i << 64 - BlockPosition.m - BlockPosition.f >> 64 - BlockPosition.f);
+ return (int) (i >> 38); // Paper - simplify/inline
}
public static int c(long i) {
- return (int) (i << 64 - BlockPosition.h >> 64 - BlockPosition.h);
+ return (int) ((i << 52) >> 52); // Paper - simplify/inline
}
public static int d(long i) {
- return (int) (i << 64 - BlockPosition.l - BlockPosition.g >> 64 - BlockPosition.g);
+ return (int) ((i << 26) >> 38); // Paper - simplify/inline
}
public static BlockPosition fromLong(long i) {
- return new BlockPosition(b(i), c(i), d(i));
+ return new BlockPosition((int) (i >> 38), (int) ((i << 52) >> 52), (int) ((i << 26) >> 38)); // Paper - simplify/inline
}
public long asLong() {
@@ -84,12 +87,7 @@ public class BlockPosition extends BaseBlockPosition {
public static long asLong(int x, int y, int z) { return a(x, y, z); } // Paper - OBFHELPER
public static long a(int i, int j, int k) {
- long l = 0L;
-
- l |= ((long) i & BlockPosition.i) << BlockPosition.m;
- l |= ((long) j & BlockPosition.j) << 0;
- l |= ((long) k & BlockPosition.k) << BlockPosition.l;
- return l;
+ return (((long) i & (long) 67108863) << 38) | (((long) j & (long) 4095)) | (((long) k & (long) 67108863) << 12); // Paper - inline constants and simplify
}
public static long f(long i) {
diff --git a/src/main/java/net/minecraft/server/SectionPosition.java b/src/main/java/net/minecraft/server/SectionPosition.java
index 8d325ec19c5d954785732d8e295b4ba150f740ca..f95925f1c5d091f1a129d0437bb6e175c6ac080f 100644
--- a/src/main/java/net/minecraft/server/SectionPosition.java
+++ b/src/main/java/net/minecraft/server/SectionPosition.java
@@ -16,7 +16,7 @@ public class SectionPosition extends BaseBlockPosition {
}
public static SectionPosition a(BlockPosition blockposition) {
- return new SectionPosition(a(blockposition.getX()), a(blockposition.getY()), a(blockposition.getZ()));
+ return new SectionPosition(blockposition.getX() >> 4, blockposition.getY() >> 4, blockposition.getZ() >> 4); // Paper
}
public static SectionPosition a(ChunkCoordIntPair chunkcoordintpair, int i) {
@@ -28,15 +28,23 @@ public class SectionPosition extends BaseBlockPosition {
}
public static SectionPosition a(long i) {
- return new SectionPosition(b(i), c(i), d(i));
+ return new SectionPosition((int) (i >> 42), (int) (i << 44 >> 44), (int) (i << 22 >> 42)); // Paper
}
public static long a(long i, EnumDirection enumdirection) {
return a(i, enumdirection.getAdjacentX(), enumdirection.getAdjacentY(), enumdirection.getAdjacentZ());
}
+ // Paper start
+ public static long getAdjacentFromBlockPos(int x, int y, int z, EnumDirection enumdirection) {
+ return (((long) ((x >> 4) + enumdirection.getAdjacentX()) & 4194303L) << 42) | (((long) ((y >> 4) + enumdirection.getAdjacentY()) & 1048575L)) | (((long) ((z >> 4) + enumdirection.getAdjacentZ()) & 4194303L) << 20);
+ }
+ public static long getAdjacentFromSectionPos(int x, int y, int z, EnumDirection enumdirection) {
+ return (((long) (x + enumdirection.getAdjacentX()) & 4194303L) << 42) | (((long) ((y) + enumdirection.getAdjacentY()) & 1048575L)) | (((long) (z + enumdirection.getAdjacentZ()) & 4194303L) << 20);
+ }
+ // Paper end
public static long a(long i, int j, int k, int l) {
- return b(b(i) + j, c(i) + k, d(i) + l);
+ return (((long) ((int) (i >> 42) + j) & 4194303L) << 42) | (((long) ((int) (i << 44 >> 44) + k) & 1048575L)) | (((long) ((int) (i << 22 >> 42) + l) & 4194303L) << 20); // Simplify to reduce instruction count
}
public static int a(int i) {
@@ -48,11 +56,7 @@ public class SectionPosition extends BaseBlockPosition {
}
public static short b(BlockPosition blockposition) {
- int i = b(blockposition.getX());
- int j = b(blockposition.getY());
- int k = b(blockposition.getZ());
-
- return (short) (i << 8 | k << 4 | j << 0);
+ return (short) ((blockposition.getX() & 15) << 8 | (blockposition.getZ() & 15) << 4 | blockposition.getY() & 15); // Paper - simplify/inline
}
public static int a(short short0) {
@@ -111,16 +115,16 @@ public class SectionPosition extends BaseBlockPosition {
return this.getZ();
}
- public int d() {
- return this.a() << 4;
+ public final int d() { // Paper
+ return this.getX() << 4; // Paper
}
- public int e() {
- return this.b() << 4;
+ public final int e() { // Paper
+ return this.getY() << 4; // Paper
}
- public int f() {
- return this.c() << 4;
+ public final int f() { // Paper
+ return this.getZ() << 4; // Paper
}
public int g() {
@@ -135,8 +139,10 @@ public class SectionPosition extends BaseBlockPosition {
return (this.c() << 4) + 15;
}
+ public static long blockToSection(long i) { return e(i); } // Paper - OBFHELPER
public static long e(long i) {
- return b(a(BlockPosition.b(i)), a(BlockPosition.c(i)), a(BlockPosition.d(i)));
+ // b(a(BlockPosition.b(i)), a(BlockPosition.c(i)), a(BlockPosition.d(i)));
+ return (((long) (int) (i >> 42) & 4194303L) << 42) | (((long) (int) ((i << 52) >> 56) & 1048575L)) | (((long) (int) ((i << 26) >> 42) & 4194303L) << 20); // Simplify to reduce instruction count
}
public static long f(long i) {
@@ -157,17 +163,18 @@ public class SectionPosition extends BaseBlockPosition {
return new ChunkCoordIntPair(this.a(), this.c());
}
+ // Paper start
+ public static long blockPosAsSectionLong(int i, int j, int k) {
+ return (((long) (i >> 4) & 4194303L) << 42) | (((long) (j >> 4) & 1048575L)) | (((long) (k >> 4) & 4194303L) << 20);
+ }
+ // Paper end
+ public static long asLong(int i, int j, int k) { return b(i, j, k); } // Paper - OBFHELPER
public static long b(int i, int j, int k) {
- long l = 0L;
-
- l |= ((long) i & 4194303L) << 42;
- l |= ((long) j & 1048575L) << 0;
- l |= ((long) k & 4194303L) << 20;
- return l;
+ return (((long) i & 4194303L) << 42) | (((long) j & 1048575L)) | (((long) k & 4194303L) << 20); // Paper - Simplify to reduce instruction count
}
public long s() {
- return b(this.a(), this.b(), this.c());
+ return (((long) getX() & 4194303L) << 42) | (((long) getY() & 1048575L)) | (((long) getZ() & 4194303L) << 20); // Paper - Simplify to reduce instruction count
}
public Stream<BlockPosition> t() {
@@ -175,18 +182,11 @@ public class SectionPosition extends BaseBlockPosition {
}
public static Stream<SectionPosition> a(SectionPosition sectionposition, int i) {
- int j = sectionposition.a();
- int k = sectionposition.b();
- int l = sectionposition.c();
-
- return a(j - i, k - i, l - i, j + i, k + i, l + i);
+ return a(sectionposition.getX() - i, sectionposition.getY() - i, sectionposition.getZ() - i, sectionposition.getX() + i, sectionposition.getY() + i, sectionposition.getZ() + i); // Paper - simplify/inline
}
public static Stream<SectionPosition> b(ChunkCoordIntPair chunkcoordintpair, int i) {
- int j = chunkcoordintpair.x;
- int k = chunkcoordintpair.z;
-
- return a(j - i, 0, k - i, j + i, 15, k + i);
+ return a(chunkcoordintpair.x - i, 0, chunkcoordintpair.z - i, chunkcoordintpair.x + i, 15, chunkcoordintpair.z + i); // Paper - simplify/inline
}
public static Stream<SectionPosition> a(final int i, final int j, final int k, final int l, final int i1, final int j1) {