7d10cdea03
This PR contains all of Tuinity's patches. Very notable ones are:

- Highly optimised collisions
- Optimised entity lookups by bounding box (Mojang introduced regressions in 1.17; this brings performance back to 1.16)
- Starlight https://github.com/PaperMC/Starlight
- Rewritten dataconverter system https://github.com/PaperMC/DataConverter
- Random block ticking optimisation (wrongly dropped from Paper 1.17)
- Chunk ticking optimisations
- Anything else I've forgotten in the 60 or so patches

If you are a previous Tuinity user, your config will not migrate; you must do it yourself. The config options have simply been moved into paper.yml, so it will be an easy migration. However, please note that the chunk loading options in tuinity.yml are NOT compatible with the options in paper.yml.

* Port tuinity, initial patchset

* Update gradle to 7.2

jmp said it fixes rebuildpatches not working for me. it fucking better

* Completely clean apply

* Remove tuinity config, add per player api patch

* Remove paper reobf mappings patch

* Properly update gradlew

* Force clean rebuild

* Mark fixups

Comments and ATs still need to be done

* grep -r "Tuinity"

* Fixup

* Ensure gameprofile lastaccess is written only under the state lock

* update URL for dataconverter

* Only clean rebuild tuinity patches

Might fix merge conflicts

* Use UTF-8 for gradlew

* Clean rb patches again

* Convert block ids used as item ids

Neither the pre-1.13 converters nor DFU handled these cases, because by the time they were written the game no longer considered these ids valid - they would be air. Because of this, some worlds have logspam, since only DataConverter (not DFU or the legacy converters) warns when an invalid id is seen.

While quite a few of these ids do now need to be considered air, quite a lot do not. So it makes sense to add conversion for these items instead of simply suppressing or ignoring the logs. I've now added id -> string conversion for all block ids that could be used as items and that existed in the game before 1.7.10 (I have no interest in tracking down the exact version block ids stopped working), sourced from https://minecraft-ids.grahamedgecombe.com/ - an illustrative sketch of this kind of mapping follows these commit notes. Items that did not directly convert to new items are instead converted to air: stems, wheat crops, piston head, tripwire wire block.

* Fix LightPopulated parsing in V1466

The DFU code was checking if the number existed, not if it didn't exist. I misread the original code.

* Always parse protochunk light sources unless it is marked as non-lit

Chunks not marked as lit will always go through the light engine, so they should always have their block sources parsed.

* Update custom names to JSON for players

Missed this fix from CB, as it was inside the DataFixers class. I decided to double check all of the CB changes again: DataFixers.java was the only area I missed, as I had inspected all datafixer diffs and implemented them all into DataConverter. I also checked Bootstrap.java again and re-evaluated their changes. I had previously done this, but determined that they were all bad.

The change to make the standing_sign block map to the oak_sign block in V1450 is bad, because that's not the item id V1450 accepts. Only in 1.14 did oak_sign even exist, and as expected there is a converter to rename all existing sign items/blocks.

The fix to register the portal block under id 1440 is useless, as the flattening logic will default to the lowest registered id - which is the exact blockstate that CB registers into 1440. So it just doesn't do anything.
The extra item ids in the id -> string converter are already added, but I found this from EMC originally.

The change for the spawn egg id 23 -> Arrow is just wrong: that id DOES correspond to TippedArrow, NOT Arrow. As expected, the spawn egg already has a dedicated mapping for Arrow, which is id 10 - Arrow's entity id.

I also ported a fix for the cooked_fished id update. This doesn't really matter since there is already a dataconverter to fix it, but the game didn't accept cooked_fished at the time, so I see no harm.

* Review all converters and walkers

- Refactor V99 to have helper methods for defining entity/tile entity types
- Automatically namespace all ids that should be namespaced. While vanilla never saved non-namespaced data for things that are namespaced, plugins/users might have.
- Synchronised the identity ensure map in HelperBlockFlatteningV1450
- Code style consistency
- Add missing log warning in V102 for ITEM_NAME type conversion
- Use getBoolean instead of getByte
- Use ConverterAbstractEntityRename for the V143 TippedArrow -> Arrow rename, as it will affect the ENTITY_NAME type
- Always set isVillager to false in V502 for Zombie
- Register V808's converter under subversion 1 like DFU
- Register a breakpoint for V1.17.1. In the future, all final versions of major releases will have a breakpoint so that the work required to determine whether a converter needs a breakpoint is minimal
- Validate that a dataconverter is only registered for a version that is registered
- ConverterFlattenTileEntity is actually ConverterFlattenEntity. It even registered the converters under TILE_ENTITY instead of ENTITY.
- Fix id comparison in the V1492 STRUCTURE_FEATURE renamer
- Use ConverterAbstractStatsRename for the V1510 stats renamer. At the time I had written that class, the abstract renamer didn't exist.
- Ensure OwnerUUID is at least set to an empty string in V1904 if the ocelot is converted to a cat (this is likely so that it retains its collar)
- Use generic read/write for Records in V1946. Records is actually a list, not a map, so reading it as a map was invalid.

* Always set light to zero when propagating decrease

This fixes an almost infinite loop where light values would be spam-queued on a very small subset of blocks. This also likely fixes the memory issues people were seeing.

* re-organize patches

* Apply and fix conflicts

* Revert some patches

getChunkAt retains chunks so that plugins don't spam loads

revert: the mc-4 fix will remain unless issues pop up

* Shuffle iterated chunks if per player is not enabled

Can help with some mob spawning stacking up at locations

* Make per player default, migrate all configs

* Adjust comments in fixups

* Rework config for player chunk loader

The old config is not compatible. Move all configs to be under `settings` in paper.yml. The player chunk loader has been modified to load chunks less aggressively, but to send chunks at higher rates compared to Tuinity. There are new config entries to tune this behavior.
* Add back old constructor to CompressionEncoder/Decoder (fixes Tuinity #358)

* Raise chunk loading default limits

* Reduce worldgen thread workers for lower core count cpus

* Raise limits for chunk loading config

Also place it under `chunk-loading`

* Disable max chunk send rate by default

* Fix conflicts and rebuild patches

* Drop default send rate again

Appears to still be causing problems for no known reason

* Raise chunk send limits to 100 per player

While a low limit fixes ping issues for some people, most people do not suffer from this issue and thus should not suffer from an extremely slow load-in rate.

* Rebase part 1

Autosquash the fixups

* Move not implemented up

* Fixup mc-dev fixes

Missed this one

* Rebase per player viewdistance api into the original api patch

* Remove old light engine patch, part 1

The prioritisation must be kept from it, so that part has been rebased into the priority patch. Part 2 will deal with rebasing all of the patches _after_ it.

* Rebase remaining patches for old light patch removal

* Remove other mid tick patch

* Remove Optimize-PlayerChunkMap-memory-use-for-visibleChunks.patch

Replaced by `Do not copy visible chunks`

* Revert AT for Vec3i setX/Y/Z

The class is immutable; set should not be exposed.

* Remove old IntegerUtil class

* Replace old CraftChunk#getEntities patch

* Remove import for SWMRNibbleArray in ChunkAccess

* Finished merge checklist

* Remove ensureTickThread impl in urgency patch

Co-authored-by: Spottedleaf <Spottedleaf@users.noreply.github.com>
Co-authored-by: Jason Penilla <11360596+jpenilla@users.noreply.github.com>
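For readers unfamiliar with the legacy numeric ids mentioned in the "Convert block ids used as item ids" note above, here is a minimal, self-contained sketch of the idea. It shows only a handful of illustrative entries; the class and method names are hypothetical and this is not DataConverter's actual table or API.

```java
// Illustrative sketch only - not DataConverter's real table or API.
// A few legacy numeric ids mapped to namespaced item ids; block-only ids
// with no item form (piston head, tripwire) fall back to minecraft:air.
import java.util.HashMap;
import java.util.Map;

public final class LegacyItemIdSketch {
    private static final Map<Integer, String> BLOCK_ID_AS_ITEM = new HashMap<>();
    static {
        BLOCK_ID_AS_ITEM.put(1, "minecraft:stone");
        BLOCK_ID_AS_ITEM.put(3, "minecraft:dirt");
        BLOCK_ID_AS_ITEM.put(34, "minecraft:air");  // piston head: no item form
        BLOCK_ID_AS_ITEM.put(132, "minecraft:air"); // tripwire: no item form
    }

    static String convertItemId(int legacyId) {
        // Unknown block-as-item ids become air, as described in the note above.
        return BLOCK_ID_AS_ITEM.getOrDefault(legacyId, "minecraft:air");
    }

    public static void main(String[] args) {
        System.out.println(convertItemId(3));   // minecraft:dirt
        System.out.println(convertItemId(132)); // minecraft:air
    }
}
```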
From 0000000000000000000000000000000000000000 Mon Sep 17 00:00:00 2001
From: Aikar <aikar@aikar.co>
Date: Fri, 15 Feb 2019 01:08:19 -0500
Subject: [PATCH] Allow Saving of Oversized Chunks

Note 1.17 update: With 1.17, Entities are no longer stored in chunk slices, so this needs updating!

The Minecraft World Region File format has a hard cap of 1MB per chunk.
This is because the header of the file format only allocates a single byte
for the sector count, meaning a maximum of 255 sectors at 4KB per sector.
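As a quick sanity check on that cap, here is a minimal standalone sketch of the arithmetic (not part of the patch):

```java
// Standalone arithmetic sketch: why a region-file chunk tops out at ~1 MB.
public final class RegionCapMath {
    public static void main(String[] args) {
        final int sectorSize = 4096;  // each sector in an .mca region file is 4 KiB
        final int maxSectors = 255;   // the sector count is stored in a single byte
        final int maxChunkBytes = sectorSize * maxSectors;
        System.out.println(maxChunkBytes); // 1044480 bytes, just under 1 MiB
    }
}
```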
This limit can be reached fairly easily with books, resulting in the chunk being unable
to save to the world. Worse, nothing was printed when this occurred, and a chunk
rollback was silently performed on the next load.

This leads to a security risk with duplication and is being actively exploited.

This patch catches the too-large scenario, falls back, and moves any large Entity
or Tile Entity into a new compound, and this compound is saved into a different file.

On Chunk Load, we check for oversized status, and if so, we load the extra file and
merge the Entities and Tile Entities from the oversized chunk back into the level to
then be loaded as normal.

Once a chunk returns to normal size, the oversized flag will clear, and no
extra data file will exist.

This fix maintains compatibility with all existing Anvil Region Format tools as it
does not alter the save format. They will just not know about the extra entities.

This fix also maintains compatibility if someone switches server jars to one without
this fix, as the data will remain in the oversized file. Once the server returns
to a jar with this fix, the data will be restored.
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
index 298b5abbc792dd33be38acbd1c572c9778c4d2d2..46226dd2d16a9f4017661712fe2bfc0c46f63cb2 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFile.java
@@ -20,8 +20,12 @@ import java.nio.file.LinkOption;
 import java.nio.file.Path;
 import java.nio.file.StandardCopyOption;
 import java.nio.file.StandardOpenOption;
+import java.util.zip.InflaterInputStream; // Paper
+
 import javax.annotation.Nullable;
 import net.minecraft.Util;
+import net.minecraft.nbt.CompoundTag;
+import net.minecraft.nbt.NbtIo;
 import net.minecraft.world.level.ChunkPos;
 import org.apache.logging.log4j.LogManager;
 import org.apache.logging.log4j.Logger;
@@ -48,6 +52,7 @@ public class RegionFile implements AutoCloseable {
     @VisibleForTesting
     protected final RegionBitmap usedSectors;
     public final java.util.concurrent.locks.ReentrantLock fileLock = new java.util.concurrent.locks.ReentrantLock(true); // Paper
+    public final File regionFile; // Paper
 
     public RegionFile(File file, File directory, boolean dsync) throws IOException {
         this(file.toPath(), directory.toPath(), RegionFileVersion.VERSION_DEFLATE, dsync);
@@ -55,6 +60,8 @@ public class RegionFile implements AutoCloseable {
 
     public RegionFile(Path file, Path directory, RegionFileVersion outputChunkStreamVersion, boolean dsync) throws IOException {
         this.header = ByteBuffer.allocateDirect(8192);
+        this.regionFile = file.toFile(); // Paper
+        initOversizedState(); // Paper
         this.usedSectors = new RegionBitmap();
         this.version = outputChunkStreamVersion;
         if (!Files.isDirectory(directory, new LinkOption[0])) {
@@ -433,6 +440,74 @@ public class RegionFile implements AutoCloseable {
 
     }
 
+    // Paper start
+    private final byte[] oversized = new byte[1024];
+    private int oversizedCount = 0;
+
+    private synchronized void initOversizedState() throws IOException {
+        File metaFile = getOversizedMetaFile();
+        if (metaFile.exists()) {
+            final byte[] read = java.nio.file.Files.readAllBytes(metaFile.toPath());
+            System.arraycopy(read, 0, oversized, 0, oversized.length);
+            for (byte temp : oversized) {
+                oversizedCount += temp;
+            }
+        }
+    }
+
+    private static int getChunkIndex(int x, int z) {
+        return (x & 31) + (z & 31) * 32;
+    }
+    synchronized boolean isOversized(int x, int z) {
+        return this.oversized[getChunkIndex(x, z)] == 1;
+    }
+    synchronized void setOversized(int x, int z, boolean oversized) throws IOException {
+        final int offset = getChunkIndex(x, z);
+        boolean previous = this.oversized[offset] == 1;
+        this.oversized[offset] = (byte) (oversized ? 1 : 0);
+        if (!previous && oversized) {
+            oversizedCount++;
+        } else if (!oversized && previous) {
+            oversizedCount--;
+        }
+        if (previous && !oversized) {
+            File oversizedFile = getOversizedFile(x, z);
+            if (oversizedFile.exists()) {
+                oversizedFile.delete();
+            }
+        }
+        if (oversizedCount > 0) {
+            if (previous != oversized) {
+                writeOversizedMeta();
+            }
+        } else if (previous) {
+            File oversizedMetaFile = getOversizedMetaFile();
+            if (oversizedMetaFile.exists()) {
+                oversizedMetaFile.delete();
+            }
+        }
+    }
+
+    private void writeOversizedMeta() throws IOException {
+        java.nio.file.Files.write(getOversizedMetaFile().toPath(), oversized);
+    }
+
+    private File getOversizedMetaFile() {
+        return new File(this.regionFile.getParentFile(), this.regionFile.getName().replaceAll("\\.mca$", "") + ".oversized.nbt");
+    }
+
+    private File getOversizedFile(int x, int z) {
+        return new File(this.regionFile.getParentFile(), this.regionFile.getName().replaceAll("\\.mca$", "") + "_oversized_" + x + "_" + z + ".nbt");
+    }
+
+    synchronized CompoundTag getOversizedData(int x, int z) throws IOException {
+        File file = getOversizedFile(x, z);
+        try (DataInputStream out = new DataInputStream(new BufferedInputStream(new InflaterInputStream(new java.io.FileInputStream(file))))) {
+            return NbtIo.read((java.io.DataInput) out);
+        }
+
+    }
+    // Paper end
     private class ChunkBuffer extends ByteArrayOutputStream {
 
         private final ChunkPos pos;
diff --git a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
index ebb1a050beab9530942c4498335f084c89faef06..24092d3d3d234b6f1f2b90e22d90f297532358cc 100644
--- a/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
+++ b/src/main/java/net/minecraft/world/level/chunk/storage/RegionFileStorage.java
@@ -10,7 +10,9 @@ import java.io.File;
 import java.io.IOException;
 import javax.annotation.Nullable;
 import net.minecraft.nbt.CompoundTag;
+import net.minecraft.nbt.ListTag;
 import net.minecraft.nbt.NbtIo;
+import net.minecraft.nbt.Tag;
 import net.minecraft.server.MinecraftServer;
 import net.minecraft.util.ExceptionCollector;
 import net.minecraft.world.level.ChunkPos;
@@ -81,6 +83,74 @@ public class RegionFileStorage implements AutoCloseable {
         }
     }
 
+    // Paper start
+    private static void printOversizedLog(String msg, File file, int x, int z) {
+        org.apache.logging.log4j.LogManager.getLogger().fatal(msg + " (" + file.toString().replaceAll(".+[\\\\/]", "") + " - " + x + "," + z + ") Go clean it up to remove this message. /minecraft:tp " + (x<<4)+" 128 "+(z<<4) + " - DO NOT REPORT THIS TO PAPER - You may ask for help on Discord, but do not file an issue. These error messages can not be removed.");
+    }
+
+    private static final int DEFAULT_SIZE_THRESHOLD = 1024 * 8;
+    private static final int OVERZEALOUS_TOTAL_THRESHOLD = 1024 * 64;
+    private static final int OVERZEALOUS_THRESHOLD = 1024;
+    private static int SIZE_THRESHOLD = DEFAULT_SIZE_THRESHOLD;
+    private static void resetFilterThresholds() {
+        SIZE_THRESHOLD = Math.max(1024 * 4, Integer.getInteger("Paper.FilterThreshhold", DEFAULT_SIZE_THRESHOLD));
+    }
+    static {
+        resetFilterThresholds();
+    }
+
+    static boolean isOverzealous() {
+        return SIZE_THRESHOLD == OVERZEALOUS_THRESHOLD;
+    }
+
+
+    private static CompoundTag readOversizedChunk(RegionFile regionfile, ChunkPos chunkCoordinate) throws IOException {
+        synchronized (regionfile) {
+            try (DataInputStream datainputstream = regionfile.getChunkDataInputStream(chunkCoordinate)) {
+                CompoundTag oversizedData = regionfile.getOversizedData(chunkCoordinate.x, chunkCoordinate.z);
+                CompoundTag chunk = NbtIo.read((DataInput) datainputstream);
+                if (oversizedData == null) {
+                    return chunk;
+                }
+                CompoundTag oversizedLevel = oversizedData.getCompound("Level");
+                CompoundTag level = chunk.getCompound("Level");
+
+                mergeChunkList(level, oversizedLevel, "Entities");
+                mergeChunkList(level, oversizedLevel, "TileEntities");
+
+                chunk.put("Level", level);
+
+                return chunk;
+            } catch (Throwable throwable) {
+                throwable.printStackTrace();
+                throw throwable;
+            }
+        }
+    }
+
+    private static void mergeChunkList(CompoundTag level, CompoundTag oversizedLevel, String key) {
+        ListTag levelList = level.getList(key, 10);
+        ListTag oversizedList = oversizedLevel.getList(key, 10);
+
+        if (!oversizedList.isEmpty()) {
+            levelList.addAll(oversizedList);
+            level.put(key, levelList);
+        }
+    }
+
+    private static int getNBTSize(Tag nbtBase) {
+        DataOutputStream test = new DataOutputStream(new org.apache.commons.io.output.NullOutputStream());
+        try {
+            nbtBase.write(test);
+            return test.size();
+        } catch (IOException e) {
+            e.printStackTrace();
+            return 0;
+        }
+    }
+
+    // Paper End
+
     @Nullable
     public CompoundTag read(ChunkPos pos) throws IOException {
         // CraftBukkit start - SPIGOT-5680: There's no good reason to preemptively create files on read, save that for writing
@@ -92,6 +162,12 @@ public class RegionFileStorage implements AutoCloseable {
         try { // Paper
             DataInputStream datainputstream = regionfile.getChunkDataInputStream(pos);
 
+            // Paper start
+            if (regionfile.isOversized(pos.x, pos.z)) {
+                printOversizedLog("Loading Oversized Chunk!", regionfile.regionFile, pos.x, pos.z);
+                return readOversizedChunk(regionfile, pos);
+            }
+            // Paper end
             CompoundTag nbttagcompound;
             label43:
             {
@@ -143,6 +219,7 @@ public class RegionFileStorage implements AutoCloseable {
 
             try {
                 NbtIo.write(nbt, (DataOutput) dataoutputstream);
+                regionfile.setOversized(pos.x, pos.z, false); // Paper - We don't do this anymore, mojang stores differently, but clear old meta flag if it exists to get rid of our own meta file once last oversized is gone
             } catch (Throwable throwable) {
                 if (dataoutputstream != null) {
                     try {
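To make the sidecar bookkeeping above easier to follow, here is a small standalone sketch (not part of the patch) of how the 1024-entry oversized table indexes a chunk within its 32x32 region and how the extra data file is named. The coordinates and region file name are made up for the example.

```java
// Standalone sketch of the oversized bookkeeping used above (illustrative values).
import java.io.File;

public final class OversizedIndexSketch {
    // Same layout as the patch: 32x32 chunks per region, region-local slot in [0, 1023].
    static int chunkIndex(int x, int z) {
        return (x & 31) + (z & 31) * 32;
    }

    public static void main(String[] args) {
        int chunkX = 37, chunkZ = -3; // absolute chunk coordinates (hypothetical)

        // Slot in the 1024-byte metadata table stored next to the region file.
        System.out.println(chunkIndex(chunkX, chunkZ)); // 933

        // File name scheme used for the extra data, e.g. r.1.-1_oversized_37_-3.nbt
        File region = new File("r.1.-1.mca");
        String base = region.getName().replaceAll("\\.mca$", "");
        System.out.println(base + "_oversized_" + chunkX + "_" + chunkZ + ".nbt");
    }
}
```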