There are apparently bugs with relighting, so turning the toggle back on.
Also bumped the max time threshold to reduce the risk of starving the queue.
This allows it to violate TPS somewhat, but in exchange for better light data.
Also fixed some chunk neighbor checks.
Apparently a zero max health attribute is perfectly fine in vanilla and
our own revive handling code appears to handle the case fine, even when
EntityDeathEvent is cancelled. So we should allow it to avoid issues
when these mobs are killed.
CraftBukkit added synchronization to the read and write methods. This adds
much more contention on this object when accessing region files, as
the entire read and write of NBT data is now a blocking operation.
This causes issues when something on the main thread simply needs to check
whether a chunk exists, blocking the main thread until the unrelated read
or write completes.
However, this synchronization was unnecessary, because enough synchronization
is already in place to keep things safe:
1) Obtaining a RegionFile: those methods are still static synchronized,
meaning we can safely obtain a RegionFile concurrently.
2) RegionFile data access: methods reading and manipulating data in
a region file are also marked synchronized, ensuring that no two threads
are reading or writing data at the same time.
3) Checking a region file for chunkExists: getOffset is also synchronized,
ensuring that even if a chunk is currently being written, the check is safe.
By removing these synchronizations, we reduce the locking to only
when data is actually being written or read.
GZIP compression and NBT Buffer creation will no longer be part of the
synchronized context, reducing lock times.
Ultimately: This brings us back to Vanilla, which has had no indication of region file loss.
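As an illustration only (not the actual RegionFile code), here is a minimal sketch of the resulting pattern: the expensive buffer creation and compression happen outside the lock, and only the file write itself is synchronized.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

class RegionWriteSketch {
    private final Object fileLock = new Object();

    void writeChunk(byte[] rawNbt) throws IOException {
        // Expensive work (NBT buffer creation + GZIP compression) stays outside the lock...
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(buffer)) {
            out.write(rawNbt);
        }
        byte[] compressed = buffer.toByteArray();

        // ...so only the actual region file write is serialized.
        synchronized (fileLock) {
            writeToDisk(compressed);
        }
    }

    private void writeToDisk(byte[] data) {
        // placeholder for the real file I/O
    }
}
```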
Modification of permissions was only half protected, allowing concurrency
issues to occur if permissions were modified async.
While no plugin really should be doing that, the modifying operations
are not heavily called, so they are safe to add synchronization to.
Now all modification APIs are synchronized, ensuring safety.
Additionally, hasPermission fell victim to a common Java newbie mistake
of calling if (containsKey(k)) return get(k), resulting in two map lookups.
It has been optimized to a single get call, cutting permission map
lookups in half.
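A minimal before/after sketch of that lookup change, assuming a simple map of permission values (the real field in PermissibleBase differs; this only shows the pattern):

```java
import java.util.HashMap;
import java.util.Map;

class PermissionLookupSketch {
    private final Map<String, Boolean> permissions = new HashMap<>();

    // Before: two map lookups per permission check
    boolean hasPermissionOld(String name) {
        if (permissions.containsKey(name)) {
            return permissions.get(name);
        }
        return false;
    }

    // After: a single get, halving permission map lookups
    boolean hasPermissionNew(String name) {
        Boolean value = permissions.get(name);
        return value != null && value;
    }
}
```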
The PluginManager incorrectly used synchronization when firing any event
that was marked as synchronous.
This synchronization did not even protect against any concurrency risk, as
handlers were already thread safe in terms of mutations during event
dispatch.
The way it was used has commonly led to deadlocks on the server,
which result in a hard crash.
This change removes the synchronization and adds some protection around enable/disable.
Fixes issue #1177
`MapMaker#weakKeys()` makes the `Map` use identity comparison for the keys, while also enabling the automatic removal of dropped classes from the cache.
The changes are the same as in #1399, except now the original patch is modified instead of a new one being created.
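A minimal sketch of what weakKeys() gives a class-keyed cache (the surrounding class and field names here are illustrative, not the patched code):

```java
import com.google.common.collect.MapMaker;
import java.util.concurrent.ConcurrentMap;

class ClassNameCacheSketch {
    // weakKeys() switches key comparison to identity (==) and lets entries
    // disappear once the key class is no longer strongly referenced, e.g.
    // after its plugin's classloader has been unloaded.
    private final ConcurrentMap<Class<?>, String> cache =
            new MapMaker().weakKeys().makeMap();

    String nameFor(Class<?> clazz) {
        return cache.computeIfAbsent(clazz, Class::getName);
    }
}
```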
The key can be retrieved via methods Location#toBlockKey() and
Block#getBlockKey()
World provides lookup for blocks by long key via method World#getBlockAtKey(long)
The formatting for the key is as follows:
10 bit y|27 bit z|27 bit x
The y value is considered unsigned while z and x are considered two's complement
Y range: [0, 1023]
X, Z range: [-67 108 864, 67 108 863]
Checked encoding and decoding via https://gist.github.com/Spottedleaf/74f4e241012ca2fa67d8f1c7e8e34722
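A minimal encode/decode sketch derived from the layout above (the helper names are illustrative; see the linked gist for the actual verification):

```java
class BlockKeySketch {
    // 10 bit y | 27 bit z | 27 bit x, packed high to low
    static long toBlockKey(int x, int y, int z) {
        return ((long) x & 0x7FFFFFF) | (((long) z & 0x7FFFFFF) << 27) | ((long) y << 54);
    }

    static int keyX(long key) { return (int) ((key << 37) >> 37); } // sign-extend low 27 bits
    static int keyZ(long key) { return (int) ((key << 10) >> 37); } // sign-extend middle 27 bits
    static int keyY(long key) { return (int) (key >>> 54); }        // top 10 bits, unsigned
}
```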
Player Movement, Entity Creation and Teleportation move
entities with a very "You are here, no debate" change, making
the server register them as there regardless of whether that chunk
was loaded or not.
It appears possible that with hack clients and lag, a player
may be able to move fast enough to move into an unloaded
chunk and get into a buggy state.
To prevent this, we will ensure a chunk is always loaded,
guaranteeing that the entity will be properly registered
into its new home comfortably.
Closes #1316
1) Don't kick in until the server has started (the full crash will still kick in before full start)
2) Delay reporting until 10 seconds, then print every 5 seconds
3) Make the intervals configurable
4) Allow it to be disabled by setting every interval to <= 0
Particle packets contain a boolean which marks the particle as either forced or shown normally to the receiver.
Spigot has been sending all particles with the force boolean set, which overrides client particle settings.
Related changes in this commit:
- Add a force option to the ParticleBuilder API, which defaults to true to keep Spigot consistent with the existing API (see the usage sketch below).
- Add a new spawnParticle overload taking this mode as a parameter. The existing API methods were of course kept the same so as not to break them.
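A hedged usage sketch of the new option (exact ParticleBuilder method names may differ from what is shown here):

```java
import com.destroystokyo.paper.ParticleBuilder;
import org.bukkit.Location;
import org.bukkit.Particle;

class ParticleForceSketch {
    void spawnRespectingClientSettings(Location loc) {
        new ParticleBuilder(Particle.FLAME)
                .location(loc)
                .count(20)
                .force(false) // respect the receiving client's particle settings
                .spawn();
    }
}
```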
Let me know if changes are needed.
* [CI-SKIP] add .editorconfig for base code style settings
* * Created patch 0349 (fixes #471)
* * Made requested modifications
* * Made requested modifications (x2)
* * Made recommended changes (x3)
* * Moved ConcurrentMap return values to Map as no functions specific to ConcurrentMap were used (backing map is still ConcurrentMap)
* Removed ConcurrentMap import
I misinterpreted some code as a risk of entity loss, but now,
after deeper study, I see more of how that code was used and why
it was adding entities to chunks that they shouldn't have been
in during a world transfer process.
Also ensure we never process already-valid entities. This shouldn't be possible as of recent
commits, since we made the entity slice array safer, but it doesn't hurt for this logic to be safe too
in case that patch gets dropped in a future version by accident or necessity.
1) Chunk registration might kill an entity; don't add it to the world if it did!
2) By default, entities are added to the world per slice iteration.
This opens the risk of the slices being manipulated during chunk add if an
EntityAddToWorldEvent spawns an entity into this chunk.
Fix this by deferring the add-to-world for all entities until the same time (see the sketch after this list).
3) If an entity being added to the world is a duplicate of an existing entity, and
the original entity is dead, overwrite it, as the logic already does for unload-queued entities.
Should hopefully finish up issues with #1223
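A generic sketch of the deferral in point 2 (names are illustrative, not the actual chunk code): collect the entities while iterating the slices, and only register them with the world once iteration is done, so a listener that spawns new entities cannot mutate the slices mid-iteration.

```java
import java.util.ArrayList;
import java.util.List;

class DeferredEntityAddSketch {
    interface Entity { boolean isDead(); }

    void addChunkEntitiesToWorld(List<List<Entity>> slices, List<Entity> world) {
        List<Entity> toAdd = new ArrayList<>();
        for (List<Entity> slice : slices) {
            for (Entity entity : slice) {
                if (!entity.isDead()) {
                    toAdd.add(entity); // defer: don't touch the world (or fire events) yet
                }
            }
        }
        // All slices have been walked; adding (and any resulting events) is now safe.
        world.addAll(toAdd);
    }
}
```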
After witnessing the behavior in the regeneration logs, it's clear that Vanilla
has had bugs with saving duplicate entities for a while....
Some entities are saved in multiple chunks, and now we are bringing out those duplicates
that used to never surface.
This mode will analyze whether the entity appears to be a duplicate (spatially near the
entity with the same UUID) and delete the entity instead.
This should limit regeneration to entities that are nowhere near each other, and
therefore more likely to be subject to real UUID collisions caused by our
previous bug, and which therefore should survive the chunk load.
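A rough sketch of the described mode (all names, and especially the distance threshold, are hypothetical and not taken from the patch):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

class DuplicateUuidSketch {
    static final double NEARBY_DISTANCE_SQ = 24 * 24; // assumed threshold, not from the source

    interface Entity {
        UUID getUniqueId();
        double distanceSquaredTo(Entity other);
        void remove();
        void setUniqueId(UUID id);
    }

    private final Map<UUID, Entity> loaded = new HashMap<>();

    void onEntityLoad(Entity entity) {
        Entity existing = loaded.get(entity.getUniqueId());
        if (existing != null && existing != entity) {
            if (existing.distanceSquaredTo(entity) <= NEARBY_DISTANCE_SQ) {
                // Close by: almost certainly the same entity saved into two chunks; drop it.
                entity.remove();
                return;
            }
            // Far apart: more likely a real collision from the earlier bug; regenerate the UUID.
            entity.setUniqueId(UUID.randomUUID());
        }
        loaded.put(entity.getUniqueId(), entity);
    }
}
```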