If the player is not in Creative (i.e. does not have the ability to
instantly build) we need to decrement the MonsterEgg item stack when used
on a breedable parent mob.
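As a rough illustration (a Bukkit-API sketch with a hypothetical helper, not
the actual NMS change), the rule boils down to:

    import org.bukkit.GameMode;
    import org.bukkit.entity.Player;
    import org.bukkit.inventory.ItemStack;

    // Hypothetical helper: only consume one egg from the stack when the player
    // cannot instantly build, i.e. is not in Creative mode.
    final class SpawnEggUse {
        static void consumeEgg(Player player, ItemStack egg) {
            if (player.getGameMode() != GameMode.CREATIVE) {
                egg.setAmount(egg.getAmount() - 1);
            }
        }
    }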
On join we unconditionally add the player to the world they logged out in.
If a plugin teleports a player during PlayerJoinEvent in a way that adds
them to a world (cross-world teleport) we end up with one player in two
places. To avoid this we check whether the player has changed worlds or is
already in the world we are about to add them to; if so, we skip adding
them again.
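A sketch of the guard, using Bukkit types for illustration (the real check
operates on the internal player and world objects):

    import org.bukkit.World;
    import org.bukkit.entity.Player;

    // Hypothetical guard: only add the player to the login world if a join
    // handler has not already moved them and they are not already tracked there.
    final class JoinWorldGuard {
        static boolean shouldAddToWorld(Player player, World loginWorld) {
            if (!player.getWorld().equals(loginWorld)) {
                return false; // a plugin teleported them to another world
            }
            return !loginWorld.getPlayers().contains(player); // skip double add
        }
    }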
When sending chunks to a player we use their writer thread to do chunk
compression to avoid blocking the main thread with this work. However,
after a teleport or respawn there are a large number of chunk packets to
process. This causes the thread to spend a long period handling compression
while we continue dumping more chunk packets on it to handle. The result of
this is a noticeable delay in getting responses to commands and chat
immediately after teleporting.
Switching to a lower compression level reduces this load and makes our
behavior more like vanilla. We do, however, still give this thread more
work to do so there will likely still be some delay when comparing to
vanilla. The only way to avoid this would be to put chunk compression back
on the main thread and give everyone on the server a poorer experience
instead.
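For reference, the knob in question is just the level handed to
java.util.zip.Deflater; a minimal sketch (the exact level chosen by this
change is an assumption, with lower levels trading compression ratio for
speed):

    import java.util.Arrays;
    import java.util.zip.Deflater;

    // Illustration only: compress a chunk payload at a configurable level so
    // less time is spent on the connection's writer thread.
    final class ChunkCompressionSketch {
        static byte[] compress(byte[] chunkData, int level) {
            Deflater deflater = new Deflater(level); // e.g. Deflater.BEST_SPEED
            deflater.setInput(chunkData);
            deflater.finish();
            byte[] buffer = new byte[chunkData.length + 64];
            int written = 0;
            while (!deflater.finished() && written < buffer.length) {
                written += deflater.deflate(buffer, written, buffer.length - written);
            }
            deflater.end();
            return Arrays.copyOf(buffer, written);
        }
    }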
When an event changes the item to be dispensed we check to see if the new
item has special behavior for dispensing and if so pass it on to that
behavior handler. However, we are actually checking the old itemstack and
passing the new itemstack so this check fails.
If a player travels past 32,000,000 blocks on the X or Z coordinates they
will be kicked for having an illegal position. On kick their player data
is saved which includes their (illegal) position. This means on join they
are immediately kicked again for the same reason and are stuck. Instead of
kicking at all in this case just teleport the player back to their previous
position just like the moved wrongly check does.
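A hedged sketch of the new behavior (Bukkit types used for illustration; the
limit is the vanilla illegal-position threshold):

    import org.bukkit.Location;
    import org.bukkit.entity.Player;

    // Hypothetical handler: instead of kicking for an out-of-range X/Z
    // coordinate, snap the player back to their previous position, mirroring
    // the "moved wrongly" handling.
    final class IllegalPositionHandler {
        private static final double LIMIT = 32000000.0D;

        static void handleMove(Player player, Location from, Location to) {
            if (Math.abs(to.getX()) > LIMIT || Math.abs(to.getZ()) > LIMIT) {
                player.teleport(from);
            }
        }
    }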
In order to correctly handle disconnects for invalid chat we setup a
Waitable and pass it to the main thread then wait for it to be processed.
However, commands are also chat packets and they are already on the main
thread. In this case, waiting will deadlock the server so we should just
do a normal disconnect.
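Conceptually the decision looks like this (a sketch with hypothetical
parameters; the real code lives in the chat packet handler):

    // Sketch: commands arrive as chat packets but are already processed on the
    // main thread, where waiting on a queued task would deadlock the server.
    final class ChatDisconnectSketch {
        static void disconnect(Thread mainThread, Runnable immediateKick,
                               Runnable queueAndWaitKick) {
            if (Thread.currentThread() == mainThread) {
                immediateKick.run();    // already on the main thread: plain kick
            } else {
                queueAndWaitKick.run(); // off-thread: hand off via a Waitable
            }
        }
    }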
End portals can only be placed in the end during the dragon's death.
Attempts to place them outside of this window cause the block to remove
itself. However, we still create the tile entity for the portal which
leads to exceptions spamming the console about a tile entity existing
without the appropriate block. In these cases we should not place the tile
entity at all.
When invalid chat is detected we currently drop the connection with no
hint as to why, as nothing else is allowed while we're off the main
thread. To give valid disconnect reasons and fire proper events, we instead
pass these off to the main thread and wait for it to process them.
If a plugin cancels a PlayerInteractEvent when left clicking a block the
client may have removed this block if they are in creative mode or if the
block breaks in a single hit. In this case, we need to update the client's
tile entity as well as tell it the block still exists.
Packet 51 is used to send updates about large changes to single chunks
and to remove chunks from the client when they get out of range. In the
first case a single packet object is created and queued for all relevant
players. With our current chunk compression scheme this means the first
player to have the packet processed will start the compression and get the
packet correctly but the rest will get garbage.
Since this packet never contains much data it is better to simply handle
compression of it on the main thread like vanilla does instead of putting in
locks and dealing with their overhead and complexity.
When a client tries to break a block it assumes it has done so unless told
otherwise by the server. This means the client also wipes out any tile
entity data it has for the block as well. We do not send this data when
updating the client so clients lose things like text on signs, skull type,
etc when they aren't allowed to break the block.
Skulls, unlike every other block, need their tile entity in order to create
an item correctly when broken. Instead of sprinkling special cases all
over the code just override dropNaturally for skulls to read from their
tile entity and make sure everything that wants to drop them calls this
method before removing the block. There is only one case where this wasn't
already true so we end up with much less special casing.
Sheep now use the crafting system when breeding to determine what color
their baby should be. This triggers an event but the event wants the
crafting inventory to have a result slot which sheep do not have. This
event could be useful for plugins to control the output of sheep breeding
so instead of disabling it we add a result slot so the event fires without
issue.
If a chunk gets a block added to it that requires the extended block id
nibble array (block id greater than 255) the array is created and saved
with the chunk. When the blocks are verified to make sure they exist these
entries are erased but the extended block id array is not. This causes the
server and client to disagree about how much data a chunk has which makes
the client crash while trying to load the chunk for rendering.
To resolve these issues we now clear the extended block id array on chunk
load if there is no valid data in it.
When a block creates a falling entity the block is not immediately removed
from the world. Instead, the falling entity is responsible for removing it
but only if the block still exists. Due to certain piston mechanics it is
possible to move the block before this check happens and thus the block is
not removed. This should be fine as the entity will kill itself in this
situation. However, the code does not stop here and continues running the
rest of the entity logic which includes either placing a block in the world
or placing a block item in the world depending on the circumstances.
If a block is air we return immediately and so miss the cleanup work that
would normally happen in this case in vanilla. This causes us to get into a
situation where, due to odd packet sending from the client, we never
properly stop an attempt by the client to break a block and thus it
eventually breaks.
We also use our own variable for block damage and never sync it up with the
vanilla one so damage reporting to other clients is not always correct.
Skull blocks store their type in a tile entity and use their block data
as rotation. When breaking a block the block data is used for determining
what item to drop. Simply changing this to use the skull method for getting
their drop data is not enough because their tile entity is already gone.
Therefore we have to special case skulls to get the correct data _and_ get
that data before breaking the block.
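In Bukkit-API terms, the constraint looks like this (illustration only; the
actual special case is in the block-breaking code):

    import org.bukkit.SkullType;
    import org.bukkit.block.Block;
    import org.bukkit.block.BlockState;
    import org.bukkit.block.Skull;

    // The drop depends on tile-entity data, so the skull type must be read
    // while the tile entity still exists, i.e. before the block is broken.
    final class SkullDropData {
        static SkullType readTypeBeforeBreak(Block block) {
            BlockState state = block.getState();
            return (state instanceof Skull) ? ((Skull) state).getSkullType() : null;
        }
    }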
On player death player PotionEffects need to be updated so that a player's
invisibility and other effects are removed, otherwise they will persist
after a respawn. This is a carry-over from our use of persistent player
entities.
Some features added in 1.4.2 use the difficulty value as an index to an
array, so while an invalid value previously did nothing or at most caused an
odd side effect somewhere, it now crashes the server. This
patch ensures difficulty values are clamped between 0 and 3, inclusive.
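The clamp itself is trivial; for illustration:

    // Clamp the configured difficulty into the valid range (0 = peaceful,
    // 3 = hard) so it can safely be used as an array index.
    final class DifficultyClamp {
        static int clamp(int difficulty) {
            return Math.max(0, Math.min(3, difficulty));
        }
    }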
Filtering item data is usually a good idea to make sure we don't have
invalid data or data on items that shouldn't have it. However, anvils
use item data in a slightly different way, so running the filtering code
here corrupts their data.
A couple method names were changed between 1.3.2 and 1.4.2 but were missed
in the update. One of these affects being able to enchant bows and the
other is used for updating player animations while firing.
Vanilla has its own handlers for plugin channel messages for things like
texture packs, books, and anvils. When vanilla handles one of these messages
we should not also pass it to plugins because they will be duplicating work
and potentially running into situations our plugin system isn't set up to
handle. This is how 1.3.2 worked but was lost in the 1.4.2 update.
CommandMap now contains the functionality for tab completion. This
commit replaces the vanilla implementation and simply delegates it to
the Bukkit API.
This change affects the old chat compatibility layer from an
implementation-only standpoint. It does not queue the 'event' to fire,
but rather queues a runnable that allows the calling thread to wait for
execution to finish.
The other effect of this change is that rcon connections now have their
commands queued to be run on the next server tick using the same
implementation.
The internal implementation is in org.bukkit.craftbukkit.util.Waitable.
It is very similar to a Future<T> task, but only contains minimal
implementation with object.wait() and object.notify() calls
under the hood of waitable.get() and waitable.run().
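A minimal sketch of that idea (the real Waitable class may differ in
detail):

    // run() computes a value on the thread that executes the task (the main
    // thread here) and get() blocks the calling thread until it is available.
    public abstract class WaitableSketch<T> implements Runnable {
        private T value;
        private boolean done;

        protected abstract T evaluate();

        @Override
        public synchronized void run() {
            value = evaluate();
            done = true;
            notifyAll(); // wake any thread blocked in get()
        }

        public synchronized T get() throws InterruptedException {
            while (!done) {
                wait(); // park until run() completes
            }
            return value;
        }
    }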
PlayerPreLoginEvent now properly implements thread-safe event execution
by queuing the events similarly to chat and rcon. This is still a poor,
albeit proper, way to implement thread safety; PlayerPreLoginEvent will
stay deprecated.
If two players (or a player and any other entity) are teleported to the
same location in the same tick they will both get added to the other's
destroy queue then have a new entity spawn packet sent. Next tick the
destroy queue will be processed and they will then be invisible to each
other. To prevent this situation we remove the entity from the destroy
queue when sending out a spawn packet for them.
The new AI system introduced by Minecraft 1.2 no longer relies on the
target field in the entity so it is frequently out of sync with what the
entity is actually doing. This modifies the AI goal to update the target
so our API can return the correct information.
In 1.2.5 and older versions of CraftBukkit we allowed the use of data
values on huge mushroom and mob spawner blocks for use with plugins.
For the 1.3 update the mechanism for doing this was changed and I
accidentally used the wrong value when adding these, indicating that
they should not have data instead of our actual intent. This change
corrects this regression.
In some situations an entity or tile entity can be added to the world but
have its own 'world' field be null or otherwise incorrect. As the entity
was added to this world to be ticked, we assume it actually is in this world.
After further testing it appears that while the original LongHashtable
has issues with object creation churn and is severely slower than even
java.util.HashMap in general-case benchmarks, it is in fact very efficient
for our use case.
With this in mind I wrote a replacement LongObjectHashMap modeled after
LongHashtable. Unlike the original implementation this one does not use
Entry objects for storage so does not have the same object creation churn.
It also uses a 2D array instead of a 3D one and does not use a cache as
benchmarking shows this is more efficient. The "bucket size" was chosen
based on benchmarking performance of the HashMap with contents that would
be plausible for a 200+ player server. This means it uses a little extra
memory for smaller servers but almost always uses less than the normal
java.util.HashMap.
To make up for the original LongHashtable being a poor choice for generic
datasets I added a mixer to the new implementation based on code from
MurmurHash. While this has no noticeable effect, positive or negative, with
our normal use of chunk coordinates it makes the HashMap perform just as
well with nearly any kind of dataset.
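For illustration, such a mixer is essentially the MurmurHash3 64-bit
finalizer (the exact constants used in the CraftBukkit implementation are
not reproduced here):

    // Finalization mix for a long key: spreads the key's bits so bucket
    // distribution stays good even for poorly distributed inputs, while being
    // essentially free for already well-distributed chunk coordinates.
    final class LongKeyMixer {
        static int mix(long key) {
            key ^= key >>> 33;
            key *= 0xff51afd7ed558ccdL;
            key ^= key >>> 33;
            key *= 0xc4ceb9fe1a85ec53L;
            key ^= key >>> 33;
            return (int) key;
        }
    }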
After these changes ChunkProviderServer.isChunkLoaded() goes from using
20% CPU time while sampling to not even showing up after 45 minutes of
sampling due to the CPU usage being too low to be noticed.
This fix changes the 'state' of the last accessed variables to be more
accurate. Changing the coordinates of the last accessed chunk should
never precede actually setting the last accessed chunk, as loading a
chunk may at some point call back to getChunkAt with a new set of
coordinates before the chunk has actually been loaded. The coordinates
would have been set, but the actual chunk would not. With no check for
accuracy, this causes fringe case issues such as null block states.
Big thanks to @V10lator for finding where the root of the problem was
occurring.
This implementation of a visibility API check for sounds
was created by adding extra methods carrying the source entity
in WorldManager and ServerConfigurationManagerAbstract and
adding a test for canSee in the SCMA sendPacketNearby method.
This approach involves no logic copying, just method addition.
I opted to cast to WorldManager as:
1) IWorldAccess is not in CraftBukkit at the moment
2) There is no other IWorldAccess implemented in CraftBukkit,
nor is there likely to be one soon. If that day comes, easy fix.
The new setting is located at "ticks-per.autosave". Changing this value
affects how often a full save is automatically executed, measured in
ticks.
This value defaults to 0 (off) because we believe the vast majority of
servers already have a third-party solution for automatically saving the
server at set intervals. Having the built-in auto-save disabled by default
ensures that we are not saving things twice; doing so provides no benefit
and results in a noticeable, unnecessary performance decrease.
For servers that do not use an automated external script to perform saves,
this setting can be turned on by setting the value higher than 0, with 900
being the value used in vanilla.
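Conceptually the setting gates a periodic full save along these lines
(hypothetical sketch, not the actual server loop):

    // A value of 0 disables the periodic save; any positive value is treated
    // as a period in ticks (900 would match vanilla).
    final class AutosaveSketch {
        static boolean shouldAutosave(int currentTick, int ticksPerAutosave) {
            return ticksPerAutosave > 0 && currentTick % ticksPerAutosave == 0;
        }
    }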
When 1.3.1 was released, a try-catch block was removed from the tick
loop that called the method in NMS to handle commands. This restores a
try-catch to prevent the console from crashing the server.
Minecraft resets abilities based on what it knows client-side: when someone
dies while in "survival", by default they should respawn in "survival".
However, we allow modification of the PlayerAbilities, so we send this
update out to the client.
Oh and, the format of the commit is like this to see if it looks any good. :)
Many codepaths only end up with one iterator in use at a time and most of
the rest use at most two, so a static pool of three is wasteful. This also
allows us to efficiently handle cases that exceed three iterators in use.
Overall this dramatically increases the hit rate and results in fewer
iterators being created.
This ArrayList duplicates part of the functionality of the much more
efficient chunk map, so it can be removed; the map can be used in the few
places this was needed.
Replace uses of LongHashtable and LongHashset with new implementations.
Remove EntryBase, LongBaseHashtable, LongHashset, and LongHashtable as they
are no longer used.
LongObjectHashMap does not use Entry or EntryBase classes internally for
storage, so it has much lower object churn and greater performance.
LongHashSet is not as much of a performance win for our use case, but for
general use it is
up to seventeen times faster than the old implementation and is in fact
faster than alternatives from "high performance" java libraries. This is
being added so that if someone tries to use it in the future in a place
unrelated to its current use they don't accidentally end up with something
slower than the Java collections HashSet implementation.
Avoid the overhead of using an ArrayList and resizing it. Also allows for reuse
of objects in the pool during the same tick by explicitly releasing them
back to the pool. This allows for much better cache performance as well
as reduced cache footprint.
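A generic sketch of the acquire/release pattern described above (not the
actual NMS pool):

    import java.util.ArrayDeque;

    // Objects handed back with release() are immediately reusable within the
    // same tick, so a handful of hot instances are recycled instead of
    // growing and resizing a list.
    final class SimplePool<T> {
        interface Factory<V> { V create(); }

        private final ArrayDeque<T> free = new ArrayDeque<T>();
        private final Factory<T> factory;

        SimplePool(Factory<T> factory) {
            this.factory = factory;
        }

        T acquire() {
            T obj = free.poll();
            return obj != null ? obj : factory.create();
        }

        void release(T obj) {
            free.push(obj);
        }
    }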
Remove a redundant ArrayList to avoid excessive object creation and CPU
overhead; the entries are added to the list and then immediately iterated
to run them, so just run them directly.
Swap the order of some conditionals to perform the more efficient check
first; if it fails, the list lookup is not executed.
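For illustration, the reordering is just a matter of short-circuit
evaluation:

    import java.util.List;

    // Do the cheap comparison first so the more expensive list lookup is
    // skipped entirely whenever the cheap check fails.
    final class ConditionOrder {
        static boolean shouldProcess(int type, int expected, List<Integer> pending, int id) {
            return type == expected && pending.contains(id);
        }
    }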
Remove profiling hooks including some rather expensive calls to getSimpleName.
ChunkSection.e() is called once per chunk section loaded and is quite
expensive (about 20% of CPU time for loading the chunk). This changes the
logic to add a fast path when extended block data is not being used and
reorganizes the loops for more optimal array traversal. Overall this saves
about 20-30% CPU time in this method.
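A hedged sketch of the shape of that fast path (hypothetical arrays and
counting logic; the real method recounts blocks in a chunk section):

    // When no extended block ids exist, skip the per-block nibble lookup and
    // walk the base id array linearly; otherwise merge in the extended nibble.
    final class SectionRecountSketch {
        static int countNonAir(byte[] blockIds, byte[] extendedIdsOrNull) {
            int nonAir = 0;
            if (extendedIdsOrNull == null) {
                for (int i = 0; i < blockIds.length; i++) {
                    if (blockIds[i] != 0) {
                        nonAir++;
                    }
                }
                return nonAir;
            }
            for (int i = 0; i < blockIds.length; i++) {
                int nibble = (extendedIdsOrNull[i >> 1] >> ((i & 1) << 2)) & 0xF;
                int id = (blockIds[i] & 0xFF) | (nibble << 8);
                if (id != 0) {
                    nonAir++;
                }
            }
            return nonAir;
        }
    }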
- Hardcore requires a newly generated world
- You will be banned if you die in a hardcore world
- You will NOT be banned if you die in a non-vanilla world
- Your "heart container" will not change without logging back in. (Vanilla bug)
The method this.a(world, d0, d1, d2, i, j, k) is responsible for
actually placing the lava or water source block in the world. The event
is currently called after this method, thus canceling the event will
cause the player to keep their water/lava bucket but the water/lava will
still appear where they attempted to place it.
In addition, the check for whether a player has creative inventory is
short-circuits before the event fires, so the event will not be called
for these players.
This moves the event call and cancelled check above these two calls to
ensure it always fires and its results are honored.
Closes GH-835.
Added two utility collections for use with PlayerChatEvents allowing lazier
initialization of events and less need to synchronize against the player
list.
Provided a hidden queue system for similar logic to pre-1.3 chat. When a
plugin is listening for the deprecated PlayerChatEvent, all chat will be
delayed and then executed from the main thread. All developers are
encouraged to immediately update to the developmental Bukkit chat API as a
minimum transition for server stability.
Additionally, changes were required to bring thread-safety to the flow
logic. CopyOnWriteArrayList is the only viable means to produce thread
safety with minimal diff; using a sane pre-implemented collection would
require reworking of sections of NMS logic.
As a minor change, implemented expected functionality for
PlayerCommandPreProcessEvent. Setting the player should now change the
player executing the command.
Both the CB 1.3.1 code and the vanilla 1.3.1 code have modified the
behavior of entity tick processing in a way that can lead to disabling
of entity cleanup. Specifically, the tickEntities() call in n.m.s.World,
which processes both the entity cleanup (removing from the world entity
list) and tile entity tick processing (furnaces and such) does not get
called by n.m.s.MinecraftServer's q() method (which drives tick
processing calls in general) when no players are on the given world.
This causes a serious memory leak when automation processes, like dynmap
mapping, load and unload chunks - as entities on unloaded chunks are
only cleaned up during entity tick processing. It also will cause issues
with any mods that use persistent chunk loading (that is, keeping chunks
loaded so that tile entities will continue being processed), since such
processing will no longer function without at least one player on the
given world.
In any case, the tickEntities() call should be called in the same
fashion as under 1.2.x (each tick, independent of player population, as
opposed to being suspended indefinitely when no players are on the given
world). The specific memory leak observed, with removing the unloaded
entities from the world, requires this call be made regularly (or, at
least, whenever the entity unload queue (world.g) is not empty).
Closes GH-832
Added newlines at the end of files
Fixed improper line endings on some files
Matched start - end comments
Added some missing comments for diffs
Fixed syntax on some spots
Minimized some diff
Removed some no longer used files
Added comment on some required files with no changes
Fixed imports of items used once
Added imports for items used more than once
This allows previous causes to be available during the event, as well as making the damage cause a valid one. If an EntityDamageEvent is cancelled, it does not become the entity's last DamageCause.
Also prevents setting DamageCause involuntarily through construction.
The Path object creates an array of 1024 PathPoint objects as the backing
for a sorted queue but testing shows we tend to get only 80 or so entries
in the array at most. To save memory this changes the default size of the
array to 128. Changing it to 64 was considered but that triggered too many
resizes, which is detrimental to performance.