Changes some NPEs to IllegalArgumentExceptions for exception consistency.
contains(ItemStack, int) now correctly calculates the number of matching
ItemStacks.
Adds a containsAtLeast(ItemStack, int) method for finding the combined
amount of all similar ItemStacks.
Makes some utility methods private to prevent ambiguity in use.
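As a rough illustration of the difference between the two checks (the
material and amounts here are arbitrary, not part of the change itself):

```java
import org.bukkit.Material;
import org.bukkit.inventory.Inventory;
import org.bukkit.inventory.ItemStack;

public class ContainsExample {
    public static void check(Inventory inventory) {
        // contains(ItemStack, int): counts exactly matching stacks, so this
        // asks whether there are two stacks that are each exactly one gold ingot.
        boolean exactStacks = inventory.contains(new ItemStack(Material.GOLD_INGOT, 1), 2);

        // containsAtLeast(ItemStack, int): sums the amounts of all similar
        // stacks, so this asks whether the inventory holds 10 or more gold
        // ingots in total, regardless of how they are stacked.
        boolean combined = inventory.containsAtLeast(new ItemStack(Material.GOLD_INGOT), 10);

        System.out.println("exact stacks: " + exactStacks + ", combined: " + combined);
    }
}
```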
When a player triggers a chunk load by walking around or teleporting, there
is no need to stop everything and get this chunk on the main thread. The
client is used to having to wait some time for this chunk and the server
doesn't immediately do anything with it except send it to the player. At
the same time chunk loading is the last major source of file IO that still
runs on the main thread.
These two facts make it possible to offload chunks loaded for this reason
to another thread. However, not all parts of chunk loading can happen off
the main thread. For this we use the new AsynchronousExecutor system to
split chunk loading into three pieces. The first is loading data from
disk, decompressing it, and parsing it into an NBT structure. The second
piece is creating entities and tile entities in the chunk and adding them
to the world; this is still done on the main thread. The third piece is
informing everyone who requested a chunk load that the load is finished.
For this we register callbacks and then run them on the main thread once
the previous two stages are finished.
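A minimal sketch of this three-stage split, assuming a plain executor in
place of the actual AsynchronousExecutor system; the MainThreadExecutor
interface and the loadNbtFromDisk/addEntitiesToWorld helpers are
hypothetical stand-ins for the real chunk loading code:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class AsyncChunkLoadSketch {
    interface ChunkCallback { void onLoad(Object chunk); }
    interface MainThreadExecutor { void execute(Runnable task); }

    private final ExecutorService ioPool = Executors.newSingleThreadExecutor();

    // Stage 1 (worker thread): read, decompress, and parse the chunk data.
    // Stage 2 (main thread): create entities/tile entities and add them to the world.
    // Stage 3 (main thread): notify everyone who requested this chunk.
    void requestChunk(int x, int z, List<ChunkCallback> callbacks, MainThreadExecutor main) {
        ioPool.submit(() -> {
            Object nbt = loadNbtFromDisk(x, z);           // stage 1, off the main thread
            main.execute(() -> {
                Object chunk = addEntitiesToWorld(nbt);   // stage 2, on the main thread
                for (ChunkCallback cb : callbacks) {
                    cb.onLoad(chunk);                     // stage 3, on the main thread
                }
            });
        });
    }

    private Object loadNbtFromDisk(int x, int z) { return new Object(); } // placeholder
    private Object addEntitiesToWorld(Object nbt) { return nbt; }         // placeholder
}
```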
There are still cases where a chunk is needed immediately and these will
still trigger chunk loading entirely on the main thread. The most obvious
case is plugins using the API to request a chunk load. We also must load
the chunk immediately when something in the world tries to access it. In
these cases we ignore any pending or in-progress asynchronous chunk
loading, as we will already have the chunk loaded by the time those loads
finish.
The hope is that overall this system will result in less CPU time and
pauses due to blocking file IO on the main thread thus giving more
consistent performance. Testing so far has shown that this also speeds up
chunk loading client side, although some of this is likely to be because
we are sending fewer chunks at once for the client to process.
Thanks to @ammaraskar for help with the implementation of this feature.
When a player has canPickUpLoot set to true the code for mob item pickup is
triggered, which does not know how to deal with the player inventory. Since
players have their own logic for picking up items we simply disable this
code for them.
The old flag for picking up loot defaulted to false, making existing
players unable to pick up items. We now use this flag for players, which
gives us the problem we had in 48b46f83.
To fix this, we add an incremental flag that is checked to determine
whether the data was saved before or after the pickup flag was introduced.
Addresses BUKKIT-3143
As a side effect, players could not pick up items while the flag was
false, and since Minecraft doesn't normally use the flag on players, the
flag was always false.
Adds (usage sketch below):
- Getting/setting equipment
- Getting/setting drop rates
- Getting/setting the ability to pick up items
-- As an added feature, players with this flag start off with a cancelled
   PlayerPickupItemEvent
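A short usage sketch of the added API from a plugin's point of view; the
zombie, the helmet, and the drop chance are arbitrary examples:

```java
import org.bukkit.Material;
import org.bukkit.entity.Zombie;
import org.bukkit.inventory.EntityEquipment;
import org.bukkit.inventory.ItemStack;

public class EquipmentExample {
    // Equips a zombie, adjusts its drop chance, and lets it pick up items.
    public static void outfit(Zombie zombie) {
        EntityEquipment equipment = zombie.getEquipment();

        equipment.setHelmet(new ItemStack(Material.IRON_HELMET));
        equipment.setHelmetDropChance(0.25f);   // 25% chance to drop on death

        zombie.setCanPickupItems(true);
    }
}
```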
When a mob is marked with the persistent flag (animal or anything with
setRemoveWhenFarAway(false)) the entire block of code for checking if they
should be despawned is skipped. However, one part of this code updates the
mob state if a player is close enough to them. It turns out this state is
used by the AI system to decide if the mob should move around randomly or
not. To stop mobs from being frozen in place we now update this state if
the persistent flag is set as well.
Currently when a plugin wants to get the location of something it calls
getLocation() which returns a new Location object. In some scenarios this
can cause enough object creation/destruction churn to be a significant
overhead. For these cases we add a method that updates a provided Location
object so there is no object creation done. This allows well-written code
to work on several locations with only a single Location object getting
created.
Providing a more efficient way to set a location was also looked at but
the current solution is the fastest we can provide. You are not required
to create a new Location object every time you want to set something's
location so, with proper design, you can set locations with only a single
Location object being created.
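A sketch of how a plugin might use the new method to reuse one Location
across many entities; the sea-level check is just an arbitrary example:

```java
import java.util.List;

import org.bukkit.Location;
import org.bukkit.entity.Entity;

public class LocationReuse {
    // Reuses a single Location instance while scanning many entities,
    // avoiding one allocation per call to getLocation().
    public static int countAboveSeaLevel(List<Entity> entities) {
        Location reusable = new Location(null, 0, 0, 0);
        int count = 0;
        for (Entity entity : entities) {
            entity.getLocation(reusable); // fills the provided Location in place
            if (reusable.getY() > 64) {
                count++;
            }
        }
        return count;
    }
}
```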
The old default for the persistent flag on mobs was false which was then
written out to their NBT data when they were saved. We now use this data
for all mobs, not just non-animal mobs. However, this means animals that
spawned before that change will now start despawning like monsters do.
To avoid this we add a new flag to the mob's saved data to mark if the
data was saved before or after we started using it and ignore it if it
was before.
As of 1.4 mobs have a flag to determine if they despawn when away from a
player or not. Unfortunately animals still use their own system to prevent
despawning instead of making use of this flag. This change modifies them
to use the new system (which defaults to true) and adds API for plugins to
adjust this.
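For example, a plugin can now opt a specific mob out of distance-based
despawning, or check whether it would despawn:

```java
import org.bukkit.entity.LivingEntity;

public class DespawnControl {
    // Marks an entity as persistent so it will not despawn when no player is nearby.
    public static void makePersistent(LivingEntity entity) {
        entity.setRemoveWhenFarAway(false);
    }

    // Reports whether the entity is still subject to distance-based despawning.
    public static boolean willDespawn(LivingEntity entity) {
        return entity.getRemoveWhenFarAway();
    }
}
```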
If the player is not in Creative (i.e. does not have the ability to
instantly build) we need to decrement the MonsterEgg item stack when used
on a breedable parent mob.
Stale player references will add a player back into the world when
teleporting them, causing a cascade of issues relating to ghost entities
and servers failing to stop.
On join we unconditionally add the player to the world they logged out in.
If a plugin teleports a player during PlayerJoinEvent in a way that adds
them to a world (cross-world teleport) we end up with one player in two
places. To avoid this we check whether the player has changed worlds or
is already added to the world we have and, if so, skip adding them again.
This is a missed part of the original "[Bleeding] Use case from player data
for OfflinePlayer. Fixes BUKKIT-519" commit. It avoids doing (somewhat
expensive) lookups of player data to find the correct capitalization inside
getOfflinePlayers() as we're already loading their name from the player data
and thus have the correct capitalization.
When sending chunks to a player we use their writer thread to do chunk
compression to avoid blocking the main thread with this work. However,
after a teleport or respawn there are a large number of chunk packets to
process. This causes the thread to spend a long period handling compression
while we continue dumping more chunk packets on it to handle. The result of
this is a noticeable delay in getting responses to commands and chat
immediately after teleporting.
Switching to a lower compression level reduces this load and makes our
behavior more like vanilla. We do, however, still give this thread more
work to do so there will likely still be some delay when comparing to
vanilla. The only way to avoid this would be to put chunk compression back
on the main thread and give everyone on the server a poorer experience
instead.
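To illustrate the trade-off (this is not the exact level or code the
server uses), compressing with java.util.zip.Deflater at a faster level
looks roughly like this:

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class ChunkCompression {
    // A lower compression level trades a slightly larger packet for much
    // less CPU time on the writer thread; BEST_SPEED here simply
    // illustrates the faster end of that trade-off.
    public static byte[] compress(byte[] chunkData) {
        Deflater deflater = new Deflater(Deflater.BEST_SPEED);
        deflater.setInput(chunkData);
        deflater.finish();

        ByteArrayOutputStream out = new ByteArrayOutputStream(chunkData.length);
        byte[] buffer = new byte[8192];
        while (!deflater.finished()) {
            int written = deflater.deflate(buffer);
            out.write(buffer, 0, written);
        }
        deflater.end();
        return out.toByteArray();
    }
}
```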
When an event changes the item to be dispensed we check to see if the new
item has special behavior for dispensing and if so pass it on to that
behavior handler. However, we are actually checking the old itemstack and
passing the new itemstack so this check fails.
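A hedged sketch of the corrected lookup; the DispenseBehavior interface
and the registry map are hypothetical stand-ins for the real dispense
behavior registry:

```java
import java.util.Map;

import org.bukkit.Material;
import org.bukkit.inventory.ItemStack;

public class DispenseCheckSketch {
    interface DispenseBehavior { void dispense(ItemStack stack); }

    // The registry lookup must use the item the event produced, not the item
    // originally in the dispenser; otherwise the special behavior is skipped
    // whenever a plugin swaps the item.
    static void dispense(Map<Material, DispenseBehavior> registry,
                         ItemStack original, ItemStack eventResult) {
        DispenseBehavior behavior = registry.get(eventResult.getType()); // new item, not original
        if (behavior != null) {
            behavior.dispense(eventResult);
        }
    }
}
```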
If a plugin looks up a player that is offline they may not know the correct
capitalization for the name. In this case they're likely to get it wrong
and since we cache the result even after the player joins the server, all
future requests for an OfflinePlayer will return one with the incorrect case.
When looking up a player who has played on the server before we can
get the correct case from the player data file saved by the server. If
the player has never played before this point we cannot do anything and
will still have the same issue but this is not a solvable problem.
If a player travels past 32,000,000 blocks on the X or Z coordinates they
will be kicked for having an illegal position. On kick their player data
is saved which includes their (illegal) position. This means on join they
are immediately kicked again for the same reason and are stuck. Instead of
kicking at all in this case we teleport the player back to their previous
position, just like the moved wrongly check does.
In order to correctly handle disconnects for invalid chat we setup a
Waitable and pass it to the main thread then wait for it to be processed.
However, commands are also chat packets and they are already on the main
thread. In this case, waiting will deadlock the server so we should just
do a normal disconnect.
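A simplified sketch of this pattern, not the actual Waitable class; the
task queue and thread check are illustrative only:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.FutureTask;

public class ChatDisconnectSketch {
    private final Queue<FutureTask<Void>> mainThreadTasks = new ConcurrentLinkedQueue<>();
    private final Thread mainThread;

    public ChatDisconnectSketch(Thread mainThread) {
        this.mainThread = mainThread;
    }

    // Invalid chat must be disconnected on the main thread, but command
    // packets are already handled there; waiting in that case would
    // deadlock, so we disconnect directly instead.
    public void handleInvalidChat(Runnable disconnect) throws Exception {
        if (Thread.currentThread() == mainThread) {
            disconnect.run();                  // already on the main thread
            return;
        }
        FutureTask<Void> task = new FutureTask<>(disconnect, null);
        mainThreadTasks.add(task);             // picked up by the main thread loop
        task.get();                            // wait for it to be processed
    }

    // Called from the server's main loop each tick.
    public void processMainThreadTasks() {
        FutureTask<Void> task;
        while ((task = mainThreadTasks.poll()) != null) {
            task.run();
        }
    }
}
```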
End portals can only be placed in the end during the dragon's death.
Attempts to place them outside of this window cause the block to remove
itself. However, we still create the tile entity for the portal which
leads to exceptions spamming the console about a tile entity existing
without the appropriate block. In these cases we should not place the tile
entity at all.
When invalid chat is detected we currently drop the connection with no
hint as to why, as nothing else is allowed while we're off the main
thread. To give valid disconnect reasons and fire proper events we instead
pass these off to the main thread and wait for it to process them.
If a plugin cancels a PlayerInteractEvent when left clicking a block the
client may have removed this block if they are in creative mode or if the
block breaks in a single hit. In this case, we need to update the client's
tile entity as well as tell it the block still exists.
Packet 51 is used to send updates about large changes to single chunks
and to remove chunks from the client when they get out of range. In the
first case a single packet object is created and queued for all relevant
players. With our current chunk compression scheme this means the first
player to have the packet processed will start the compression and get the
packet correctly but the rest will get garbage.
Since this packet never contains much data it is better to simply handle
compression of it on the main thread like vanilla does instead of putting in
locks and dealing with their overhead and complexity.
When a client tries to break a block it assumes it has done so unless told
otherwise by the server. This means the client wipes out any tile
entity data it has for the block as well. We do not send this data when
updating the client so clients lose things like text on signs, skull type,
etc when they aren't allowed to break the block.
Unlike every other block, skulls need their tile entity in order to create
an item correctly when broken. Instead of sprinkling special cases all
over the code we override dropNaturally for skulls to read from their
tile entity and make sure everything that wants to drop them calls this
method before removing the block. There is only one case where this wasn't
already true so we end up with much less special casing.
Sheep now use the crafting system when breeding to determine what color
their baby should be. This triggers an event, but the event wants the
crafting inventory to have a result slot, which sheep do not have. This
event could be useful for plugins to control the output of sheep breeding
so instead of disabling it we add a result slot so the event fires without
issue.
If a chunk gets a block added to it that requires the extended block id
nibble array (block id greater than 255) the array is created and saved
with the chunk. When the blocks are verified to make sure they exist these
entries are erased but the extended block id array is not. This causes the
server and client to disagree about how much data a chunk has, which makes
the client crash while trying to load the chunk for rendering.
To resolve these issues we now clear the extended block id array on chunk
load if there is no valid data in it.
When a block creates a falling entity the block is not immediately removed
from the world. Instead, the falling entity is responsible for removing it
but only if the block still exists. Due to certain piston mechanics it is
possible to move the block before this check happens and thus the block is
not removed. This should be fine as the entity will kill itself in this
situation. However, the code does not stop here and continues running the
rest of the entity logic which includes either placing a block in the world
or placing a block item in the world depending on the circumstances.
If a block is air we return immediately and so miss the cleanup work that
would normally happen in this case in vanilla. This causes us to get into a
situation where, due to odd packet sending from the client, we never
properly stop an attempt by the client to break a block and thus it
eventually breaks.
We also use our own variable for block damage and never sync it up with the
vanilla one so damage reporting to other clients is not always correct.
The static assertions are not normally evaluated by the JVM, so they failed
to fail when the enum went from 25 to 26 values. This meant missing
values would not be detected at runtime and would instead return null,
compounding problems later. The switches should never evaluate to null,
so they now throw runtime assertion errors instead.
Additional unit tests were added to detect new paintings and ensure they
have proper, unique mappings. The tests check that a mapping exists, is
not null, and does not duplicate another mapping.
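A sketch in the spirit of those tests, using the public Art enum rather
than the internal conversion methods the real tests exercise:

```java
import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.assertTrue;

import java.util.HashSet;
import java.util.Set;

import org.bukkit.Art;
import org.junit.Test;

public class ArtMappingTest {
    // Verifies every painting has a mapping and that no two paintings share
    // one, so a newly added enum value fails the build instead of silently
    // returning null at runtime.
    @Test
    public void testUniqueMappings() {
        Set<Integer> seen = new HashSet<Integer>();
        for (Art art : Art.values()) {
            assertNotNull("Art " + art + " has no mapping", Art.getById(art.getId()));
            assertTrue("Duplicate mapping for " + art, seen.add(art.getId()));
        }
    }
}
```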
If a defensive copy is not used in the API, changes to the item are
reflected in memory but never sent to the client. It also goes against
the general contract provided in Bukkit, where setItem should be the only
way to change the item in an item frame.
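A minimal sketch of the defensive copy pattern, not the actual item frame
implementation:

```java
import org.bukkit.inventory.ItemStack;

public class ItemFrameSketch {
    private ItemStack item;

    // Returning a copy means callers cannot mutate the frame's item without
    // going through setItem, which is where the client update is sent.
    public ItemStack getItem() {
        return (item == null) ? null : item.clone();
    }

    public void setItem(ItemStack newItem) {
        this.item = (newItem == null) ? null : newItem.clone();
        // send the update packet to nearby clients here
    }
}
```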