In 1.2.5 and older versions of CraftBukkit we allowed the use of data
values on huge mushroom and mob spawner blocks for use with plugins.
For the 1.3 update the mechanism for doing this was changed and I
accidentally used the wrong value when adding these, indicating that
they should not have data when the actual intent was the opposite.
This change corrects the regression.
If a plugin calls player.hidePlayer(other); then player.showPlayer(other);
in the same tick, the other player will be added to the entity destroy
queue and then a spawn packet will be sent. On the next tick the queue
will be processed and a destroy packet will be sent, rendering the other
player invisible. To correct this we ensure the destroy queue is kept in
sync with use of the vanish API.
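In sketch form, with stand-in names for the per-player queue (not the
actual NMS fields or packet types):

    import java.util.ArrayList;
    import java.util.List;

    // Hedged sketch: 'destroyQueue' stands in for the per-player queue
    // of entity ids whose destroy packets are sent on the next tick.
    class VisibilityExample {
        final List<Integer> destroyQueue = new ArrayList<>();

        void hidePlayer(int otherEntityId) {
            destroyQueue.add(otherEntityId); // destroy packet goes out next tick
        }

        void showPlayer(int otherEntityId) {
            // cancel the pending destroy so it cannot undo the spawn packet
            destroyQueue.remove(Integer.valueOf(otherEntityId));
            // ...then send the spawn packet as before
        }
    }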
In some situations an entity or tile entity can be added to the world but
have its own 'world' field be null or otherwise incorrect. As the entity
was added to this world to be ticked, we assume it actually is in this
world.
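A sketch of the guard, with stand-in types rather than the actual NMS
classes:

    // Hedged sketch: while ticking, trust the world the entity was added to.
    class World {
        void tickEntity(Entity entity) {
            if (entity.world != this) {
                entity.world = this; // it was added here to be ticked
            }
        }
    }

    class Entity {
        World world;
    }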
An internal method for generating the debug output for CraftScheduler's
async tasks was erroneously using the 'this' reference where the loop
should reference the current task.
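The class of bug, illustrated with simplified names (not the actual
CraftScheduler code):

    import java.util.List;

    class DebugExample {
        String dumpTasks(List<Runnable> activeWorkers) {
            StringBuilder out = new StringBuilder();
            for (Runnable task : activeWorkers) {
                // the bug appended this.toString() here; the fix is to
                // reference the loop variable instead
                out.append(task).append(',');
            }
            return out.toString();
        }
    }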
This change was done to remove the internal sound names from the API.
Along with moving the internal names into CraftBukkit, a unit test was
added so that any new sounds added in the API are assured to have a
non-null mapping.
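The test has roughly this shape, assuming a CraftSound.getSound lookup on
the CraftBukkit side:

    import org.bukkit.Sound;
    import org.junit.Assert;
    import org.junit.Test;

    public class SoundTest {
        @Test
        public void allSoundsHaveMappings() {
            for (Sound sound : Sound.values()) {
                // CraftSound.getSound stands in for the CraftBukkit-side
                // mapping from API enum to internal sound name
                Assert.assertNotNull("No mapping for " + sound,
                        CraftSound.getSound(sound));
            }
        }
    }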
After further testing it appears that, while the original LongHashtable
has issues with object creation churn and is severely slower than even
java.util.HashMap in general-case benchmarks, it is in fact very efficient
for our use case.
With this in mind I wrote a replacement LongObjectHashMap modeled after
LongHashtable. Unlike the original implementation, this one does not use
Entry objects for storage, so it does not have the same object creation
churn. It also uses a 2D array instead of a 3D one, and does not use a
cache, as benchmarking shows this is more efficient. The "bucket size" was
chosen based on benchmarking performance of the HashMap with contents that
would be plausible for a 200+ player server. This means it uses a little
extra memory for smaller servers, but almost always less than the normal
java.util.HashMap.
To make up for the original LongHashtable being a poor choice for generic
datasets, I added a mixer to the new implementation based on code from
MurmurHash. While this has no noticeable effect, positive or negative,
with our normal use of chunk coordinates, it makes the HashMap perform
just as well with nearly any kind of dataset.
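For illustration, a mixer in this spirit (the constants are MurmurHash3's
64-bit finalizer; the key packing and names are assumptions, not the exact
CraftBukkit code):

    class HashExample {
        // pack two int chunk coordinates into a single long key
        static long toLong(int msw, int lsw) {
            return ((long) msw << 32) | (lsw & 0xFFFFFFFFL);
        }

        // bit-mixer modeled on MurmurHash3's 64-bit finalizer: spreads
        // entropy across all bits so even clustered keys hash well
        static long mix(long key) {
            key ^= key >>> 33;
            key *= 0xff51afd7ed558ccdL;
            key ^= key >>> 33;
            key *= 0xc4ceb9fe1a85ec53L;
            key ^= key >>> 33;
            return key;
        }
    }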
After these changes ChunkProviderServer.isChunkLoaded() goes from using
20% CPU time while sampling to not even showing up after 45 minutes of
sampling due to the CPU usage being too low to be noticed.
This fix changes the 'state' of the last-accessed variables to be more
accurate. Changing the coordinates of the last-accessed chunk should
never precede actually setting the last-accessed chunk, as loading a
chunk may at some point call back to getChunkAt with a new set of
coordinates before the chunk has actually been loaded. The coordinates
would have been set, but the actual chunk would not. With no check for
accuracy, this caused fringe-case issues such as null block states.
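In sketch form, with approximate field names, the safe ordering assigns
the chunk before the coordinates:

    // Hedged sketch: loadChunk may reenter getChunkAt, so the cached
    // coordinates must never get ahead of the cached chunk reference.
    class ChunkCacheExample {
        Object lastChunk;
        int lastX, lastZ;

        Object getChunkAt(int x, int z) {
            if (lastChunk != null && lastX == x && lastZ == z) {
                return lastChunk;
            }
            Object chunk = loadChunk(x, z); // may call back into getChunkAt
            lastChunk = chunk;              // set the chunk first...
            lastX = x;                      // ...then the coordinates
            lastZ = z;
            return chunk;
        }

        Object loadChunk(int x, int z) { return new Object(); }
    }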
Big thanks to @V10lator for finding where the root of the problem was
occurring.
This implementation of a visibility API check for sounds
was created by adding extra methods carrying the source entity
in WorldManager and ServerConfigurationManagerAbstract and
adding a test for canSee in the SCMA sendPacketNearby method.
This approach involves no logic copying, just method addition.
I opted to cast to WorldManager as:
1) IWorldAccess is not in CraftBukkit at the moment
2) There is no other IWorldAccess implemented in CraftBukkit,
nor is there likely to be one soon. If that day comes, easy fix.
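A sketch of the added check, with simplified types standing in for the
NMS classes:

    import java.util.List;

    // Hedged sketch of the filter in sendPacketNearby (names approximate)
    class BroadcastExample {
        void sendPacketNearby(Player source, double x, double y, double z,
                              double radius, List<Player> players, Object packet) {
            for (Player target : players) {
                // skip receivers that cannot see the source entity
                if (source != null && !target.canSee(source)) {
                    continue;
                }
                if (target.distanceSquared(x, y, z) < radius * radius) {
                    target.sendPacket(packet);
                }
            }
        }
    }

    interface Player {
        boolean canSee(Player other);
        double distanceSquared(double x, double y, double z);
        void sendPacket(Object packet);
    }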
The new setting is located at "ticks-per.autosave". This value controls
how often a full save is automatically executed, measured in ticks.
The value defaults to 0 (off) because we believe that the vast majority
of servers already have a third-party solution for automatically saving
the server at set intervals. Having the built-in auto-save disabled by
default ensures that we are not saving things twice; doing so brings no
benefit and results in a noticeable, unnecessary performance decrease.
For servers that do not use an automated external script to perform saves,
this setting can be turned on by setting the value higher than 0, with 900
being the value used in vanilla.
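In bukkit.yml the entry would look like this, with 900 (the vanilla
interval) shown as an example value:

    ticks-per:
      autosave: 900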
Refactoring dependencies 'changes' the string literal in the code. This
commit changes the literal to instead use a char[] to initialize a new
String. At the bytecode level, no String literal will exist for these
two values; the shade plugin will no longer refactor them.
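The mechanism looks like this; the value shown is illustrative, not
necessarily one of the two actual keys:

    // Hedged illustration: building the String from a char[] keeps the
    // value out of the class's constant pool, so the shade plugin's
    // relocation pass never sees it
    class LiteralExample {
        static final String KEY = new String(new char[] {'j', 'l', 'i', 'n', 'e'});
    }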
Refactoring jline also changes the other String literals we use for
notifying jline of the current state. To ensure that our local code
reflects the inner logic in jline, the key value was changed to the static
final variable located in TerminalFactory. Likewise, UnsupportedTerminal
uses the explicit class name (as reflection is used later with the value
that has been set).
Async tasks are notorious for causing CMEs and corrupted data when
accessing the API. This change adds a linked list to track recent tasks
that may no longer be running. It is accessed via the toString method on
the scheduler. This behavior is not guaranteed, but it is currently
accessible that way.
Although toString is located in the scheduler, its contract does not
guarantee an accurate or up-to-date view when accessed from another
thread.
When 1.3.1 was released, a try-catch block was removed from the tick
loop that called the method in NMS to handle commands. This restores a
try-catch to prevent the console from crashing the server.
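Roughly the following shape, with an illustrative method name standing in
for the NMS handler:

    // Hedged sketch: keep a malformed or failing console command from
    // taking down the whole tick loop
    class ConsoleExample {
        void handlePendingCommands() {
            try {
                dispatchServerCommand(); // the NMS command handler
            } catch (Exception ex) {
                ex.printStackTrace(); // log and keep the server alive
            }
        }

        void dispatchServerCommand() { /* ... */ }
    }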
Minecraft resets abilities based on what it knows client-side: when
someone dies in "survival", by default they should respawn in "survival".
However, we allow modification of the PlayerAbilities, so we send this
update out to the client.
Previously, the timeout would erroneously get converted to milliseconds
twice. The second conversion was removed.
Spurious wakeups were not handled properly and would cause a
TimeoutException to be thrown even though the full wait time had not
elapsed.
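The standard pattern for both problems is a wait loop against an absolute
deadline; a minimal sketch:

    import java.util.concurrent.TimeoutException;

    class WaitExample {
        private final Object lock = new Object();
        private boolean done;

        void await(long timeoutMillis) throws InterruptedException, TimeoutException {
            synchronized (lock) {
                long deadline = System.currentTimeMillis() + timeoutMillis;
                while (!done) {
                    long remaining = deadline - System.currentTimeMillis();
                    if (remaining <= 0) {
                        throw new TimeoutException(); // only after real elapsed time
                    }
                    lock.wait(remaining); // a spurious wakeup just re-checks
                }
            }
        }
    }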
The maven shade plugin has the ability to change the namespace for
included dependencies and packages. This change is being implemented to
remove all conflicts with any possible libraries in an execution
environment.
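A relocation rule in the maven-shade-plugin configuration has this shape;
the package names below are illustrative, not the exact CraftBukkit
configuration:

    <!-- Hedged sketch of a maven-shade-plugin relocation -->
    <relocation>
        <pattern>org.example.lib</pattern>
        <shadedPattern>org.bukkit.craftbukkit.libs.org.example.lib</shadedPattern>
    </relocation>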
The only dependencies to be refactored are specific to CraftBukkit. To
refactor dependencies included with Bukkit would break any plugin compiled
against those specific dependencies, especially ebeans, an API
specifically encouraged for database management.
The new scheduler uses a non-blocking methodology. Volatile references
form a linked reference chain, with an atomic reference handling the tail,
so tasks are queued without waiting for locks. The main thread will no
longer limit the length of time spent on scheduled tasks, but no task will
run twice in the same tick. Scheduling a new task inside of a synchronous
task will always run the new task during the same tick, assuming there is
no supplied delay > 0.
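A minimal sketch of that queuing structure, assuming simplified names (not
the actual CraftScheduler types):

    import java.util.concurrent.atomic.AtomicReference;

    // Hedged sketch of the lock-free chain: the atomic tail is swapped to
    // the new node, then the old tail publishes its 'next' link via a
    // volatile write
    class TaskChain {
        static final class Node {
            final Runnable task;
            volatile Node next;
            Node(Runnable task) { this.task = task; }
        }

        private final Node head = new Node(null);          // sentinel
        private final AtomicReference<Node> tail = new AtomicReference<>(head);

        void push(Runnable task) {
            Node node = new Node(task);
            Node prev = tail.getAndSet(node); // claim a unique predecessor
            prev.next = node;                 // link in; no lock taken
        }
    }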
Asynchronous tasks are now run using a thread pool. Any thread-local
implemenation should now account for threads being reused between
executions.
Race conditions were carefully examined, and the order of logic is now
very important. Each task is placed in a secondary collection before
removal from primary collections. Thus, by reading tasks from the
collections in the same order they travel, state-safety is retained. This
does make modifications less responsive in some situations, as the task
may be transitioning before the modifier accesses it. This cost is
outweighed by no longer requiring synchronization on the scheduler;
previously, any conflict was handled first-come-first-served, with the
main thread backing out arbitrarily.
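The ordering rule in miniature, with illustrative collection names:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Hedged sketch: a task enters the secondary collection before it
    // leaves the primary one, so a reader scanning primary-then-secondary
    // always finds it in at least one place
    class TransitionExample {
        final Map<Integer, Runnable> pending = new ConcurrentHashMap<>();
        final Map<Integer, Runnable> running = new ConcurrentHashMap<>();

        void promote(int id) {
            Runnable task = pending.get(id);
            if (task != null) {
                running.put(id, task); // visible in 'running' first...
                pending.remove(id);    // ...then removed from 'pending'
            }
        }
    }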
Many codepaths only end up with one iterator being used at a time, and
most of the rest use at most two, so a static pool of three is wasteful.
This also allows us to efficiently handle cases that exceed three
iterators in use. Overall this dramatically increases the hit rate and
results in fewer iterators being created.
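A sketch of the idea, using a hypothetical resettable iterator type rather
than the actual classes:

    import java.util.ArrayDeque;

    // Hedged sketch: a growable free-list instead of a fixed array of
    // three, so bursts of concurrent iterators still come from the pool
    class IteratorPool {
        private final ArrayDeque<ResettableIterator> free = new ArrayDeque<>();

        ResettableIterator acquire(Object[] data) {
            ResettableIterator it = free.poll();
            if (it == null) {
                it = new ResettableIterator(); // grow past three under bursts
            }
            it.reset(data);
            return it;
        }

        void release(ResettableIterator it) {
            free.push(it); // most paths use 1-2 at once, so hits dominate
        }
    }

    class ResettableIterator {
        private Object[] data;
        private int index;
        void reset(Object[] data) { this.data = data; this.index = 0; }
        boolean hasNext() { return index < data.length; }
        Object next() { return data[index++]; }
    }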
This ArrayList duplicates part of the functionality of the much more
efficient chunk map, so it can be removed; the map can be used in the few
places it was needed.
Replace uses of LongHashtable and LongHashset with new implementations.
Remove EntryBase, LongBaseHashtable, LongHashset, and LongHashtable as they
are no longer used.
LongObjectHashMap does not use Entry or EntryBase classes internally for
storage, so it has much lower object churn and greater performance.
LongHashSet is not as much of a performance win for our use case, but for
general use it is up to seventeen times faster than the old implementation
and is in fact faster than alternatives from "high performance" Java
libraries. This is being added so that if someone tries to use it in the
future in a place unrelated to its current use, they don't accidentally
end up with something slower than the Java collections HashSet
implementation.
Avoid overhead of using an ArrayList and resizing it. Also allows for reuse
of objects in the pool during the same tick by explicitly releasing them
back to the pool. This allows for much better cache performance as well
as reduced cache footprint.
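A sketch of such a tick-scoped pool, with hypothetical names (not the
actual pool class):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    // Hedged sketch: objects are handed out from a backing array and can
    // be explicitly returned for reuse within the same tick
    class TickPool<T> {
        private final List<T> backing = new ArrayList<>();
        private int next; // index of the next object to hand out

        T acquire(Supplier<T> factory) {
            if (next >= backing.size()) {
                backing.add(factory.get()); // grow only when first needed
            }
            return backing.get(next++);
        }

        void release() {
            next--; // the most recent object is immediately reusable this tick
        }

        void endTick() {
            next = 0; // the whole pool is reusable next tick, no reallocation
        }
    }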
Remove a redundant ArrayList to avoid excessive object creation and CPU
overhead; the entries were added to the list and then immediately iterated
through to run, so just run them directly.
Swap the order of some conditionals to perform the more efficient check
first; if it fails, the list lookup will not be executed.
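For example, relying on Java's short-circuit evaluation (names
illustrative):

    import java.util.List;

    class OrderExample {
        // the cheap flag check runs first, so the expensive list lookup
        // is skipped whenever the flag fails
        static boolean shouldTick(boolean cheapCheck, List<Object> list, Object entity) {
            return cheapCheck && list.contains(entity);
        }
    }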
Remove profiling hooks including some rather expensive calls to getSimpleName.
ChunkSection.e() is called once per chunk section loaded and is quite
expensive (about 20% of CPU time for loading the chunk). This changes the
logic to add a fast path when extended block data is not being used and
reorganizes the loops for more optimal array traversal. Overall this saves
about 20-30% CPU time in this method.
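In outline, with illustrative field names in place of the obfuscated ones:

    // Hedged sketch of the fast path: when there is no extended block
    // data, ids can be recounted with a simple linear pass over the array
    class ChunkSectionExample {
        int nonEmptyBlockCount;

        void recount(byte[] blockIds, byte[] extendedIds /* may be null */) {
            int count = 0;
            if (extendedIds == null) {
                // fast path: linear traversal, no per-block extended lookup
                for (byte id : blockIds) {
                    if (id != 0) {
                        count++;
                    }
                }
            } else {
                // slow path: combine the low byte with the extended nibble
                for (int i = 0; i < blockIds.length; i++) {
                    int id = (blockIds[i] & 0xFF) | (nibble(extendedIds, i) << 8);
                    if (id != 0) {
                        count++;
                    }
                }
            }
            nonEmptyBlockCount = count;
        }

        private static int nibble(byte[] data, int index) {
            int value = data[index >> 1] & 0xFF;
            return (index & 1) == 0 ? value & 0xF : value >>> 4;
        }
    }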