By using return 0, we exit the loop prematurely, preventing other
creature types from being spawned if one type's limit is set to 0. By
using continue we move on to the other types and allow them to spawn
properly.
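As an illustration only (the names and spawn logic below are hypothetical,
not the actual spawning code), the difference looks roughly like this:

    import java.util.Map;

    // Hypothetical sketch showing why continue is used instead of return 0
    // when a creature type's limit is 0.
    class SpawnLoopSketch {
        static int spawnAll(Map<String, Integer> limits) {
            int spawned = 0;
            for (Map.Entry<String, Integer> entry : limits.entrySet()) {
                if (entry.getValue() <= 0) {
                    // return 0;  // wrong: aborts the loop, later types never spawn
                    continue;     // right: skip only this type and try the rest
                }
                spawned += entry.getValue(); // stand-in for the real per-type spawn logic
            }
            return spawned;
        }
    }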
After further testing, it appears that while the original LongHashtable
has issues with object creation churn and is severely slower than even
java.util.HashMap in general-case benchmarks, it is in fact very efficient
for our use case.
With this in mind I wrote a replacement LongObjectHashMap modeled after
LongHashtable. Unlike the original implementation, this one does not use
Entry objects for storage, so it does not have the same object creation
churn. It also uses a 2D array instead of a 3D one and does not use a
cache, as benchmarking shows this is more efficient. The "bucket size" was
chosen by benchmarking the performance of the HashMap with contents that
would be plausible for a 200+ player server. This means it uses a little
extra memory on smaller servers, but it almost always uses less than the
normal java.util.HashMap.
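For illustration, here is a minimal sketch of the storage layout described
above; the bucket count, names, and growth strategy are assumptions, not
the actual implementation:

    import java.util.Arrays;

    // Illustrative sketch of a long-keyed map stored in parallel 2D arrays
    // with no per-entry objects; not the actual LongObjectHashMap code.
    class LongKeyedMapSketch<V> {
        private static final int BUCKET_COUNT = 8192; // assumed; the real size was chosen by benchmarking

        private final long[][] keys = new long[BUCKET_COUNT][];       // bucket -> packed keys
        private final Object[][] values = new Object[BUCKET_COUNT][]; // bucket -> matching values

        private static int bucketIndex(long key) {
            // a mixer (see the MurmurHash-based one below) would normally run first
            return (int) (key ^ (key >>> 32)) & (BUCKET_COUNT - 1);
        }

        @SuppressWarnings("unchecked")
        V get(long key) {
            int index = bucketIndex(key);
            long[] bucketKeys = keys[index];
            if (bucketKeys == null) {
                return null;
            }
            for (int i = 0; i < bucketKeys.length; i++) {
                if (bucketKeys[i] == key) {
                    return (V) values[index][i];
                }
            }
            return null;
        }

        boolean containsKey(long key) {
            return get(key) != null;
        }

        void put(long key, V value) {
            int index = bucketIndex(key);
            long[] bucketKeys = keys[index];
            if (bucketKeys == null) {
                keys[index] = new long[] { key };
                values[index] = new Object[] { value };
                return;
            }
            for (int i = 0; i < bucketKeys.length; i++) {
                if (bucketKeys[i] == key) {
                    values[index][i] = value; // overwrite existing entry
                    return;
                }
            }
            // append to the bucket; a real implementation would grow in larger steps
            int size = bucketKeys.length;
            keys[index] = Arrays.copyOf(bucketKeys, size + 1);
            values[index] = Arrays.copyOf(values[index], size + 1);
            keys[index][size] = key;
            values[index][size] = value;
        }
    }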
To make up for the original LongHashtable being a poor choice for generic
datasets, I added a mixer to the new implementation based on code from
MurmurHash. While this has no noticeable effect, positive or negative,
with our normal use of chunk coordinates, it makes the HashMap perform
just as well with nearly any kind of dataset.
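For reference, the kind of mixer meant here is along the lines of
MurmurHash3's 64-bit finalizer; this is a generic sketch, not necessarily
the exact code added:

    final class HashMixerSketch {
        // MurmurHash3-style fmix64 finalizer: spreads entropy across all
        // bits so bucket selection stays uniform even for clustered inputs
        // such as chunk coordinates.
        static long mix(long key) {
            key ^= key >>> 33;
            key *= 0xff51afd7ed558ccdL;
            key ^= key >>> 33;
            key *= 0xc4ceb9fe1a85ec53L;
            key ^= key >>> 33;
            return key;
        }
    }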
After these changes, ChunkProviderServer.isChunkLoaded() goes from using
20% of CPU time while sampling to not showing up at all after 45 minutes
of sampling, because its CPU usage is too low to be noticed.
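For context, the isChunkLoaded() use case boils down to packing chunk
coordinates into a single long key and doing a direct lookup; the packing
scheme and names below are assumptions, not the project's exact code:

    // Illustrative chunk lookup using the sketch map above.
    class ChunkLookupSketch {
        static long chunkKey(int x, int z) {
            return ((long) x << 32) | (z & 0xFFFFFFFFL); // high bits: x, low bits: z
        }

        public static void main(String[] args) {
            LongKeyedMapSketch<Object> chunks = new LongKeyedMapSketch<>();
            chunks.put(chunkKey(10, -3), new Object());               // "load" a chunk
            System.out.println(chunks.containsKey(chunkKey(10, -3))); // true
            System.out.println(chunks.containsKey(chunkKey(0, 0)));   // false
        }
    }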
Replace uses of LongHashtable and LongHashset with new implementations.
Remove EntryBase, LongBaseHashtable, LongHashset, and LongHashtable as they
are no longer used.
LongObjectHashMap does not use Entry or EntryBase classes internally for
storage, so it has much lower object churn and greater performance.
LongHashSet is not as much of a performance win for our use case, but for
general use it is up to seventeen times faster than the old implementation
and is in fact faster than alternatives from "high performance" Java
libraries. It is being added so that if someone uses it in the future in a
place unrelated to its current use, they don't accidentally end up with
something slower than the Java collections HashSet implementation.
Added newlines at the end of files
Fixed improper line endings on some files
Matched start/end comments
Added some missing comments for diffs
Fixed syntax in some spots
Minimized some diffs
Removed some files that are no longer used
Added comments on some required files with no changes
Fixed imports of items used once
Added imports for items used more than once
We know these updates (can) break plugins that bypass Bukkit. They are
needed for smooth updates, however. There will be another one right before
1.1-R1.