CeruleanJS Twitch Stream
January 27, 2016

CeruleanJS's FICS bot is being streamed 24/7 from my Twitch.tv account. Watch it there or on the embedded player below:
CeruleanJS is now on the Free Internet Chess Server (FICS) under the handle CeruleanJS. Here are the specs:

CeruleanJS is currently unrated but seeking rated 2-minute games. Give it a whirl!
CeruleanJS v0.1.1 (Blizzard) has been released! Windows 64-bit binaries are available from the Downloads page. The source is available from BitBucket and NPM as usual.
Note: v0.1.0 was released briefly; however, it contains a Polyglot opening book bug on castling moves. Please be advised and use v0.1.1 instead.
Changes in this release include:

- BLACK = 0, WHITE = 1
- moves command
- colors dependency
- stdio dependency

After some hacking on the polyglot move branch, CeruleanJS is almost ready for its next major version release. With the additional cost of rank and file calculation to compute the Polyglot Zobrist key, move generation performance is down to about 2M moves/s. I have some ideas on how to improve this.
I’ve also corrected some sign errors that were causing issues with the evaluation function. Time to do some testing!
Below are the results of a 40/4 tournament pitting CeruleanJS against a weak field of engines. All engines used the built-in
   Engine               Score    Ce      Rating
1: Ceruleanjs           17.5/25  ·····   ~1350
2: mACE                  3.0/5   =0=11    1525
3: GrayMatterSVN1604     2.5/5   1010=    1192
4: Cassandre-Intel       1.0/5   =000=    1116
4: Pierre17              1.0/5   00010    1201
6: NSVChess v0.14        0.0/5   00000     958
25 games played / Tournament is finished
Level: Tournament 40/4
Hardware: Intel(R) Core(TM) M-5Y71 CPU @ 1.20GHz with 7.9 GB Memory
Operating system: Windows 10 Home Edition (Build 9200) 64 bit
CeruleanJS features a Strategic Test Suite (STS) implementation. Running through the first 10 suites at 1 second per move on my machine results in a score of 4,141 out of a possible 10,000. This gives a comparable ELO rating of 1412! This measure cannot be relied upon entirely, as these are manufactured test positions, but it's a ballpark measure of the strength of CeruleanJS.
The above results are promising and provide the main metrics I’m going to optimize for (ELO and STS score). Woo!
CeruleanJS has a pseudo-legal move generation algorithm. It generates all possible moves for a position (even ones that put the king in check or castle the king through check), and full legality is tested during the addMove() function. This is because the move needs to be added before check detection can work. Fast move generation is key to a strong chess engine: the more moves you can generate and evaluate per second, the stronger it will be. This post is about my experiences optimizing CeruleanJS's move generation.
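To make that flow concrete, here is a minimal sketch of the make-then-verify pattern. The helper names (applyMove, subtractMove, isAttacked, kingIndex, tryMove) are illustrative stand-ins, not CeruleanJS's actual API:

// Sketch only: board and its helpers are assumptions for illustration.
function tryMove(board, move) {
    const movingSide = board.turn;
    board.applyMove(move);               // the pseudo-legal move is made unconditionally

    // Only after the move is made can we ask whether the mover's king
    // is attacked by the side that is newly on move.
    if (board.isAttacked(board.kingIndex(movingSide), board.turn)) {
        board.subtractMove(move);        // illegal: undo and reject
        return false;
    }

    return true;                         // the move was legal after all
}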
At the start of this document, CeruleanJS was weighing in at a measly 200,000 moves/s on my MacBook Pro. Cerulean (the original C implementation) managed 20,000,000 moves/s in a single thread. CeruleanJS hopes to achieve this level of performance; the question is: can it?
A piece list is a cache of which board indices are occupied by which side. CeruleanJS has two piece lists, one each for white and black. Think of it as a way to avoid looping over all 64 squares: instead, we only need to loop over the pieces we're generating moves for (at most 32).
The first iteration of CeruleanJS contained a dead simple piece list implementation:
class PieceList {
    constructor() {
        this.indices = [];
    }

    push(index) {
        this.indices.push(index);
    }

    remove(index) {
        let reverseIndex = this.indices.indexOf(index);
        this.indices.splice(reverseIndex, 1);
    }
}
There are a couple of things wrong with this implementation. First, the indices array starts with a length of 0, so each push has to allocate more memory to store the new item. Second, removing an index is expensive: it requires a linear scan of the indices array to find the specified index, which is O(n). Third, indices is spliced to remove the found index. This reduces the size of the array, but forces all values after reverseIndex to be shifted by one. I can't speculate on the internals of the JS Array data structure, but this may cause a rewrite of up to 16 squares (again, O(n)). What data structure would allow quick insertion and removal?
What if we implement a scheme such that when an index is removed, it is replaced by the current last element in the list? A piece list only ever needs to hold at most 16 pieces, so we can allocate all 16 elements up front. Also, what if we maintain a reverse board array that maps a board index to its position in the piece list? This would remove the linear scan needed to find the element to remove.
class PieceList {
    constructor() {
        this.indices = new Array(16);
        this.reverse = new Array(constants.WIDTH * constants.HEIGHT);
        this.length = 0;
    }

    push(index) {
        this.reverse[index] = this.length;
        this.indices[this.length] = index;
        this.length++;
    }

    remove(index) {
        this.length--;
        var reverseIndex = this.reverse[index];
        this.indices[reverseIndex] = this.indices[this.length];
        this.reverse[this.indices[reverseIndex]] = reverseIndex;
        this.indices[this.length] = undefined;
        this.reverse[index] = undefined;
    }
}
This allows for an O(1) piece list implementation for both adding and removing items.
The let keyword, new in ES6, allows variables to be defined at the block level (for, if, while, do, or { }). This is great for encapsulating variables in a for loop or if statement. However, when let is used two levels deep in nested for loops, all that variable allocation and deallocation can be expensive and can generate a lot of garbage.

The solution is to use a single variable declared outside of any loop and to modify its value as necessary. This trades off safety for performance, but resulted in a large performance increase for CeruleanJS :).
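As a contrived sketch of the change (the variable names and offsets below are placeholders, not CeruleanJS's actual code):

// Placeholder data for the sketch.
const pieceIndices = [21, 22, 23];     // example board indices
const deltas = [-11, -10, -9, 1];      // example direction offsets

// Block-scoped version: new let bindings in every iteration of the nested loops.
for (let i = 0; i < pieceIndices.length; i++) {
    for (let j = 0; j < deltas.length; j++) {
        let target = pieceIndices[i] + deltas[j];
        // ... generate a move to target ...
    }
}

// Hoisted version: the same variables are declared once and reused.
var i, j, target;

for (i = 0; i < pieceIndices.length; i++) {
    for (j = 0; j < deltas.length; j++) {
        target = pieceIndices[i] + deltas[j];
        // ... generate a move to target ...
    }
}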
A significant performance increase was noticed when two lookup objects were converted to arrays. These lookup objects were used in tight loops (generateMoves(), addMove(), subtractMove()) where the additional overhead of casting a value to a string was cost prohibitive.
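For illustration (the piece codes and values here are made up, not CeruleanJS's tables), the difference looks something like this:

// Object lookup: the numeric piece code is coerced to a string key
// ("4") on every access inside the hot loop.
const pieceValuesObject = { 1: 100, 2: 300, 3: 300, 4: 500, 5: 900, 6: 0 };

// Array lookup: the piece code indexes a dense array directly,
// with no string conversion.
const pieceValuesArray = [0, 100, 300, 300, 500, 900, 0];

const piece = 4;                          // e.g. a rook code
const slow = pieceValuesObject[piece];    // string-keyed lookup
const fast = pieceValuesArray[piece];     // integer-indexed lookup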
CeruleanJS uses an internal history array to restore unrecoverable information (i.e. information that cannot be inferred from the position) during the subtractMove() function. Typical examples of unrecoverable information are castling rights, the en passant square, and the half-move clock.
It makes sense for addMove() and subtractMove() to be exact inverses of each other, where addMove() pushes a new array onto the history array and subtractMove() pops it off. This, however, would allocate an array for every move. As it turns out, at each node in the search tree, all subsequent moves share the same history. Therefore we only need to save to the history array once per move generation instead of once for every move, saving dozens of array allocations.
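Roughly, the change looks like this (the field names and the placement of the final pop are assumptions for illustration, not CeruleanJS's actual code):

// Before: every addMove() pushed its own snapshot and every subtractMove()
// popped it, allocating a new array for every move tried.
//
// After: one shared snapshot per generateMoves() call.
function generateMoves(board) {
    // All sibling moves at this node restore from the same entry.
    board.history.push([board.castlingRights, board.enPassant, board.halfMoveClock]);
    const moves = [];
    // ... fill moves with pseudo-legal moves ...
    return moves;
}

function subtractMove(board, move) {
    // ... undo the piece movement on the board ...
    // Restore shared state from the top entry without popping it, so the
    // next sibling move can reuse it; the entry is released once the search
    // backs out of this node.
    const saved = board.history[board.history.length - 1];
    board.castlingRights = saved[0];
    board.enPassant = saved[1];
    board.halfMoveClock = saved[2];
}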
This may seem obvious to other chess programmers, but using a single 32-bit integer to represent a move is significantly faster than using an object structure like an array. CeruleanJS initially used a simple array, [from, to, promotion]. [] in JavaScript is short for new Array(), so in reality you're doing a lot of object allocation. Integers create less garbage in this respect.
Cerulean’s move data structure is:

ORDER  BITS   CAP PRO TO      FROM
000000 000000 000 000 0000000 0000000
^ MSB                          LSB ^

This breaks down to the following distribution:

FROM:  7 bits
TO:    7 bits
PRO:   3 bits
CAP:   3 bits
BITS:  6 bits
ORDER: 6 bits
This dense move structure requires less data to be saved on the board’s internal history array.
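As a sketch of how such a packed move might be built and read back (the shift amounts below are inferred from the diagram above, not taken from CeruleanJS's source):

// Field widths from the diagram: FROM 7, TO 7, PRO 3, CAP 3, BITS 6, ORDER 6.
const FROM_SHIFT  = 0;
const TO_SHIFT    = 7;
const PRO_SHIFT   = 14;
const CAP_SHIFT   = 17;
const BITS_SHIFT  = 20;
const ORDER_SHIFT = 26;

function packMove(from, to, promotion, captured, bits, order) {
    return (from << FROM_SHIFT) |
           (to << TO_SHIFT) |
           (promotion << PRO_SHIFT) |
           (captured << CAP_SHIFT) |
           (bits << BITS_SHIFT) |
           (order << ORDER_SHIFT);
}

function moveFrom(move) {
    return (move >>> FROM_SHIFT) & 0x7F;   // low 7 bits
}

function moveTo(move) {
    return (move >>> TO_SHIFT) & 0x7F;     // next 7 bits
}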
Your pseudo-legal move generator has a lot of information about the move you're generating. Is it a pawn move? Is it a capture? Is it a double push? Save this information in a metadata field in your move and base your addMove()/subtractMove() functions on it.
Originally CeruleanJS only passed in [from, to, promotion] and inferred all other information from the board state. This proved to add a lot of overhead to addMove()/subtractMove() when the information is already available earlier, in the pseudo-legal move generator. For a capture, for instance, you'd essentially be checking twice that board[to] is occupied by an opponent piece: once in generateMoves() and once in addMove().
CeruleanJS switched to “bits”-based addMove()/subtractMove() functions, where the “bits” flag in a move is used to switch() to specialized handling for that bit type.
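A rough sketch of what that dispatch can look like (the flag values, board layout, and helper name below are assumptions for illustration, not CeruleanJS's constants):

// Illustrative flag values only.
const BITS_NORMAL     = 1;
const BITS_CAPTURE    = 2;
const BITS_EN_PASSANT = 4;
const BITS_CASTLING   = 8;

function applyMove(board, from, to, bits) {
    switch (bits) {
        case BITS_CAPTURE:
            // The captured piece was recorded in the move's CAP field at
            // generation time, so there is no need to re-inspect board[to].
            // Falls through to the quiet-move handling.
        case BITS_NORMAL:
            board[to] = board[from];
            board[from] = 0;
            break;
        case BITS_EN_PASSANT:
            board[to] = board[from];
            board[from] = 0;
            // also remove the passed pawn (its square depends on the side to move)
            break;
        case BITS_CASTLING:
            // move both the king and the corresponding rook
            break;
    }
}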
The move generator is a single function, generateMoves(), which loops over all pieces for a side and has a big switch statement on the piece we're looking at (pawn, knight, bishop, rook, queen, king). This calls a bunch of other functions: pawnMoves(), rookMoves(), etc.
Originally these functions returned a list which was concatenated to the main move list.
Instead of using concat, we can use a single move array that is passed by reference to all of the sub-functions (pawnMoves(moves), etc.), which then modify it. Passing a single mutable array by reference to other functions that modify it is considered bad practice almost anywhere JavaScript is written. However, by doing this we reduce the number of garbage arrays (and .concat() calls) created during the move generation process.
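A sketch of the pattern (the helper and board method names here are illustrative, not CeruleanJS's real ones):

// One shared moves array is passed into every piece-specific generator,
// instead of each generator returning its own array to be .concat()'d.
function generateMoves(board) {
    const moves = [];

    for (const index of board.pieceIndices(board.turn)) {
        switch (board.pieceTypeAt(index)) {
            case 'p':
                pawnMoves(board, index, moves);
                break;
            case 'r':
                rookMoves(board, index, moves);
                break;
            // ... knight, bishop, queen, king ...
        }
    }

    return moves;
}

function pawnMoves(board, index, moves) {
    // push generated moves directly into the shared array, e.g.
    // moves.push(packMove(index, target, 0, 0, bits, 0));
}

function rookMoves(board, index, moves) {
    // same idea for sliding pieces
}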
Switching to a TypedArray makes the lookup of board positions significantly faster. Roughly a 2x speed increase was noticed when switching to this, because fewer checks are required on item access in JavaScript.
An additional increase of 14% was seen switching from Arrays to Uint32Array in the PieceList.
Testing showed this would not be as advantageous for move lists (due to the number of Uint32Array allocations, which can be as large as 218 in one turn!). A possible optimization here is to use one move list per depth and maintain the list of preallocated move lists in the board. More testing is needed on this front.
A number of other data structures in my application could be switched to TypedArrays, namely the Zobrist keys, which are looked up frequently during the addMove()/subtractMove() cycle. However, this did not prove to be as beneficial, since the Zobrist keys are currently stored in a multi-dimensional data structure. JavaScript only supports one-dimensional TypedArrays, and the multiplication required to compute the key lookup took away any possible speed improvement.
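For reference, flattening the Zobrist table into a one-dimensional TypedArray looks roughly like this (the sizes and the 32-bit storage are assumptions; real Zobrist keys are 64-bit and would need two halves in JavaScript):

const PIECES  = 12;   // 6 piece types x 2 colours (assumption)
const SQUARES = 64;

// One flat array instead of a 2D zobrist[piece][square] lookup.
const zobristFlat = new Uint32Array(PIECES * SQUARES);

function zobristKey(piece, square) {
    // The multiply-and-add on every lookup is the cost that cancelled
    // out the TypedArray gains.
    return zobristFlat[piece * SQUARES + square];
}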
As a whole, these improvements gave a 10-15x speed boost for CeruleanJS, which now clocks in at around 2,500,000 moves/s. The performance remains an order of magnitude slower than Cerulean in C (and two orders of magnitude slower than a well-optimized, state-of-the-art chess move generator), but it will suffice for a reasonably strong JavaScript chess engine.
Until next time, happy chessing!
Welcome to CeruleanJS!
After a 24-hour coding spree, the release of CeruleanJS v0.0.1 (Azure) is available. This release contains a working search (Principal Variation Search) with a simplified evaluation function that takes into account material difference and piece square tables.
CeruleanJS has CECP support so it can be played using a tool like XBoard or Arena. You can also play it directly using the command line interface.
Mocha unit tests are in place to verify FEN board getting and setting and move generation perft values. Builds are being run on TravisCI (thanks to their generous free build server for open source projects!). This helps ensure the core features continue to work while I quickly iterate and add features.
No binaries are available for this initial release, which is still undergoing heavy testing in a bunch of scenarios. Eventually, native builds will be generated for Windows using nexe. These builds will have no dependency on a local copy of NodeJS, making for simpler distribution to Windows users. In the meantime, the source is available on BitBucket.
Below is a GIF created using LICEcap of the STDIN/STDOUT interface: