        ===============================================================
                                 LITTLE GOLIATH
                            WINBOARD/UCI CHESS ENGINE
                 VERSION 3.5a (AND NEWER) WITH UCI SUPPORT ONLY

                                  VERSION 3.16
        ===============================================================

   Little Goliath v3.16 ("Evolution") - Neural Network Version

     ### WHAT IS NEW? ###

     Many minor changes, bug fixes and code optimisations (for more speed).

     The major improvement is the introduction of neural networks for
     extension and pruning decisions.

     I have also experimented with networks for move ordering, unfortunately
     without significant success (too much overhead).

     Still no bitboard technology or SMP.

     In the next step, I will probably integrate my pattern recognition from
     the old "Goliath Pro", together with its bitboard structure, as this
     should bring a lot of performance for move ordering (quality) and many
     other things (e.g. SEE).

     My special thanks go to Kurt Utzinger (especially for his excellent
     game analyses) and Jörg Burwitz for the many tests, matches and
     constructive feedback.


   Little Goliath v3.15.4 ("Evolution") - Neural Network Version

     ### WHAT IS NEW? ###

     Another version with further bug fixes and small improvements before
     the coming main release. As my job keeps me very busy at the moment,
     I am behind schedule. Apologies for this.

     I have received numerous bug reports and have, I think, taken them all
     into account, especially the overflow of the node counter and other
     variables (which resulted in hideous display output).

     The UCI communication has been optimised (it now shows fail highs/lows
     of the principal variation and a more accurate hash table utilisation).

     The maximum calculation depth is now set to 63 half-moves.

     In addition, I have again significantly increased pruning, even along
     the principal variation.


   Little Goliath v3.15.3 ("Evolution") - Neural Network Version

     ### WHAT IS NEW? ###

     This is not yet the announced main release (the generation of the
     networks and the adjustments are in progress; I am aiming for the end
     of February 2022 for the release), but only a bug fix plus minor
     optimisations.

     Bugfix: a displayed principal variation that is too long could cause a
     buffer overflow, which in the best case results in a crash, but can
     often also affect the search. The problem is also present in previous
     generations of the engine, but only occurs at extremely high
     calculation depths (mostly in the late endgame) with complete principal
     variations, so it is very rare.

     The maximum calculation depth is now 62 instead of 49 half-moves (more
     is not possible without drastic changes to the program).

     Hash tables may now be up to 2000 MB in size (previously 512 MB).

     In addition, I have changed a small thing in the search. Until now,
     pruning in positions close to the principal variation was generally
     forbidden. I have almost completely removed this restriction; the gain
     in search depth seems to more than compensate for the possible
     disadvantages here.

     I would like to thank all interested Goliath fans/friends and testers
     for the mostly positive feedback. I will try to respond to all
     questions and suggestions individually and promptly. Here are just a
     few frequently asked questions:
	 
     When calculating the nps (nodes per second), only those moves are
     counted that are actually executed on the engine's "internal board"
     during the search. I think this is the only sensible approach. The many
     questions about this suggest that other programmers report the number
     of moves generated by the move generator; that seems a doubtful
     approach to me, and it was not common during my last active period in
     2005/2006. I also caution against drawing conclusions about the playing
     strength of a program from its nps. Goliath is certainly extremely
     optimised in this respect to achieve maximum speed (further adaptations
     to modern hardware/CPUs are also possible and envisaged here). I have
     therefore removed the limit on hash table size as far as possible, as
     the old maximum is filled very quickly on modern computers, even in
     blitz. Basically, the nps serve as an orientation for me during
     development: the smallest changes can sometimes have a significant
     impact on cache behaviour, which is then reflected in the nps at first
     glance and makes further tests unnecessary.
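     To illustrate this counting convention, here is a minimal sketch (in
     Python, not Goliath's actual code) of a toy negamax search over a
     hypothetical board interface: the node counter is incremented exactly
     when a move is executed on the internal board, never at move
     generation time.

```python
class ToyBoard:
    """Hypothetical stand-in for an engine's internal board."""
    def __init__(self):
        self.stack = []

    def legal_moves(self):
        return [0, 1]              # two dummy moves in every position

    def make(self, move):
        self.stack.append(move)    # execute the move on the board

    def unmake(self, move):
        self.stack.pop()

    def evaluate(self):
        return 0                   # dummy static evaluation


def negamax(board, depth, stats):
    """Toy negamax; stats['nodes'] counts only moves actually
    executed on the board, per the convention described above."""
    if depth == 0:
        return board.evaluate()
    best = -10**9
    for move in board.legal_moves():
        board.make(move)
        stats['nodes'] += 1        # counted here, not at generation time
        best = max(best, -negamax(board, depth - 1, stats))
        board.unmake(move)
    return best
```

     The nps figure is then stats['nodes'] divided by the elapsed search
     time; a move generator may produce far more moves than are ever
     executed, which is why the two conventions give such different numbers.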
	 
     A design for multiprocessors is standard nowadays. However, I will only
     implement this once it is clear that Goliath on a single core can keep
     up with the top group, or is at least close to it.
	 
     Adding history pruning and LMR to the engine "on the fly" and seeing an
     increase in playing strength was one of the frequent suggestions. I
     have looked at LMR in Stockfish, among other places, and see overlap
     with my "BenMax pruning", which I have been using since about 2001 and
     which is very mature; switching to LMR should not bring me any
     advantage here. Personally, I do not think much of history pruning,
     even if other engines benefit from it. I do rely a lot on statistics
     and evaluations, but I only use them when sorting moves. As a chess
     player, I simply do not like the approach of cutting off entire
     subtrees just because a move is "often" bad. Tactical moves in
     particular often go hand in hand with a bad history, but can be correct
     in the position currently being examined (and only there). If I use
     history pruning in Goliath, it achieves a significantly higher search
     depth, but the tactical power decreases considerably. In a few thousand
     games I could only see a slight advantage in terms of results; the
     tactics suffer too much, which makes it too unattractive for analysis
     to my taste. But yes, maybe there is an approach to make history
     pruning compatible with the engine. Primarily, however, I am pursuing a
     selective approach (type B) and am concentrating in this direction
     during development (plus neural networks for controlling the search,
     i.e. in addition to positional evaluation). In any case, the search of
     the engine is currently still too shallow. I hope that the upcoming
     version with the neural networks for pruning/extensions will solve this
     problem.
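     For readers unfamiliar with the technique discussed here: late move
     reductions (LMR), as used in open-source engines such as Stockfish,
     search late, quiet moves at reduced depth first, re-searching at full
     depth only on a fail high. A minimal sketch of a typical reduction
     formula follows; the constants and thresholds are illustrative, not
     taken from any particular engine.

```python
import math

def lmr_reduction(depth, move_number, is_tactical, is_pv):
    """Return how many plies to reduce the search depth for this move.
    Only late, quiet moves in non-PV nodes at sufficient depth are
    reduced; if the reduced search fails high, a real engine would
    re-search the move at full depth (not shown here)."""
    if depth < 3 or move_number < 4 or is_tactical or is_pv:
        return 0                     # search at full depth
    # A common shape: the reduction grows with both depth and move number.
    return int(0.5 + math.log(depth) * math.log(move_number) / 2.25)
```

     History pruning, by contrast, cuts such moves off entirely based on
     their statistics, which is exactly the trade-off between depth and
     tactical reliability described above.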


   Foreword to the first version after 16 years

   About 16 years ago I lost the desire to play chess and since then I 
   have neither played a tournament game nor dealt with chess programming. 
   Until a few weeks ago, I didn't even know who or what Stockfish was or 
   what the name of the reigning (human) World Chess Champion was. 
   By chance, I became aware of Stockfish and NNUE in the course of my 
   professional work and involvement with neural networks. Here I 
   recognised certain parallels to my own attempts around the turn of the 
   millennium to control the search function of chess programs by using 
   neural networks, i.e. to simulate human thinking, so to speak (only the 
   promising variants are then further investigated by a classical A/B 
   search). The results at the time were mixed. While solutions were found
   in seconds in individual positions (for which a normal search would 
   have taken days), the approach failed in the practical game due to
   insufficient coverage of relevant patterns. According to my estimation
   at the time (based on the evaluation of several million games), there
   were around 10,000 of these. Generating the networks took a lot of time.
   Even the smallest additions or necessary changes took days of computing
   power. In the end, I had only integrated about 20% of the patterns into
   the net and, in view of the calculations that were still necessary, I 
   gave up in frustration and postponed everything until a later date. I 
   had not expected it to take 16 years. Actually, I didn't expect to ever
   take up chess again (I loved the game and was a tournament player with 
   almost 2300 ELO). As far as I can tell, hardly anything has changed in 
   the last 16 years in terms of programming (in the sense of new search 
   techniques, apart from NNUE), even if the programs (above all 
   Stockfish) have become enormously powerful due to excessive fine-tuning.
   Now I want to revisit the topic, since the necessary fast hardware is 
   also available nowadays.

   So what's the next step?

   I will generate neural networks for positional evaluation, for 
   pruning/extension decisions and (the core) for the generation of 
   patterns/plans, which then control the actual search.

   In the first step, I replace the classical positional evaluation of the
   last "Little Goliath" engine with a neural network evaluation. This 
   should allow an initial assessment of whether such an approach makes 
   sense at all in interaction with the "old algorithms" (early results
   suggest that it does). 

   In a second step, the other networks (pruning/extension and planning)
   will follow. Along with an optimisation of the engine with regard to 
   current hardware standards. The engine is already in progress, and the 
   generation of the extended networks is also underway.


   Little Goliath v3.15 ("Evolution") - Neural Network Version

     ### WHAT IS NEW? ###

     The engine uses a neural network for positional evaluation. In direct
     comparison with the old engine, which uses a manually tuned evaluation
     function, this engine scores almost 100% in over 1000 test games (only
     a few draws). This is obviously significantly more than other programs
     have achieved when switching to neural networks. However, it does not
     seem to be due to the net itself but to its effect on a pruning system,
     which I call "path finding pruning" and which seems to depend
     essentially on an exact positional evaluation. If this pruning is
     deactivated in both engines, the NN version "only" wins with about 70%.
     If I leave the pruning activated in the classic engine and play it
     against the same version without it, the version with pruning is only
     just ahead (approx. 52%). On fast hardware, this also corresponds to
     the results from about 16 years ago. Consequently, the neural network,
     i.e. the more exact positional evaluation, seems to have an extreme
     effect here. This is an important approach for future program versions;
     perhaps much more can be achieved by tuning (and by other techniques).
     Another effect is the significantly lower search depth overall. Here,
     too, the network seems to have an effect: the search is shallower, but
     more accurate. Again, one approach will be to prune even more
     aggressively. I always thought the pruning in Little Goliath was very
     aggressive; however, compared to Stockfish, it is not really aggressive
     at all. I rejected similar approaches at the time because, as a chess
     player, I was of the opinion that tactical lines are of decisive
     importance for playing strength and that pruning here should not be
     too intensive. Obviously a misjudgement, which I will look at more
     closely.
	 
     In addition, "Singular Extensions" are now activated by default. This
     was not the case before. Here, too, the net seems to have a positive
     effect. Since I am often asked: I use singular extensions in the
     classic form, i.e. as described by Hsu with regard to Deep Blue. I have
     only changed the conditions and considerably reduced the effort of
     testing for singularity (at the expense of accuracy). The method of
     controlling singular extensions via the hash tables, which is
     apparently "state of the art" today, was already tested by me 20 years
     ago in many program versions without success. Even today, with a 1:1
     implementation as in Stockfish, I cannot achieve any improvement with
     Little Goliath (rather the opposite). Why this is so will certainly be
     the subject of further investigation.
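     The classic singularity test in the spirit of Hsu's Deep Blue work can
     be sketched as follows: the best move is extended by one ply when every
     alternative scores clearly below it (in a real engine the alternatives
     are searched at reduced depth against a lowered bound). The margin and
     interface below are hypothetical, and Goliath's actual conditions
     differ, as noted above.

```python
def is_singular(best_score, alternative_scores, margin=50):
    """The best move is 'singular' if all alternatives fall clearly
    short of it; in a real engine the alternatives are searched at
    reduced depth against the bound best_score - margin."""
    return all(s <= best_score - margin for s in alternative_scores)

def singular_extension(best_score, alternative_scores):
    """Extend the line by one ply if the best move is singular."""
    return 1 if is_singular(best_score, alternative_scores) else 0
```

     The hash-table variant mentioned above instead reuses the stored best
     move and score from the transposition table to trigger this test only
     at nodes where such an entry exists.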
	 
     Apart from that, I have not made any changes. As I said, my initial aim
     is to highlight the differences between the two versions (classic and
     NN). Unfortunately, this means that all the bugs of the old version
     (from 2006; there were some back then, I did not keep track) are still
     included.
	 
	 The engine is available as a 32-bit version, which should also run on old 
	 hardware. There is also a 64-bit version that is about 25% faster.
	 
	 A completely revised version will follow in a few weeks, then also with the 
	 extended networks. I am personally very curious to see what will be possible here. 


   Little Goliath v3.12 ("Evolution")

     ### CHANGES/BUGFIXES ### (file version 1.0.0.9 -> 1.0.0.18)

     - book was always on (the engine always recognized the UCI option
       "wide" instead of "off")
     - book index broken (the engine chose a random move in 2 out of
       10 cases)
     - UCI interface problems: some parameters (Singular Extensions,
       Prefer Tactics) were initialized with random values in rare cases
     - learnfile was corrupted by a bad installation routine
     - broken forward pruning in endgames: recognized by Kurt Utzinger 
       (many thanks to him) a long time ago ("Nemesis" and "Revival"   
       include this bug too). It looked like a Zugzwang problem, so I 
       accepted it. Later I realized that it was a real bug (a forward 
       pruning algorithm which needs some adjustments to work well in 
       endgames). The fix results in much better performance now!
     - enhanced opening book (about 400,000 positions)
    
     Remarks:
     Older learnfiles and books (for "Revival" and older engine versions) 
     are not compatible with Evolution's data format.


   Little Goliath - still alive :-) - v3.12 ("Evolution"):

     - some bugfixes and minor improvements
     - optimized chess knowledge
     - BenMax pruning improved again
     
     New UCI options:
      
     - "Prefer Tactics" (default=off) = prefer tactical lines 

     - "Singular Extensions" (default=off) = 
        extend a line (if fewer than 3 good moves are possible)

     - "Combinations" (default=normal) = look at more (=max) or fewer
       (=min) combinations 

     - "Pruning" (default=25) = possible values 0 .. 100 (pruning percentage,
       e.g. value "100" results in max. pruning)
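     In a UCI GUI these options appear in the engine settings dialog; from
     the command line they can be set with standard UCI `setoption`
     commands after the `uci` handshake. The exact option names and value
     spellings should be taken from the engine's own `uci` output; the
     values below are only an example:

```
uci
setoption name Prefer Tactics value true
setoption name Singular Extensions value true
setoption name Combinations value max
setoption name Pruning value 50
isready
```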
 

     Installation:
     v3.12 comes with a large opening book (about 250,000 positions, games up 
     to May 10, 2005). A larger book (about 2,000,000 positions) is available
     too (free download).

     If you have any comments or suggestions, please send an email to
     info@goliathchess.de (Michael Borgstaedt)

     Please note: goliath-chess.de has been replaced by goliathchess.de (web and mail)
	

   Little Goliath - yet another final release :-) - v3.11 ("Revival"):

     - new opening book format  
     - changed UCI options
     - reduced (but optimized) chess knowledge
     - improved BenMax pruning (originally designed for the new Goliath 
       Blitz 2.0, so only some parts of this pruning algorithm are fully
       working in LG Revival. But nevertheless: it gives a big jump in 
       playing strength!)

     Installation:
     v3.11 comes with a large opening book (about 300,000 positions, games up 
     to June 5, 2004).

     If you have any comments or suggestions, please send an email to
     info@goliath-chess.de (Michael Borgstaedt)
	

   Little Goliath - final release - v3.10 ("Nemesis"):

     - many bugfixes and improvements
     - tablebase support 
     - some new adjustable parameters, e.g.:
       "Pawn_Structure" (default=100, values 0-400)
       value 100 = 100%, 50 = 50% and so on

     Hint: the pre-release version (for the CSS-Masters tournament) comes
     with the default settings "BenMax=normal" and "Powersearch=off".
     I strongly recommend using "BenMax=aggressive" and 
     "Powersearch=on", which seems to produce much better results.


   New in version 3.9:

     Bugfixes: 
     - book access problem solved

     Improvements:
     - positional evaluation modified (some important parameters adjusted)
     - introducing BenMax pruning for deeper searches

     Installation:
     v3.9 comes with a small book (grandmaster games, about 48,000 positions, 
     games up to Sept. 30, 2002)


   New in version 3.8:

     Bugfixes: 
     - the engine displays correct elapsed move time now (e.g. under ARENA)
     - detection of blocked pawns fixed (wrong detection and bad scoring 
       in former versions)
     - games with more than 203 moves are possible now (former engines
       stopped playing at move no. 203, resulting in a loss on time and
       the engine not unloading); some other (buffer) problems solved too
     - modified book access, avoids most of the heavy blunders (bad lines
       in opening books)

     Search algorithm improvement:
     One of the main selective search algorithms was modified. This results
     in slightly improved playing at blitz levels and larger improvement at 
     longer time controls (more than 30 ELO expected) because of safer and 
     more efficient pruning at higher search depths.

     Parameter changed:
     "Forward pruning", possible choices now: "normal" (default), "more" 
     and "max". Very interesting results with "more" and "max", but too
     few testgames up to now.

     Installation:
     v3.8 comes with a small book (about 40,000 positions, games up to 
     July 30, 2002)
     

   Version 3.7: internal test version (not released to the public)

     
   New in version 3.6:
     Some bugfixes (positional scoring and pawn-hashtable management).
     Much better time management, especially at blitz levels.
     Stronger engine (+ 30 ELO at blitz level, compared with 3.5c).
     Some new adjustable parameters:

     - Nullmove_(R): nullmove depth (2-5, default "automatic")
     - Extensions: range -4 .. 4 (default = 0)
     - Forward_Pruning: range 0 .. 5 (default = 0)
     - Style "risky" works much better now
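     For context on the Nullmove_(R) parameter: in null-move pruning the
     side to move passes, and if a search reduced by R extra plies still
     fails high, the node is pruned. A minimal sketch follows (hypothetical
     interface, not Goliath's code); the adaptive R is one plausible
     reading of the "automatic" default, not the engine's actual rule.

```python
def null_move_R(depth):
    """Adaptive null-move reduction: deeper searches can afford a
    larger R. Illustrative values within the documented 2-5 range."""
    return 3 if depth > 6 else 2

def try_null_move(board, depth, beta, search):
    """Sketch: give the opponent a free move; if a zero-window search
    reduced by R plies still fails high, this node can be pruned."""
    R = null_move_R(depth)
    board.make_null()
    score = -search(board, depth - 1 - R, -beta, -beta + 1)
    board.unmake_null()
    return score >= beta         # True -> fail high, prune the node
```

     Real engines also skip the null move in Zugzwang-prone positions
     (e.g. pawn endgames), which is exactly the forward-pruning endgame
     problem described in the v3.12 bugfix notes above.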

   New in version 3.5c:
     The UCI version now supports multi-PV mode. Additional engine 
     options are available too. There are no engine modifications, but
     the interface support is much more reliable, so some bugs
     regarding time management (1. losing on time / 2. using too much 
     time in time-critical situations) and learnfile management are 
     fixed. 

   New in version 3.5b:
     Bugfix only (3.5a UCI-interface). No engine modifications.

   New in version 3.5a:
     First release of version 3.5 with UCI support. No engine modifications.

   New in version 3.5:

     More positional knowledge, modified search algorithm (new type of 
     forward pruning).
     The engine is identical with the YoungTalents version
     (called Goliath Light 1.5), only some options (tablebases and
     settings) are not available.

   New in version 3.0:

     Nearly a complete new engine (compared with the 2.x series). 
     New positional knowledge, different tactical style. The engine
     seems stronger than the 2.x series at longer time controls, weaker
     at blitz levels (depends on the opponent). 
     Besides, the engine is nearly identical with the YoungTalents version
     (called Goliath Light 1.0), which is listed in the SSDF rating list.
     Only some options (tablebases, settings) are not available in 
     the Winboard version.