The extension of estimation of distribution algorithms (EDAs) to the multi-objective domain has led to multi-objective optimization EDAs (MOEDAs). Most MOEDAs have limited themselves to porting single-objective EDAs to the multi-objective domain. Although MOEDAs have proved to be a valid approach, this limitation stands in the way of any significant improvement over "standard" multi-objective optimization evolutionary algorithms.
Adapting the model-building algorithm is one way to achieve a substantial advance. Most model-building schemes used so far by EDAs employ off-the-shelf machine learning methods. However, model building poses particular requirements that those methods fail to meet, or even sidestep.
This paper focuses on the model-building issue and on how most MOEDAs have failed to properly understand and address it. We delve into the roots of the matter and hypothesize about its causes. To gain a deeper understanding of the subject, we propose a novel algorithm intended to overcome the drawbacks of current MOEDAs.
This new algorithm is the multi-objective neural estimation of distribution algorithm (MONEDA). MONEDA uses a modified growing neural gas network for model building (MB-GNG). MB-GNG is a custom-made clustering algorithm that meets the demands stated above. Thanks to its custom-made model-building algorithm, its preservation of elite individuals, and its individual replacement scheme, MONEDA scales well on continuous multi-objective optimization problems, and it outperforms similar algorithms in terms of a set of quality indicators and of computational resource requirements.
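MB-GNG modifies growing neural gas for the specific demands of model building; those modifications are described in the paper itself. As background, the sketch below shows only the standard growing neural gas loop (Fritzke, 1995) that MB-GNG builds on, with illustrative parameter values chosen here for the example; node pruning is omitted for brevity.

```python
import numpy as np

def growing_neural_gas(data, max_nodes=20, lam=50, eps_b=0.2, eps_n=0.006,
                       age_max=25, alpha=0.5, d=0.995, n_passes=5, seed=0):
    """Minimal growing neural gas: returns node positions and edges.

    This is the plain GNG loop, not MB-GNG; parameter values are illustrative.
    """
    rng = np.random.default_rng(seed)
    # Start with two nodes placed on random data points.
    nodes = [data[rng.integers(len(data))].astype(float).copy() for _ in range(2)]
    errors = [0.0, 0.0]
    edges = {}  # (i, j) with i < j  ->  edge age

    step = 0
    for _ in range(n_passes):
        for x in data[rng.permutation(len(data))]:
            step += 1
            # Find the winner s1 and runner-up s2.
            dists = [float(np.sum((x - w) ** 2)) for w in nodes]
            s1, s2 = (int(i) for i in np.argsort(dists)[:2])
            # Age every edge incident to the winner; accumulate its error.
            for e in list(edges):
                if s1 in e:
                    edges[e] += 1
            errors[s1] += dists[s1]
            # Adapt the winner and its topological neighbours toward x.
            nodes[s1] += eps_b * (x - nodes[s1])
            for e in edges:
                if s1 in e:
                    n = e[0] if e[1] == s1 else e[1]
                    nodes[n] += eps_n * (x - nodes[n])
            # Refresh (or create) the winner pair's edge; drop stale edges.
            edges[tuple(sorted((s1, s2)))] = 0
            edges = {e: a for e, a in edges.items() if a <= age_max}
            # Every lam steps, insert a node between the worst node q
            # and its worst neighbour f, halving their errors.
            if step % lam == 0 and len(nodes) < max_nodes:
                q = int(np.argmax(errors))
                nbrs = [e[0] if e[1] == q else e[1] for e in edges if q in e]
                if nbrs:
                    f = max(nbrs, key=lambda n: errors[n])
                    r = len(nodes)
                    nodes.append((nodes[q] + nodes[f]) / 2.0)
                    errors[q] *= alpha
                    errors[f] *= alpha
                    errors.append(errors[q])
                    edges.pop(tuple(sorted((q, f))), None)
                    edges[tuple(sorted((q, r)))] = 0
                    edges[tuple(sorted((f, r)))] = 0
            # Globally decay accumulated errors.
            errors = [e * d for e in errors]
    return np.asarray(nodes), edges
```

Run on two well-separated Gaussian blobs, the network grows from its two seed nodes toward a set of prototypes whose edges trace the clusters, which is the role the clustering step plays inside a model-building stage.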