Consider the typical optimization problem: we want to optimize a system according to a number of objectives, and we can modify a number of factors. There are different ways to solve this, but in most cases we will try to find the “best” optimum (there are several exceptions, though, e.g. finding a Pareto front when we have more than one objective). In any case, we may reach the following question: what if the best optimum is not satisfactory enough? In classical optimization problems the usual advice is to modify the range of the factors, look into further modifications of the system, etc.
Before doing all this, my proposal is to look into the optimization of complementary situations. In other words, optimize only part of the problem under one set of conditions, while the other set(s) of conditions are directed at optimizing the other part(s) of the problem. In this way, when we look at all the optimal conditions globally, our problem is solved. Advantage: we are not changing the system or the range of the factors; we are just trying to get the best out of our system. Disadvantages: (1) we have to manage to work under two (or more) sets of conditions (this might not be feasible in certain cases), and (2) the optimization problem becomes computationally more complex (the complexity increases exponentially with the number of complementary situations).
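To see why the complexity grows exponentially, note that if each condition is chosen from M candidate settings, jointly optimizing k complementary conditions means searching M^k combinations. A back-of-envelope sketch (the numbers are illustrative assumptions, not from the study):

```python
# Illustrative only: search-space size when optimizing k complementary
# conditions jointly, each chosen among M candidate settings.
M = 1000  # assumed number of candidate settings per condition
for k in range(1, 4):
    print(f"{k} condition(s): {M ** k} combinations")
```

With M = 1000, one condition means 10^3 candidates, two complementary conditions already mean 10^6, and three mean 10^9.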
We illustrated the optimization of complementary conditions with HPLC, trying to separate a mixture of pharmaceuticals (β-blockers). In HPLC, the user is often confronted with optimizing the mobile phase conditions, together with columns and other factors. Moreover, it is (unfortunately) common to find no mobile phase able to separate all the solutes of a complex mixture. The typical solution is to change the system (column, separation mode, derivatization, etc.). But what about finding two (or more) complementary conditions? Each condition resolves only a certain subset of the solutes, while the others (which remain overlapped) are resolved in at least one of the other (complementary) conditions.
(In the picture above, solutes 1, 3, 7, 6, 10 and 8 are resolved in one condition, whereas solutes 5, 2, 9 and 4 are resolved in the other condition.)
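The complementarity criterion itself is simple to state: a set of conditions is acceptable when every solute is resolved in at least one of them. A minimal sketch of that check (the function name and data layout are assumptions for illustration, not the authors' actual code):

```python
# Hypothetical check for "complementary" conditions: each condition is
# represented by the set of solutes it resolves; the conditions are
# complementary if, together, they cover every solute in the mixture.

def is_complementary(resolved_sets, all_solutes):
    """Return True if every solute is resolved in at least one condition."""
    covered = set().union(*resolved_sets)  # solutes resolved somewhere
    return covered >= set(all_solutes)     # superset test: full coverage

# Example matching the figure: one condition resolves 1, 3, 6, 7, 8, 10;
# the other resolves 2, 4, 5, 9 — together they cover all ten solutes.
cond_a = {1, 3, 6, 7, 8, 10}
cond_b = {2, 4, 5, 9}
print(is_complementary([cond_a, cond_b], range(1, 11)))  # True
```

An optimizer then searches over candidate conditions for a pair (or larger set) satisfying this coverage test, or maximizing how close it gets to full coverage.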
Computationally, we solved the problem using genetic algorithms (GAs). It turns out that these methods are especially suited to this kind of task, since the problem is combinatorial (and GAs handle discrete search spaces naturally) and it proved to have several local minima. Later we extended the optimization method by adding a local optimizer inside the GA, yielding what we called a “Lamarckian” or “Darwinian” GA. We cannot explain the full details here; all the information can be found in refs [1a, 3a, 5a].
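To make the idea concrete, here is a toy Lamarckian GA sketch, not the published method: each individual encodes two candidate conditions, fitness counts how many solutes are resolved in at least one of them, and a hill-climbing step improves each individual with the improvement written back into the genome (the Lamarckian part). The resolution model and all parameters are invented for illustration:

```python
# Toy Lamarckian GA for finding two complementary conditions.
# Everything here (resolution model, parameters) is an illustrative
# assumption, not the method from refs [1a, 3a, 5a].
import random

random.seed(0)

N_SOLUTES = 10
CONDITIONS = list(range(20))  # hypothetical discretized settings


def resolved(cond):
    """Toy model: the set of solutes a given condition resolves."""
    rng = random.Random(cond)  # deterministic per condition
    return {s for s in range(N_SOLUTES) if rng.random() < 0.5}


def fitness(ind):
    """Number of solutes resolved in at least one of the two conditions."""
    a, b = ind
    return len(resolved(a) | resolved(b))


def local_search(ind):
    """Hill-climb each gene; Lamarckian: the improved genome is kept."""
    best = ind
    for c in CONDITIONS:
        for cand in [(c, ind[1]), (ind[0], c)]:
            if fitness(cand) > fitness(best):
                best = cand
    return best


def ga(pop_size=20, generations=30):
    pop = [(random.choice(CONDITIONS), random.choice(CONDITIONS))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [local_search(ind) for ind in pop]  # Lamarckian refinement
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            child = (p1[0], p2[1])                # one-point crossover
            if random.random() < 0.2:             # mutation
                child = (random.choice(CONDITIONS), child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)


best = ga()
print("best pair:", best, "solutes covered:", fitness(best))
```

In a "Darwinian" variant the local search would only guide selection (the refined fitness is used, but the original genome is kept), whereas the Lamarckian version above overwrites the genome with the locally improved one.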
This project was developed at the University of Valencia (Spain). Several people were involved; see the authors of the publications for details about authorship.
See part of the PhD Defense of Gabriel.