Genetic programming is a concept I’ve been familiar with for a while, but this was my first attempt at actually coding it up, mainly because it’s hard to beat stochastic gradient descent when you’re working on an optimization problem. During Winter 2020 I had the opportunity to work with some peers on an optimal control problem.
For the project we were looking to optimize the path of a particle on a two-dimensional surface (truthfully, the initial project involved a falling rocket, but that was a bit ambitious). Think of an RC car: you want to minimize the amount of steering you do, but also end as close as possible to some fixed final location in a set amount of time. We developed a simple cost functional that accounted for both the amount of turning and how close the particle got to the final location, with each term weighted by a constant so we could adjust the trade-off as we liked.
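A cost functional of this shape might be sketched as follows. The unicycle-style dynamics, the weights `alpha` and `beta`, the forward speed `v`, and the target point are all illustrative assumptions, not the project's actual formulation:

```python
import numpy as np

def cost(u, p_target=(5.0, 3.0), T=1.0, v=5.0, alpha=1.0, beta=10.0):
    """Cost of a discretized steering control u (array of steering rates).

    alpha weights the integrated steering effort, beta weights the squared
    distance between the final position and p_target. (All names and the
    simple unicycle dynamics here are assumptions for illustration.)
    """
    dt = T / len(u)
    x, y, theta = 0.0, 0.0, 0.0   # start at the origin, heading along +x
    effort = 0.0
    for ui in u:
        theta += ui * dt              # steering rate changes the heading
        x += v * np.cos(theta) * dt   # constant forward speed v
        y += v * np.sin(theta) * dt
        effort += ui ** 2 * dt        # accumulate steering effort
    dist2 = (x - p_target[0]) ** 2 + (y - p_target[1]) ** 2
    return alpha * effort + beta * dist2

# Zero steering drives straight along +x for distance v*T = 5,
# missing the target (5, 3) by 3 in y: cost = 10 * 3² = 90.
print(cost(np.zeros(10)))  # → 90.0
```

Raising `beta` relative to `alpha` makes hitting the target matter more than steering smoothly, which is exactly the knob the weighting constants provide.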
To represent the control of the car (denoted u), we needed to discretize it along our time domain. However, when observing a simple constant control, and a control with one switch (seen on the right), we see that the objective function is clearly not convex. In the constant case there are three local minima, and in the two-dimensional case there is a whole trough of minimizers. We inferred that as we partitioned the control into finer segments, the landscape would only become more convoluted and riddled with local minima.
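One crude way to probe that non-convexity numerically is to sweep a single constant steering rate and count the strict local minima of the sampled cost curve. This sketch reuses the illustrative unicycle cost from above; the exact count of minima depends on the assumed weights and geometry, so don't expect it to reproduce the three from our plots:

```python
import numpy as np

def cost(u, p_target=(5.0, 3.0), T=1.0, v=5.0, alpha=1.0, beta=10.0):
    """Illustrative unicycle-style cost: steering effort plus a
    distance-to-target penalty (names and weights are assumptions)."""
    dt = T / len(u)
    x, y, theta, effort = 0.0, 0.0, 0.0, 0.0
    for ui in u:
        theta += ui * dt
        x += v * np.cos(theta) * dt
        y += v * np.sin(theta) * dt
        effort += ui ** 2 * dt
    return alpha * effort + beta * ((x - p_target[0]) ** 2
                                    + (y - p_target[1]) ** 2)

# Sweep a constant steering rate c and count strict interior local
# minima of the sampled 1-D cost curve; any count above one means
# a gradient method can get stuck depending on where it starts.
grid = np.linspace(-12.0, 12.0, 601)
curve = np.array([cost(np.full(50, c)) for c in grid])
is_min = (curve[1:-1] < curve[:-2]) & (curve[1:-1] < curve[2:])
print(int(is_min.sum()))
```

This is also why a population-based method like genetic programming is attractive here: it samples many basins at once instead of following a single gradient path into the nearest one.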