This is basically an implementation of the “Image Analogies” paper (http://www.mrl.nyu.edu/projects/image-analogies/index.html). In our case, the feature maps come from VGG16. Patch matching and blending are done with the method described in “Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis” (http://arxiv.org/abs/1601.04589). Effects similar to that paper can be achieved by turning off the analogy loss (or leaving it on!) and turning on the B/B′ content weighting via the `--b-cont` option.
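The core of the MRF-style matching step is nearest-neighbor search over small patches of feature maps. Below is a minimal NumPy sketch of that idea, assuming normalized cross-correlation as the similarity measure; the function name `best_patch_matches` and the toy array shapes are illustrative, not the repository's actual API.

```python
import numpy as np

def best_patch_matches(content_feat, style_feat, patch_size=3):
    """For each patch in content_feat, return the index of the most
    similar patch in style_feat by normalized cross-correlation.
    Feature maps are (channels, height, width) arrays, standing in
    for VGG16 activations. Illustrative sketch only."""
    def extract_patches(feat):
        c, h, w = feat.shape
        ps = patch_size
        patches = []
        for y in range(h - ps + 1):
            for x in range(w - ps + 1):
                p = feat[:, y:y + ps, x:x + ps].ravel()
                # L2-normalize so the dot product below is a cosine similarity
                patches.append(p / (np.linalg.norm(p) + 1e-8))
        return np.array(patches)

    cp = extract_patches(content_feat)  # (num_content_patches, c*ps*ps)
    sp = extract_patches(style_feat)    # (num_style_patches, c*ps*ps)
    sims = cp @ sp.T                    # pairwise cosine similarities
    return sims.argmax(axis=1)          # best style patch per content patch
```

In the full method, the matched style patches are then blended back into an image-sized loss target; here only the matching step is shown.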

Source: awentzonline/image-analogies: Generate image analogies using neural matching and blending.