If there’s one thing that distinguishes Wes Anderson’s movies from the competition, it’s that they are so purely… Andersonian, meaning immediately recognisable.
But what are the visual structures, colours, themes, and motifs that make you scream “oh, but that’s definitely Wes Anderson” when you see an individual frame from The Royal Tenenbaums or The Life Aquatic?
Granted, everyone can probably make an educated guess about Anderson’s idiosyncratic style, from the obsessively symmetrical compositions to the surreal, oddball characters, from the vivid colour palettes to the recurring family themes.
But what if there were an objective, quick, and reliable way to analyse his movies? One that doesn’t rely on a fallible, subjective reading of the artwork?
That’s what drove Yannick Assogba, a software developer and designer, to investigate Anderson’s visual motifs using machine learning, in particular a deep neural network known as “Inception V3”, produced by Google.
“I am a fan of Wes Anderson’s movies and have always found his style to be interesting so I wanted to do something that would give me another way to look at his movies,” he told Mashable.
Assogba uses four of Anderson’s films as sources for his project — The Life Aquatic, The Royal Tenenbaums, Fantastic Mr. Fox, and Moonrise Kingdom — from which he extracts a frame every 10 seconds, for a total sample of 2,309 frames.
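Assogba’s exact tooling isn’t specified in the article, but the frame-sampling step can be reproduced with a standard tool like ffmpeg (the file name here is hypothetical):

```shell
# Sample one frame every 10 seconds from a film and save the frames as JPEGs.
# fps=1/10 means "one output frame per 10 seconds of input".
mkdir -p frames
ffmpeg -i the_life_aquatic.mp4 -vf fps=1/10 frames/frame_%05d.jpg
```

Run over four feature-length films, a 10-second sampling rate yields a corpus on the order of a couple of thousand frames, in line with the 2,309 reported.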
As the word “learning” in machine learning suggests, neural networks are programs that, through a training process, learn which features of their input data are relevant and build an internal representation of that data.
“During training hundreds of thousands of labeled (i.e. categorized) images are fed into the network and the strengths of connections between nodes is continually adjusted until the network accurately predicts the labels for known images,” explains Assogba.
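The training loop Assogba describes can be sketched in miniature. Below, a toy single-layer “network” (a perceptron, far simpler than Inception V3) has its connection strengths nudged until it predicts the labels of its training examples; the data is entirely synthetic, standing in for labelled images:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 4-pixel "images" in two made-up classes:
# class 0 = dark images, class 1 = bright images.
X = np.vstack([rng.uniform(0.0, 0.3, (20, 4)),   # dark
               rng.uniform(0.7, 1.0, (20, 4))])  # bright
y = np.array([0] * 20 + [1] * 20)

w = np.zeros(4)   # connection strengths (weights)
b = 0.0           # bias
lr = 0.1          # how much to adjust after each mistake

for epoch in range(100):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        err = yi - pred
        # Adjust the connection strengths toward the correct label.
        w += lr * err * xi
        b += lr * err

preds = (X @ w + b > 0).astype(int)
accuracy = (preds == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

Real networks like Inception V3 use many layers and gradient descent rather than this simple update rule, but the principle is the same: repeatedly compare predictions against known labels and adjust the weights.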
One of the first tests on Anderson’s movies was about colours.
Assogba built the matrix by recording the amount of red, green, and blue light used to produce each pixel in the movie. The results are quite mesmerising.
In Moonrise Kingdom, for example, a few clusters are clearly highlighted: dark scenes with a blue tint; yellows and browns with a blueish horizon; desaturated shots with yellow highlights; and warm greens and yellows. Sounds about right, doesn’t it?
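The colour analysis above can be sketched as follows: reduce each frame to its average red, green, and blue values, then group frames with similar averages. The frames below are synthetic stand-ins (real ones would come from the films), and the clustering is a minimal hand-rolled k-means:

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_rgb(frame):
    """Average R, G, B over all pixels of an H x W x 3 frame."""
    return frame.reshape(-1, 3).mean(axis=0)

# Two fake groups of frames: blue-tinted night scenes and warm yellow scenes.
blue_frames = [rng.normal([30, 40, 120], 10, (24, 32, 3)) for _ in range(10)]
warm_frames = [rng.normal([200, 170, 60], 10, (24, 32, 3)) for _ in range(10)]
colors = np.array([mean_rgb(f) for f in blue_frames + warm_frames])

# Minimal k-means (k=2) on the per-frame mean colours.
centers = colors[[0, -1]].copy()  # seed with one frame from each group
for _ in range(10):
    dists = np.linalg.norm(colors[:, None] - centers[None], axis=2)
    labels = dists.argmin(axis=1)
    centers = np.array([colors[labels == k].mean(axis=0) for k in range(2)])

print(labels)  # blue-tinted frames land in one cluster, warm ones in the other
```

A production version would use more clusters and richer colour statistics than a single mean per frame, but this is the core idea behind grouping “dark scenes with a blue tint” together.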
But what happens when we add the other films? Some visual motifs stand out.
For example, a group of frames with a strong blue tint (Richie’s attempted suicide in Tenenbaums, the pirate attack in Life Aquatic), another with strong reds, and dark scenes with a blue tint in the foreground or background.
So far so good; nothing that makes you jump out of your chair. But things get more complicated (and interesting) once we deal with physical objects in the movies rather than colours.
The deep learning model used by Assogba, Inception V3, is trained on a collection of labelled images — animals, appliances, birds, furniture, people, etc. — known as ImageNet.
That means the neural network can recognise objects from 1,000 of these categories. What happens when we apply it to the Wes Anderson movies?
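Running a frame through Inception V3 might look like the sketch below, using the pretrained model that ships with Keras (this assumes TensorFlow is installed; a real pipeline would load an actual 299×299 movie frame, whereas here a random image stands in so the sketch runs end to end):

```python
import numpy as np
from tensorflow.keras.applications.inception_v3 import (
    InceptionV3, preprocess_input, decode_predictions)

# Load Inception V3 with its ImageNet weights (1,000 object categories).
model = InceptionV3(weights="imagenet")

# Stand-in for a movie frame: Inception V3 expects 299x299 RGB input.
frame = np.random.default_rng(2).uniform(0, 255, (1, 299, 299, 3))
preds = model.predict(preprocess_input(frame))

# The top-5 ImageNet labels the network assigns to the frame.
for _, label, score in decode_predictions(preds, top=5)[0]:
    print(f"{label}: {score:.3f}")
```

Tallying these labels across thousands of frames is what surfaces recurring objects, though the article doesn’t detail Assogba’s exact aggregation.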
Well, some hilariously Andersonian themes stand out, as Assogba notes:
– TV screens
– Texts and titles
– Figure on walkway
– Split Screen composition
You could probably argue that those recurring objects are fairly self-evident to anyone who watches Anderson’s movies with an attentive eye for detail and a basic understanding of his art. But there are differences.
“I’d say a big difference is the amount of time it takes, rather than watch the movies (repeatedly) and rely on our memories or notes, the machine can look at thousands of images from lots of different films and quickly compare them,” he said.
“It can suggest similarities and juxtapositions for a human to look at, some are ones we would find ourselves while others might be surprising or poetic because of imperfections in the algorithms and models.”