AI object recognition, infrathin
A project by Jerry Galle
What does it mean when machines learn to identify images and objects? Their myriad descriptions display internal complexities that seem to hinder them from identifying objects in a straightforward fashion.
The top of an egg, 3d print and plexi, 30 x 50 cm, 2017
The results, as seen in Galle’s project “AI object recognition, infrathin”, resemble the writings of a deranged art critic trying to pin down an artwork with a poetic approach.
n00b, 3d print and frame, 60 x 30 x 25 cm, 2019
This type of pattern recognition is more about statistical inference over a body of data than about actual human perception. In addition, the descriptions tend to be biased and are apt to refer to the web. This is, of course, a consequence of feeding the machine learning system with data (images) from online sources. Whenever such a machine learning system has trouble defining an object or an image, it tends to categorize the input in a quite debilitating manner. The category “a cat”, for instance, pops up more often than wanted.

Machine learning image recognition is unfiltered: when it sees a black object it will make racial references if that object has the slightest affinity with a human body or a face. This unfiltered state can generate hilarious explanations, to say the least. But these descriptions can also unwittingly become offensively invasive and prejudiced, because they lack any context whatsoever. The images the machine learning descriptions invoke seem to have more affinity with memes, and therefore with humour, than with actual (scientific) image recognition. The descriptions are fully situated in the collective, the web. And the current web is a place filled with memes, nonsense and humour, among many other things, of course.
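As a minimal sketch of the mechanism described above (not Galle’s actual system, which presumably generates free-form captions): a pretrained classifier trained on web-collected images simply returns the statistically most likely labels with a confidence score, attaching no context whatsoever. The model choice and the input file name here are illustrative assumptions.

```python
# Minimal sketch (assumes torch/torchvision are installed and a local file
# "input.jpg" exists): a pretrained ImageNet classifier returns only the
# statistically most likely labels and their confidences -- no context, no filtering.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT            # trained on web-collected photos
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("input.jpg").convert("RGB")  # hypothetical input image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

# Print the five labels the statistics favour, however ill-fitting they may be.
for p, idx in zip(*probs.topk(5)):
    print(f"{weights.meta['categories'][idx]}: {p.item():.1%}")
```

Whatever is put in front of such a model, it answers only from the categories and correlations it absorbed from its online training data, which is exactly where the biases and the meme-like non sequiturs come from.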
man cave, 3d print, plexi, neon, 80 x 50 cm, 2019
A small small mannerism, 3d print and plexi, 50 x 80 x 15 cm, 2017
The score accustomed to a mould, 3d print and plexi, 50 x 40 cm, 2017