Digital deconstruction, or what our past, our future, and a waffle iron have in common.
Books like Evidence really catapulted found-object art into the mainstream a few years ago. But they have nothing on artist David Trautrimas, whose Habitat Machines series transforms everyday objects into eerie, fantastical, neo-industrial buildings.
Trautrimas collects old gadgets, from waffle irons to electric razors to oil cans, photographs them, then de- and re-constructs them digitally into retro-futuristic landscapes that bridge what is and what could be in a surreal, haunting way.
Pencil sharpeners become restaurants, coffee cups become bird feeders, post boxes become townhouses.
We’ve got a free weekly newsletter and people say it’s cool. It comes out on Sundays, offers the week’s main articles, and features short-form interestingness from our PICKED series. Here’s an example. Like? Sign up.
How a biblical creation myth replays itself from bone and muscle to jazz improvisation.
In labs around the world, a new breed is arising. A friendly breed of intelligent machines designed to look like us, move like us, behave like us, and interact with us in ways that seem less and less distinguishable from the human ways.
Cutting-edge AI projects aim, with an impressive degree of success, to embed human social and cognitive skills in intelligent machines that will eventually be able to seamlessly integrate into our social and cultural fabric. These machines will soon be able to read and understand our mental and emotional states and respond to them. They will be able to engage with us, to learn, adapt and share, challenging us to redefine our very conception of human identity and essence.
Here are 6 compelling beacons.
Robots walking amongst us has been a science-fiction dream for many years. Recently, however, science has been rapidly catching up in bringing this dream to reality. Japan’s technological giant Honda has been building an experimental anthropomorphic robot since the 1980s — ASIMO. His name stands for Advanced Step in Innovative Mobility, but it’s hard to avoid associating it with Asimov, the iconic science-fiction author who envisioned intelligent humanoid robots in his stories and was the first to lay down the then-fictional Three Laws of Robotics, regulating human-machine interaction.
Since as early as 2000, ASIMO’s advanced models have been capable, among other things, of demonstrating complex body movements, navigating complex environments, recognizing objects and faces, reacting to human postures, gestures and voice commands, and much more. ASIMO can safely conduct himself among us (without bumping into people) and perform an impressive set of complex tasks, like taking an order and serving coffee. Recent models even have limited autonomous learning capabilities.
Social interaction, usually taken for granted in our everyday life, is a very complex system of signaling. We all use such signaling to share our rich mental and emotional inner lives. It includes voice, language production and understanding, facial expressions and many additional cues such as gesturing, eye contact, conversational distance, synchronization and more.
The Sociable Machine project at MIT has been exploring this complex system with Kismet, a “sociable machine” that engages people in natural and expressive face-to-face interaction.
The project integrates theories and concepts from infant social development, psychology, ethology and evolution that enable Kismet to enter into natural and intuitive social interaction.
The most significant achievement with Kismet is its ability to learn through direct interaction, the way infants learn from their parents — a skill previously inherent only to biological species, and thus a major paradigm shift in robotics.
Developed at MIT Media Lab’s Personal Robots Group, Nexi combines ASIMO’s mobility with Kismet’s social interactivity skills. Nexi presents itself as an MDS robot, which stands for Mobile, Dexterous, and Social.
The purpose of this platform is to support research and education goals in human-robot interaction, teaming, and social learning. In particular, the small footprint of the robot (roughly the size of a 3-year-old child) allows multiple robots to operate safely within a typical laboratory floor space.
Nexi’s design adds advanced mobility and object manipulation skills to Kismet’s social interactivity. Nexi’s facial expressions, though basic, are engaging and rather convincing. It’s also hard to overlook the “cute” factor at play, reminiscent of human babies.
While still slow and very machine-like in appearance, Nexi demonstrates today what was science fiction just a few years ago.
Hardly anything is more essential to the recognition of humanity than facial expressions, which modulate our communication with cues about our feelings and emotional states. Hanson Robotics combines art with cutting-edge materials and technologies to create extremely realistic robotic faces capable of mimicking human emotional expressions, conversing quite naturally, recognizing and responding to faces, and following eye contact.
We feel that these devices can serve to help to investigate what it means to be human, both scientifically and artistically.
Jules, a Conversational Character Robot designed by David Hanson, has a remarkably expressive face and is equipped with natural language artificial intelligence that realistically simulates human conversational intelligence. This, together with his/her rich nonverbal interaction skills, offers a glimpse of how fast robots are becoming virtually indistinguishable from us — social, interactive, eerily affective.
The team is also working on a futuristic project aiming to develop machine empathy and a machine value system based on human culture and ethics, which will allow robots to bond with people.
ECCE stands for Embodied Cognition in a Compliantly Engineered Robot. Simply put, it means that while other humanoid robots are currently designed to mimic human form but not its anatomy and physiological mechanisms, ECCEROBOT is anthropomimetic — specifically designed to replicate human bone, joint and muscle structure and their complex movement mechanism.
The project leaders believe that human-like cognition and social interaction are intimately connected to the robot’s embodiment. A robot designed according to a human body plan should thus engage more fluently and naturally in human-like behavior and interaction. Such an embodiment would also help researchers build robots that learn to engage with their physical environment the way humans do — an interesting concept that brings us a step closer to creating human-like robotic companions.
Music, many of us believe, makes us distinctively human — playing music together, especially improvising, is perhaps one of the most impressive and complex demonstrations of human collaborative intelligence where the whole becomes much more than the sum of its parts.
But extraordinarily skillful music-playing robots are already challenging this very belief. Earlier this year, we saw the stage debut of Shimon — a robotic marimba artist developed at the Georgia Tech Center for Music Technology. Shimon doesn’t look the least bit human and entirely lacks mobility and affective social skills, but is capable of something long considered exclusively human — playing real-time jazz improvisation with a human partner.
Shimon isn’t merely playing a pre-programmed set of notes, but is capable of intelligently responding to fellow human players and collaborating with them, producing surprising variations on the played theme. The robot’s head (not shown in the video), currently implemented in software animation, provides fellow musicians with visual cues representing social-musical elements, from beat detection and tonality to attention and spatial interaction.
A man and a woman walk into a sign, or what Helvetica has to do with slipping on ice.
In 1974, the U.S. Department of Transportation commissioned AIGA to produce Symbol Signs — a standardized set of 34 symbols for the Interstate Highway System. Five years later, 16 more symbols were added to complete what’s become known as “the Helvetica of pictograms” — a 50-piece symbol set so iconic and universally pervasive it has become an integral part of our visual language.
But beyond their practical application, Symbol Signs have amassed a cultish following in the design community, generating derivative work ranging from the quirky to the wildly creative.
Artist Iain Anderson’s symbol-based short film, Airport, was a finalist in the 2005 Sydney Film Festival. And Norwegian designer Timo Arnall created The Adventures of Helvetica Man — a Flickr set paying tribute to the main hero of the Symbol Signs.
A few weeks ago, we tweet-raved about Symbolic Gestures — a wonderful exposé on all the creative ways in which the National Park Service has adapted the iconic symbols to convey a wide and incredibly rich range of contexts.
And non-traditional eco getup Green Thing used Symbol Signs as a storytelling device in a brilliant short film for one of their seven green actions, Walk The Walk.
Download the 50 original Symbol Signs from the AIGA website — they’ve been released into the public domain, free and available with no copyright in EPS and GIF formats — and see what story you can weave.