Brain Pickings

Posts Tagged ‘software’

09 SEPTEMBER, 2009

Robots In Our Image


How a biblical creation myth replays itself from bone and muscle to jazz improvisation.

In labs around the world, a new breed is arising: a friendly breed of intelligent machines designed to look like us, move like us, behave like us, and interact with us in ways that seem less and less distinguishable from our own.

Cutting-edge AI projects aim, with an impressive degree of success, to embed human social and cognitive skills in intelligent machines that will eventually be able to seamlessly integrate into our social and cultural fabric. These machines will soon be able to read and understand our mental and emotional states and respond to them. They will be able to engage with us, to learn, adapt and share, challenging us to redefine our very conception of human identity and essence.

Here are six compelling examples.

ASIMO

Robots walking among us has been a science-fiction dream for many years; recently, however, science has been rapidly catching up and bringing that dream into reality. Japan’s technological giant Honda has been building an experimental anthropomorphic robot, ASIMO, since the 1980s. The name stands for Advanced Step in Innovative Mobility, but it’s hard to avoid associating it with Asimov, the iconic science-fiction author who envisioned intelligent humanoid robots in his stories and was the first to lay down the then-fictional Three Laws of Robotics, regulating human-machine interactions.

Since as early as 2000, ASIMO’s advanced models have been capable of, among other things, demonstrating complex body movements, navigating complex environments, recognizing objects and faces, and reacting to human postures, gestures, and voice commands. ASIMO can safely conduct himself among us (without bumping into people) and perform an impressive set of complex tasks, like taking an order and serving coffee. Recent models even have limited autonomous learning capabilities.

KISMET

Social interaction, usually taken for granted in our everyday life, is a very complex system of signaling. We all use such signaling to share our rich mental and emotional inner lives. It includes voice, language production and understanding, facial expressions, and many additional cues such as gesturing, eye contact, conversational distance, and synchronization.

The Sociable Machine project at MIT has been exploring this complex system with Kismet, a “sociable machine” that engages people in natural and expressive face-to-face interaction.

The project integrates theories and concepts from infant social development, psychology, ethology and evolution that enable Kismet to enter into natural and intuitive social interaction.

Kismet’s most significant achievement is its ability to learn through direct interaction, the way infants learn from their parents — a skill previously inherent only to biological species, and thus a major paradigm shift in robotics.

NEXI

Developed at MIT Media Lab’s Personal Robots Group, Nexi combines ASIMO’s mobility with Kismet’s social interactivity skills. Nexi presents itself as an MDS robot, which stands for Mobile, Dexterous, and Social.

The purpose of this platform is to support research and education goals in human-robot interaction, teaming, and social learning. In particular, the small footprint of the robot (roughly the size of a 3-year-old child) allows multiple robots to operate safely within a typical laboratory floor space.

Nexi’s design adds advanced mobility and object manipulation skills to Kismet’s social interactivity. Nexi’s facial expressions, though basic, are engaging and rather convincing. It’s also hard to overlook the “cute” factor at play, reminiscent of human babies.

While still slow and very machine-like in appearance, Nexi demonstrates today what was science fiction just a few years ago.

HANSON ROBOTICS

Hardly anything is more essential to the recognition of humanity than facial expressions, which modulate our communication with cues about our feelings and emotional states. Hanson Robotics combines art with cutting-edge materials and technologies to create extremely realistic robotic faces capable of mimicking human emotional expressions, conversing quite naturally, recognizing and responding to faces, and following eye contact.

We feel that these devices can serve to help to investigate what it means to be human, both scientifically and artistically.

Jules, a Conversational Character Robot designed by David Hanson, has a remarkably expressive face and is equipped with natural language artificial intelligence that realistically simulates human conversational intelligence. This, together with his/her rich nonverbal interaction skills, offers a glimpse of how fast robots are becoming virtually indistinguishable from us — social, interactive, eerily affective.

The team is also working on a futuristic project aiming to develop machine empathy and a machine value system based on human culture and ethics, which would allow robots to bond with people.

ECCEROBOT

In the quest to create machines in our image, of particular interest is ECCEROBOT — a collaborative project funded by the EU’s Seventh Framework Programme.

ECCE stands for Embodied Cognition in a Compliantly Engineered Robot. Simply put, it means that while other humanoid robots are currently designed to mimic human form but not its anatomy and physiological mechanisms, ECCEROBOT is anthropomimetic — specifically designed to replicate human bone, joint and muscle structure and their complex movement mechanism.

The project leaders believe that human-like cognition and social interaction are intimately connected to the robot’s embodiment. A robot designed according to a human body plan should thus engage more fluently and naturally in human-like behavior and interaction. Such an embodiment would also help researchers build robots that learn to engage with their physical environment the way humans do — an interesting concept that brings us a step closer to creating human-like robotic companions.

SHIMON

Music, many of us believe, makes us distinctively human — playing music together, especially improvising, is perhaps one of the most impressive and complex demonstrations of human collaborative intelligence where the whole becomes much more than the sum of its parts.

But extraordinarily skillful music-playing robots are already challenging this very belief. Earlier this year, we saw the stage debut of Shimon — a robotic marimba artist developed at the Georgia Tech Center for Music Technology. Shimon doesn’t look the least bit human and entirely lacks mobility and affective social skills, but it is capable of something long considered exclusively human: playing real-time jazz improvisation with a human partner.

Shimon isn’t merely playing a pre-programmed set of notes; it is capable of intelligently responding to fellow human players and collaborating with them, producing surprising variations on the played theme. The robot’s head (not shown in the video), currently implemented in software animation, provides fellow musicians with visual cues that represent social-musical elements, from beat detection and tonality to attention and spatial interaction.
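To make the idea of machine improvisation a little more concrete, here is a toy sketch in Python — emphatically not Shimon’s actual algorithm, just an illustration of the general call-and-response principle: take the phrase a human partner just played and echo it back with small, scale-aware mutations. The scale, mutation rate, and phrase representation are all our own assumptions.

import random

# One octave of the C-major scale as MIDI pitch numbers (an assumed, simplified vocabulary).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

def vary_phrase(phrase, mutation_rate=0.3):
    """Echo a phrase of (midi_pitch, duration_in_beats) pairs with small variations."""
    variation = []
    for pitch, duration in phrase:
        if random.random() < mutation_rate:
            # Nudge the note to a neighboring tone of the scale.
            idx = min(range(len(C_MAJOR)), key=lambda i: abs(C_MAJOR[i] - pitch))
            idx = max(0, min(len(C_MAJOR) - 1, idx + random.choice([-1, 1])))
            pitch = C_MAJOR[idx]
        if random.random() < mutation_rate:
            # Halve or double the duration for a little rhythmic surprise.
            duration *= random.choice([0.5, 2.0])
        variation.append((pitch, duration))
    return variation

if __name__ == "__main__":
    human_phrase = [(60, 1.0), (64, 0.5), (67, 0.5), (72, 2.0)]  # C, E, G, high C
    print(vary_phrase(human_phrase))

A real system like Shimon layers beat detection, harmonic analysis, and physical control of the mallets on top of a generative core of this kind.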

Spaceweaver is a thinker, futurist and writer living in future tense, mostly on the web. Check out his blogs at Space Collective and K21st, and follow him on Friendfeed and Twitter.


03 SEPTEMBER, 2009

Google Groupies Galore: Goollery


What album covers have to do with shoe shopping and Renaissance paintings.

The open-source movement is among the great cultural feats of our time. And the move towards open APIs, inviting derivative, often collaborative work, is a major force driving this new paradigm. Google was arguably the pioneer there, releasing the Google Maps API in June 2005 and following up with APIs for many of its other products. More recently, the Android API has generated a number of fascinating independent developments in today’s white-hot area of augmented reality. So: how does one keep up with all the API wonderfulness out there?

Enter Goollery, a comprehensive gallery of Google-related projects from around the world.

Inviting you to browse by Google product or project date, the collection features gems ranging from a map of where iconic album covers were shot, to an artist who paints scenes and locations he has only experienced via Street View, to Layar, the critically acclaimed new augmented reality browser for Android.

Among our favorites is the Tate’s mashup, which lets you explore locations depicted in artwork from the National Collection of British Art using Street View. Looking at a place from a Renaissance painting and seeing it today somehow captures our cultural evolution on a multitude of levels, from the aesthetic to the social to the environmental.

Explore Goollery for more fascinating celebrations of voyeurism and the freedom to roam around in other people’s data.


13 JULY, 2009

Digital Choreography: Synchronous Objects


Twenty desks, one Python, and what the human body has to do with lines of code.

Motion is a thing of beauty. Think about dance choreography — the human body in motion. Or the best stop-motion animation — pixels in motion. Naturally, the convergence of the two — choreography and digital motion — would be a magnificent thing.

Enter Synchronous Objects — a collaboration between American choreographer William Forsythe and Ohio State University Dance & Technology director Norah Zuniga Shaw.

The film plays off the famed Forsythe piece One Flat Thing, Reproduced, using digital technology in a way that lets motion inform choreography. The project embodies the cross-pollination of ideas from different fields — dance, software, technology, sound design, motion graphics.

With this project, we seek to construct a new way of looking at dance, one that considers both discipline specific and cross-disciplinary ways of seeing.

Although this version of the film was done in Adobe Flash, upcoming work is experimenting with Field — a rich new open-source authoring environment built on Java and Jython (Python on the Java VM) and designed for use in digital movement and visual expression. Field was conceived at — where else — the MIT Media Lab and has been used for everything from choreographic sequences to HD motion-graphics installations.

Synchronous Objects and the technologies that inform it present a brave new frontier for motion arts, a future where human and algorithm come together to orchestrate beauty.

We highly recommend watching Synchronous Objects with headphones on — the sound effects alone are a piece of magic, adding a whole new layer to the already superb visual experience.


05 MARCH, 2009

Sign of The Times: Data Visualization Heaven


158 years of the cultural dialog, replayed and rewritten in visual language.

Newspapers have long been a paradise of visual information — from the early 20th-century Isotype language pioneered by Otto Neurath to the elaborate vintage infographics we so love. So imagine our excitement when The New York Times announced Times Open last week, an open API initiative encouraging the development of applications around The Times‘ enormous vault of data.

If you swim in the shallow end of the geek pool, fear not: here are the CliffsNotes on APIs. The term stands for application programming interface, and an API is pretty much what shapes the behavior of one application as it interacts with others. For example, a WordPress plugin that displays your latest tweets on your blog uses the Twitter API to work the magic.
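To make that concrete, here is a minimal sketch of what talking to such an API looks like in practice, using The Times’ own Article Search API as the example. The endpoint, parameters, and response fields shown reflect our understanding of the current public documentation — treat them as assumptions — and you would need your own key from developer.nytimes.com.

import requests

API_KEY = "YOUR_NYT_API_KEY"  # placeholder -- register for your own key
SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def search_archive(query, begin_date=None, end_date=None):
    """Return (publication date, headline) pairs for archive articles matching `query`."""
    params = {"q": query, "api-key": API_KEY}
    if begin_date:
        params["begin_date"] = begin_date  # format: YYYYMMDD
    if end_date:
        params["end_date"] = end_date
    response = requests.get(SEARCH_URL, params=params)
    response.raise_for_status()
    docs = response.json()["response"]["docs"]
    return [(doc["pub_date"], doc["headline"]["main"]) for doc in docs]

if __name__ == "__main__":
    for pub_date, headline in search_archive("moon landing", "19690701", "19690801"):
        print(pub_date, headline)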

But what makes the NYT development particularly important is that the API opens up data from the paper’s entire 158-year archive — from the Civil War to the moon landing to the latest Radiohead album reviews — allowing developers and artists alike to do just about anything with it.

And they already are.

Vancouver-based generative software artist Jer Thorp has done a series of visualizations exploring the social conversation around certain terms as reflected in The Times over the last 27 years.

From the gossip on sex and scandal, to a face-off between the most iconic superheroes, to the increasing anxiety about global warming, the series is a visual documentary of our collective concern over issues big and small, the kind of mundane chatter and momentous movements that define a culture.

San Diego artist and developer Tim Schwartz is digging even deeper with visualizations of history that use The New York Times’ entire 158-year corpus of data. His interface plots terms over time, exploring how the cultural dialog has changed as society has evolved. It’s amazing to think that some of our cultural givens were virtually nonexistent in public discourse less than a century ago — homosexuality, for example, was practically unspoken of publicly until the 1970s.
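A toy sketch of the underlying idea — not Schwartz’s actual code — might simply count, for a given term, how many archive articles the Article Search API reports per decade, using the same assumed endpoint and key as above:

import time
import requests

API_KEY = "YOUR_NYT_API_KEY"  # placeholder
SEARCH_URL = "https://api.nytimes.com/svc/search/v2/articlesearch.json"

def hits_between(term, begin, end):
    """Number of archive articles mentioning `term` between two YYYYMMDD dates."""
    params = {"q": term, "begin_date": begin, "end_date": end, "api-key": API_KEY}
    response = requests.get(SEARCH_URL, params=params)
    response.raise_for_status()
    return response.json()["response"]["meta"]["hits"]

def hits_by_decade(term, start_year=1900, end_year=2000):
    """Map each decade's starting year to the number of matching articles."""
    counts = {}
    for year in range(start_year, end_year, 10):
        counts[year] = hits_between(term, f"{year}0101", f"{year + 9}1231")
        time.sleep(6)  # be polite to the API's rate limits
    return counts

if __name__ == "__main__":
    for decade, hits in hits_by_decade("homosexuality").items():
        print(f"{decade}s: {hits} articles")

Plot those counts over time and you have a crude version of the term-frequency curves Schwartz’s interface draws.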

But perhaps most fascinating is how this changes, and almost reverses, the relationship between newspapers and data visualization. Traditionally, infographics in publishing are visual representations of external information that complement the newspaper’s depiction of the outside world — its message. This — the visualization of metadata about the newspaper itself — is pretty much the opposite: an introspective analysis of the medium as it shapes the message.

If you find yourself intrigued by and drawn to this world of data visualization, do check out this excellent introduction to it, a wonderful find by our friends at BBH Labs.

via Nieman Journalism Lab