Despite our sleep-deprived state, we managed to live-blog our way through another day of TED. Well, sort of — the guys at Long Beach had severe technical difficulties on their end, which effectively reduced TED to ED for a couple of hours.
But sleeplessness and glitches notwithstanding, it was a phenomenal day, embellished with a number of well-deserved standing ovations. Neurologist extraordinaire Oliver Sacks opened the See session with a fascinating talk about a specific kind of visual hallucination in perfectly sane blind patients, called Charles Bonnet syndrome, which occurs because visual receptors become hyperactive when they receive no real input. Apparently, over 10% of blind people experience this, but only about 1% ever acknowledge it, for fear of being ridiculed and perceived as insane — what a stark reminder of the clash between cognitive health and social health.
UCSB researcher JoAnn Kuchera-Morin followed, introducing perhaps the most fascinating piece of the night: The AlloSphere machine, a “scientific data discovery and artistic creation tool.” She proceeded to show phenomenal imagery, including the AlloBrain — a project that builds a medical narrative from real fMRI data mapped sonically and visually, with tremendously rich potential for medical application.
Also shown: the multi-center hydrogen bonds of a new material used for transparent solar cells, a clearly gigantic stride for clean energy. The footage itself was absolutely stunning, especially framed in the knowledge that it’s all real, not a CGI simulation.
Another visually and conceptually captivating talk came next, with light and space sculptor Olafur Eliasson. He tossed the audience into a visual experiment right there on the stage screen, demonstrating the link between eye and brain in a very raw, tangible way before introducing his equally compelling work — work that is, above all, creating a sense of consequence by making space accessible and instilling in people a sense of community and togetherness.
Olafur’s entire talk was a string of eye-opening epiphanies on the nature of art, our relationship to the world and each other, our shared sense of responsibility.
Art is obviously not just about decorating the world, but also about taking responsibility.
Ed Ulbrich was next, with perhaps the biggest shock-value jaw-dropper of the night: He took us behind the scenes of The Curious Case of Benjamin Button, revealing that Brad Pitt’s character was actually entirely computer-generated from the neck up. Crafting it was so laborious that just the person responsible for the character’s eye system spent over two full years on it. (Pause on that for a moment.)
They also had to create every possible lighting condition for the character, in order to make him appear realistic and believable in all scenes.
Their biggest challenge was animating facial emotion. Traditionally, facial animation is done by recording the motion on 100 surface polygons, with 100 tracking points. But they found that the richest emotional information came from the stuff between the points. So, unsurprisingly to us, they turned to one of our greatest heroes: Paul Ekman and his brilliant Facial Action Coding System.
Setting up a system of 3D cameras, they were able to record a surface of over 100,000 polygons, tracking 10,000 points.
We ended up calling the entire process “emotion capture” rather than “motion capture.”
In the end, Ulbrich made an excellent point that most of us hardly give much thought to: Despite the technological advances and the computer-generated character, bringing it to life still depended on Brad Pitt’s unique acting skill and dramatic capacity — because the Button character, tech smoke-and-mirrors notwithstanding, is but a digital puppet operated entirely by its actor-puppeteer.
Closing the See session was experimental audio-visual artist Golan Levin, who introduced a mind-blowing subtitling technology that animates text with the amplitude, pitch and frequency of the speaker’s voice, so that the text literally becomes alive with meaning. Levin also revealed his fascination with the human gaze, introducing a revolutionary eye-tracking system aimed at making the computer aware of what its user is looking at and able to respond.
What if art were aware we were looking at it? How would it respond?
He proceeded to show off another rather peculiar (by which we mean creepy-cool) extension of the technology: The Double-Taker, an enormous eye on a snout-like arm that follows a person as he or she moves through space, in a very organic albeit creepy way.
And although we were teched out of much of the Understand session that followed, regretfully missing anthropologist Nina Jablonski’s much-anticipated talk, we did catch Elizabeth Gilbert’s profound insight on the paradox of the creative process, which is always inevitably tied to anguish as artists fear being unable to outdo themselves creatively.
The final session, Invent, opened with iconic yet controversial architect Daniel Libeskind, whose reconstruction plan for Ground Zero was the people’s choice, but was tragically crushed by commercial pressures and had to give way to the current winner. Libeskind talked about the clash between hand and computer, pointing out the challenge of making the computer respond to the hand rather than vice versa.
Showcasing some of his phenomenal commercial and concept work, he raised deeper questions about the role of architecture in the human story.
Architecture is not only the giving of answers, it’s also the asking of questions.
Green auto pioneer Shai Agassi followed. Besides showing the enormous scale on which cars impact the world, he also drew a rather brilliant analogy: Before the Industrial Revolution, much of the U.K.’s labor force came from an immoral element — human slaves. And as soon as slavery ended, the Industrial Revolution began. We are, in effect, getting much of our energy from an immoral source, subjecting the planet to a form of slavery. Ending “planetary slavery” is the only way to the next social revolution.
The remaining talks showcased a broad range of truly revolutionary innovation in robotics and medicine, from Catherine Mohr’s amazing surgical robots, to Robert Full’s brilliant technology simulating the toe-peeling and air-righting of the gecko, to Daniel Kraft’s Marrow Miner tool that bypasses transplant pain by allowing local anesthesia while harvesting 10 times more marrow.
Finally, polymorphic playwright Sarah Jones, one of the best entertainers to ever hit the TED stage, closed the session with her truly — truly — captivating performance of her array of characters, each of whom she channels to an unbelievable level of believability. That’s one talk you’d want to see when it becomes available.
The last segment was the awarding of this year’s TED Prize, the streaming of which was accessible to everyone online and available in select theaters across the U.S. The winners — marine preservation advocate Sylvia Earle, space explorer Jill Tarter, and music education pioneer José Abreu — are every bit as deserving as you’d expect, so be sure to check out their wishes — and if you’re passionate about that field, you can even offer help to each of the three on his or her TED Prize page.
We’ll be live-blogging today as well, so be sure to follow us on Twitter if you’re into, you know, hearing stuff before everyone else does.