Innovation that sticks, or how to turn nature’s aggravations into universal usefulness.
This year, Velcro — one of the world’s most beloved multipurpose inventions — celebrates its 60th birthday, and today marks the 53rd anniversary of Velcro’s US patent. The miracle fastener was the brainchild of Swiss electrical engineer George de Mestral. One afternoon, as he was taking a walk in the forest, he noticed that burrs — the seeds of burdock thistle — stuck to his clothes and wondered how they did that. So he excitedly rushed home, stuck one under the microscope, and spent the next ten years perfecting nature’s brilliant hook-and-loop adhesion mechanism, eventually producing one of history’s smartest applications of biomimetic design.
To celebrate Velcro’s birthday, here are three different animated short films that tell the same great story of ingenuity and perseverance in just over a minute each.
From HowStuffWorks, here’s a characteristically short-and-sweet evaluation of the invention. Though I have to disagree with their 2/5 on the benefits-to-humanity scale — anything that’s good enough for NASA should be good enough for at least a 4.
From Pan-African media portal ABN Digital, a beat-by-beat recap of the chronology of Velcro’s invention and its impact as a zipper alternative.
How the web gives us what we want to see, and that’s not necessarily a good thing.
Most of us are aware that our web experience is somewhat customized by our browsing history, social graph and other factors. But this sort of information-tailoring takes place on a much more sophisticated, deeper and far-reaching level than we dare suspect. (Did you know that Google takes into account 57 individual data points before serving you the results you searched for?) That’s exactly what Eli Pariser, founder of public policy advocacy group MoveOn.org, explores in his fascinating and, depending on where you fall on the privacy spectrum, potentially unsettling new book, The Filter Bubble — a compelling deep-dive into the invisible algorithmic editing on the web, a world where we’re being shown more of what algorithms think we want to see and less of what we should see.
I met Eli in March at TED, where he introduced the concepts from the book in one of this year’s best TED talks. Today, I sit down with him to chat about what exactly “the filter bubble” is, how much we should worry about Google, and what our responsibility is as content consumers and curators — exclusive Q&A follows his excellent TED talk:
“The primary purpose of an editor [is] to extend the horizon of what people are interested in and what people know. Giving people what they think they want is easy, but it’s also not very satisfying: the same stuff, over and over again. Great editors are like great matchmakers: they introduce people to whole new ways of thinking, and they fall in love.” ~ Eli Pariser
What, exactly, is “the filter bubble”?
EP: Your filter bubble is the personal universe of information that you live in online — unique and constructed just for you by the array of personalized filters that now power the web. Facebook contributes things to read and friends’ status updates, Google personally tailors your search queries, and Yahoo News and Google News tailor your news. It’s a comfortable place, the filter bubble — by definition, it’s populated by the things that most compel you to click. But it’s also a real problem: the set of things we’re likely to click on (sex, gossip, things that are highly personally relevant) isn’t the same as the set of things we need to know.
How did you first get the idea of investigating this?
EP: I came across a Google blog post declaring that search was personalized for everyone, and it blew my mind. I had no idea that Google was tailoring its search results on an individual basis at all — the last I’d heard, it was showing everyone the same “authoritative” results. I got out my computer and tried it with a friend, and the results were almost entirely different. And then I discovered that Google was far from the only company that was doing this. In fact, nearly every major website is, in one way or another. (Wikipedia is a notable exception.)
In an age of information overload, algorithms are certainly more efficient at finding the most relevant information about what we’re already interested in. But it’s human curators who point us to the kinds of things we didn’t know we were interested in until, well, until we are. How does the human element fit into the filter bubble and what do you see as the future of striking this balance between algorithmic efficiency and curatorial serendipity?
EP: The great thing about algorithms is that, once you’ve got them rolling, they’re very cheap. Facebook doesn’t have to pay many people to edit the News Feed. But the News Feed also lacks any real curatorial values — what you’re willing to Like is a poor proxy for what you’d actually like to see or especially what you need to see. Human curators are way better at that, for now — knowing that even though we don’t click on Afghanistan much we need to hear about it because, well, there’s a war on. The sweet spot, at least for the near future, is probably a mix of both.
One interesting place this comes up is at Netflix — the basic math behind the Netflix recommendation engine tends to be conservative. Netflix evaluates its predictions using a measure called Root Mean Squared Error (RMSE, to geeks), which basically calculates the “distance” between the ratings it predicts and the ratings you actually give. The problem with optimizing for RMSE is that while it’s very good at predicting what movies you’ll like — generally it’s under one star off — it’s conservative. It would rather be right and show you a movie that you’ll rate a four than show you a movie that has a 50% chance of being a five and a 50% chance of being a one. Human curators are often more likely to take these kinds of risks.
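To make the point concrete, here is a toy sketch of how RMSE scores predicted ratings against actual ones — the movies and star ratings below are invented for illustration, not Netflix data. Note how a single risky guess that misses badly (predicting a five for a movie that turns out to be a one) dominates the score, which is exactly why an RMSE-optimizing system plays it safe:

```python
import math

def rmse(predicted, actual):
    """Root Mean Squared Error between predicted and actual star ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

# Hypothetical 1-to-5 star ratings for four movies.
predicted = [4.2, 3.8, 2.5, 4.9]
actual    = [4.0, 4.0, 3.0, 5.0]

print(round(rmse(predicted, actual), 3))  # small error: safe predictions

# A safe "four" that's one star off contributes (4 - 3)^2 = 1 to the sum,
# but a bold "five" for a movie the viewer rates a one contributes
# (5 - 1)^2 = 16 — so risky recommendations get punished heavily.
safe_bet  = rmse([4.0], [3.0])
bold_miss = rmse([5.0], [1.0])
print(safe_bet, bold_miss)
```

Because squared error grows quadratically with the size of a miss, the math itself nudges the system toward sure-thing fours over possible fives.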
How much does Google really know about us, in practical terms, and — more importantly — how much should we care?
EP: That depends on how much you use Google — about me, it knows an awful lot. Just think: it’s got all of my email, so it not only has everything I’ve written to friends, it has a good sense of who I’m connected to. It knows everything I’ve searched for in the last few years, and probably how long I lingered between searching for something and clicking the link. There are 57 signals that Google tracks about each user, one engineer told me, even if you’re not logged in.
Most of the time, this doesn’t have much practical consequence. But one of the problems with this kind of massive consolidation is that what Google knows, any government that is friends with Google can know, too. And companies like Yahoo have turned over massive amounts of data to the US government without so much as a subpoena.
“Companies like Yahoo have turned over massive amounts of data to the US government without so much as a subpoena.” ~ Eli Pariser
I’d also argue there’s a basic problem with a system in which Google makes billions off of the data we give it without giving us much control over how it’s used or even what it is.
Do you think that we, as editors and curators, have a certain civic responsibility to expose audiences to viewpoints and information outside their comfort zones in an effort to counteract this algorithmically-driven confirmation bias, or are people better left unburdened by conflicting data points?
EP: In some ways, I think that’s the primary purpose of an editor — to extend the horizon of what people are interested in and what people know. Giving people what they think they want is easy, but it’s also not very satisfying: the same stuff, over and over again. Great editors are like great matchmakers: they introduce people to whole new ways of thinking, and they fall in love.
Is it possible to reconcile personalization and privacy? What are some things we could do in our digital lives to strike an optimal balance?
EP: Well, personalization is sort of privacy turned inside out: it’s not the problem of controlling what the world knows about you, it’s the problem of what you get to see of the world. We ought to have more control over that — one of the most pernicious things about the filter bubble is that mostly it’s happening invisibly — and we should demand it of the companies we use. (They tend to argue that consumers don’t care — we should let them know we do.)
On an individual level, I think it comes down to varying your information pathways. There was a great This American Life episode which included an interview with the guy who looks at new mousetrap designs at the biggest mousetrap supply company. As it turns out, there’s not much need for a better mousetrap, because the standard trap does incredibly well, killing mice 90% of the time.
The reason is simple: Mice always run the same route, often several times a day. Put a trap along that route, and it’s very likely that the mouse will find it and become ensnared.
So, the moral here is: don’t be a mouse. Vary your online routine, rather than returning to the same sites every day. It’s not just that experiencing different perspectives and ideas and views is better for you — serendipity can be a shortcut to joy.
Ed. note: The Filter Bubble is out today and one of the timeliest, most thought-provoking books I’ve read in a long time — required reading as we embrace our role as informed and empowered civic agents in the world of web citizenship.
Donating = Loving
Bringing you (ad-free) Brain Pickings takes hundreds of hours each month. If you find any joy and stimulation here, please consider becoming a Supporting Member with a recurring monthly donation of your choosing, between a cup of tea and a good dinner:
You can also become a one-time patron with a single donation in any amount:
Brain Pickings has a free weekly newsletter. It comes out on Sundays and offers the week’s best articles. Here’s what to expect. Like? Sign up.
From longevity science to robotics to cancer research, Stevenson explores the most cutting-edge ideas in science and technology from around the world, the important ethical and philosophical questions they raise and, perhaps most importantly, the incredible potential for innovation through the cross-pollination of these different ideas and disciplines.
“This is a book that won’t tell you how to think about [the future], but will give you the tools to make up your mind about it. Whether you’re feeling optimistic or pessimistic about the future is up to you, but I do believe you should be fully informed about all the options we face. And one thing I became very concerned about is when we talk about the future, we often talk about it as a damage-limitation exercise. That needn’t be the case — it could be a Renaissance.” ~ Mark Stevenson
Stevenson proposes a number of mental reboots to shift some of our present cognitive bad habits, from linear thinking about the future to hierarchical, top-down views of innovation.
Brain Pickings participates in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for sites to earn commissions by linking to Amazon. In more human terms, this means that whenever you buy a book on Amazon from a link on here, I get a small percentage of its price. That helps support Brain Pickings by offsetting a fraction of what it takes to maintain the site, and is very much appreciated.