The organizers of Build Peace tell me it was the first conference specifically on peace and technology, and they should know. I don’t know the peace building field very well, but I could see that some of its leading lights were in attendance. I learned quite a bit, and I am very glad I went.
I have to start by saying I don’t think “technology for peace” is a sure win. My understanding is that peace building is incredibly difficult work, and rarely truly successful, and I don’t see why technology necessarily changes that. Yet I am also a technologist and I presented some of my own data-driven peace work at the conference. Clearly I believe it might be good for something.
There is a great need for conversations between capable conflict resolution workers and thoughtful technologists — hence this conference. Here are some of the things I think I learned.
UPDATE: Debrouwere continues the conversation with a response to the key points here, in the comments to his original post.
Dutch journalist/coder Stijn Debrouwere has written a very thorough post describing the ways in which standard tags, like the ones on this blog or on Flickr, fall short when applied to news articles. There are lots of things we might like to know about a story, such as where and when it happened and who was involved. This additional information, sort of like the index to a book, is known as “metadata”, and there is within the online journalism community a great call for its use, including by Debrouwere:
Each story could function as part of a web of knowledge around a certain topic, but it doesn’t.
So here’s a well-intentioned idea you’ve heard before: journalists should start tagging. Jay Rosen insists that “getting disciplined and strategic about tagging may be one way professional journalism separates itself from the flood of cheap content online.” Tags can show how a news article relates to broader themes and topics. Just the ticket.
News metadata is a major topic, and many people have thought deeply about the value of creating news metadata at the time of reporting, such as the ever-sarcastic Xark and the thoughtful Martin Belam, who writes about why “linked data” is good for journalism. But I’m going to respond to Debrouwere because I read him today, because he has lovely diagrams that explain his good ideas, and because, in criticizing “tags” as a form of metadata, I think he misses some very important points.
And he’s not alone. My sense is that many of the coder-journalists of today have not learned from the mistakes of generations of technically minded people who wished to talk about the world in more precise ways.
Moving forward from simple tagging, Debrouwere imagines more sophisticated annotation schemes that start to pick up on what the tags actually mean. For starters, the tags could be drawn from separate “vocabularies.” Does a tag refer to a person, or a place, or perhaps an event? Debrouwere uses the following picture, which I’m going to borrow here because it explains the idea so nicely:
But, he says, we can get even more sophisticated. What did the story actually say? If it mentioned a person, what did it say about them? Was it an interview? A profile? Did it criticize them? Here’s the diagram he draws:
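The data model behind diagrams like these is easy to sketch in code. Here’s a minimal Python version; the vocabulary and relation names are my own inventions for illustration, not Debrouwere’s:

```python
from dataclasses import dataclass
from enum import Enum

class Vocabulary(Enum):
    """Separate vocabularies, so a tag is more than a bare string."""
    PERSON = "person"
    PLACE = "place"
    EVENT = "event"
    TOPIC = "topic"

@dataclass(frozen=True)
class Tag:
    vocabulary: Vocabulary
    value: str             # e.g. "Jay Rosen", "Port-au-Prince"

@dataclass(frozen=True)
class Annotation:
    """A typed link between a story and a tag: not just
    'this story mentions X' but *how* it mentions X."""
    tag: Tag
    relation: str          # e.g. "interviews", "profiles", "criticizes"

# One story's annotations, under this toy scheme:
story_annotations = [
    Annotation(Tag(Vocabulary.PERSON, "Jay Rosen"), "quotes"),
    Annotation(Tag(Vocabulary.TOPIC, "news metadata"), "is-about"),
]
```

The shift is from tags as flat strings to tags as typed objects with stated relationships, which is exactly what the diagrams are getting at.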
My iPhone seemed to work better on ice, so I spent the last two hours alternating between chilling it in the freezer and pressing buttons. The WiFi kept cutting out, and I read somewhere that one of the failure modes for the iPhone radio was thermal. Amazingly enough, it worked, and the WiFi would run for maybe three minutes after ten minutes of chilling. I desperately needed it to work, because my 3G service was down until I could install Ultrasn0w, the iPhone unlocking software. Which can only be installed by a program called Cydia, which only downloads new software over a WiFi network. I have to use unlocking software in the first place because US model iPhones are keyed to work only with AT&T, which doesn’t exist in Hong Kong. I successfully unlocked my phone months ago, and everything was working fine until I upgraded the firmware, which I did in the hopes of fixing the WiFi which failed last week.
If you didn’t follow that, consider yourself fortunate. You’ve never needed to wonder about such things.
It gets better. When I reset my phone it lost the WiFi password to my home network. I couldn’t find it written down. I couldn’t remember the password to log into my router to look it up. The internet told me how to reset the router at the hardware level, but to reconfigure the wireless I’d need to connect my laptop to it with a cable. Which I didn’t have. Luckily, I eventually remembered the router password.
I started drinking.
Password problem solved, every ten minutes I’d open the freezer door, reset the WiFi on my phone, wait for Cydia to download its package list, then tell it to download the mere 50 KB of Ultrasn0w and hope to hell the radio didn’t blink out in the middle of the tiny transfer. Now I know exactly how many bars I get in the back of the freezer.
After eight or nine tries, I opened the freezer door to find my phone on the 3G network. Success!
Actually, it was way more involved than this. I left out a bunch of steps, all the things I tried that didn’t work. And of course the firmware upgrade did not fix the WiFi, so this experiment put me right back where I started and wasted six hours of my life and two tumblers of rather nice whiskey. At least I didn’t have to go out of my way to retrieve the ice.
Last year I imagined an iPhone app that superimposed virtual objects over video from the phone’s camera. With the advent of the iPhone 3GS and its built-in compass, it’s now happening.
This video shows NearestWiki, which tags nearby landmarks/objects and guides you to them. I am aware of a few other AR apps, as this post on Mashable and this AP story discuss. Many of these apps do building/object recognition, and one even recognizes faces and displays a sort of business card. We’re already seeing annotation with data from Wikipedia, Twitter and Yelp, and I suspect that we’re going to see these tools get very deep in the very near future, with Wikipedia-style tagging of the entire history and context of any object.
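Under the hood, most of these apps share the same bit of geometry: take the phone’s GPS fix and compass heading, compute the bearing to a landmark, and decide where (or whether) its label lands on screen. A rough sketch, with made-up coordinates and a made-up field of view:

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def screen_x(bearing, heading, fov_deg=60, screen_w=320):
    """Horizontal pixel position for a label, or None if off-screen."""
    # Signed angle between where the camera points and the landmark.
    delta = (bearing - heading + 180) % 360 - 180
    if abs(delta) > fov_deg / 2:
        return None  # landmark is outside the camera's field of view
    return screen_w / 2 + (delta / (fov_deg / 2)) * (screen_w / 2)

# Phone in Hong Kong facing northeast (heading 45 degrees); where does a
# label for a landmark slightly to its left land on a 320px-wide screen?
b = bearing_deg(22.28, 114.16, 22.30, 114.18)
print(screen_x(b, heading=45.0))   # a bit left of center
```

Add object recognition on top of that loop and you have the fancier apps in the roundups above.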
Just a moment while I get over the fact that the future is already here.
Ok, I’m properly jaded again. Yeah, it’s an app platform, and that’s cool — but imagine the possibilities for art. Bets on who’s going to make the first “alternate reality spyglass” piece? Bets on how much Matthew Barney will sell it for in the app store?
The ambition of the RepRap project (“replicating rapid-prototyper”) is undeniably cool: to design a machine which is essentially a self-replicating 3D printer. By building up objects layer by layer, rapid prototyping technology can be used to manufacture the parts for just about any simple object or machine. It would be like having your own little factory in exactly the same way that having a laser printer is like having your own printing press, except that you can use this little factory to make another factory to give to your friend.
Theoretically, desktop manufacturing technology then spreads exponentially, until everyone can make whatever material objects they need from downloaded plans, for only the cost of feed plastic.
The dream is best explained in this excellent little video:
It’s hard to overstate the fundamental shift that would come with truly widespread desktop manufacturing. Right now all of the objects we use are manufactured somewhere far away and shipped to us, and the designs are expensive and slow to change. Instead, imagine if everyone had a household appliance, perhaps fed by spools of plastic and metal wire, that could manufacture just about any object from plans downloaded from the internet. It’s hard to see how proprietary designs could compete with millions of amateur object designers geeking out over their widgets for the benefit of humanity, which means that designs for all the basic desirable objects would be freely available.
Want a new phone? Download the latest Android phone plan from the Open Handset Alliance. That’s cool, but the really cool thing is this: everyone in the world could have one for the price of plastic. More to the point, everyone in the world could have, for example, irrigation pumps, car parts, light switches, medical devices: essentially all the trappings of modern technology.
It is of course debatable whether or not an increase in humanity’s use of energy-consuming technology is a good idea at this time. However, it seems to me unconscionable to deny it to the world’s poor just because we got there first. Further, one could also replicate the parts for home biomass reactors, electric cars, and other advanced energy devices — regardless of whether or not anyone can make a profit selling such items commercially.
New versions of the replicator with enhanced production capabilities (now with integrated circuits!) would be designed to be manufacturable using existing models. This means that manufacturing technology would itself spread virally. To bootstrap this, all you need are a few basic self-replicating machines, then the technology passes from friend to friend until the whole world is saturated and capable of producing all future upgrades.
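The arithmetic of that saturation is worth spelling out. A back-of-envelope calculation, assuming (generously) that each machine can print the parts for one copy of itself per month, starting from 100 seed machines:

```python
# Back-of-envelope: if each machine prints the parts for one copy of
# itself per month (an assumption for illustration, not a RepRap spec),
# how long until there are a billion machines?
machines, months = 100, 0     # assume 100 seed machines to start
while machines < 1_000_000_000:
    machines *= 2             # every machine builds one more each month
    months += 1
print(months)                 # 24 months: exponential growth saturates fast
```

Even with wildly more pessimistic assumptions about replication time and failure rates, doubling gets you to planetary scale in years, not centuries. That is what makes the self-replication idea so seductive.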
But we are nowhere near that dream. There’s a lot of promise to desktop manufacturing, but I’ve come to believe that the RepRap approach is probably not the right one. And I’m going to try to explain why.
I was recently pointed to the most amazing thing, a music / fire / street performance called Glissendo, conceived by one “Ulik, the Mechanical Clown” and executed by French art group Le Snob. They’re playing “Lightning” by Philip Glass as a Dixieland band, riding Segways under their robes, and of course the band leader has dual hand-mounted flamethrowers.
Elegant, beautiful, and strangely sad.
The only substantial thing I can find on this Ulik character is this video. In it, Ulik performs with some of his contraptions, such as a home-made jet-engine backpack (used with skis or rollerblades), a life-sized puppet who holds a camera and interviews him, and the front half of a car. It’s all wonderfully creative stuff, and it makes me wonder why we haven’t seen more high tech in the circus.
For the potential is ample. We could use modern control-system technology to perform previously impossible man-machine feats of daring. I wonder about automatically balancing Segways 30 feet high that one could dance on top of, harnesses connected to a crane that cancels out its own friction and inertia and modulates the effective gravity under performer control, a ridiculously precise robotic juggling partner, or powered jumping stilts with built-in balance and timing systems. This is not mere robotic circus; at their best, such machines become something between costume and vehicle, an extension of the performer’s body that makes them taller, stronger, faster, or able to move in excitingly inhuman ways.
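To make “modern control-system technology” concrete: every one of those machines is, at bottom, a feedback loop. Here’s a toy simulation of the classic case, an inverted pendulum held upright by a proportional-derivative (PD) controller. All the constants are invented for illustration, and the dynamics are simplified:

```python
# Toy simulation of the control loop behind any self-balancing machine:
# an inverted pendulum kept upright by a PD controller.
import math

g, length, dt = 9.81, 9.0, 0.01   # gravity, a 30-foot (~9 m) pole, timestep (s)
kp, kd = 40.0, 12.0               # controller gains (hand-tuned guesses)

theta, omega = 0.05, 0.0          # initial lean (radians) and angular velocity
for step in range(500):           # simulate 5 seconds
    # Controller: push back proportionally to the lean and the lean rate.
    accel_cmd = -(kp * theta + kd * omega)
    # Plant: pendulum falls away from vertical, plus the commanded correction.
    alpha = (g / length) * math.sin(theta) + accel_cmd
    omega += alpha * dt
    theta += omega * dt

print(f"lean after 5 s: {math.degrees(theta):.3f} degrees")  # ~0: balanced
```

A dancer on top of that pole is just a disturbance the loop has to reject, which is precisely the engineering problem a performance rig would pose.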
Given that such wide artistic and technological possibilities exist, I find it hard to believe that they won’t be developed. We may currently be witnessing the last generation of aerial circus that does not make heavy use of technology.
It is now possible to see what a person is looking at by scanning their brain. The technique, published last November by a team of Japanese neuroscientists, uses fMRI to reconstruct a digital image of the picture entering the eye, albeit at very low resolution and only after hundreds of training runs. Still, it’s an awesome development, and many articles covering this research have called it “mind reading” (1, 2, 3, 4, 5). But it really isn’t, and it’s fun to explore what real “mind reading” would imply.
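For flavor, here’s the shape of the technique in miniature: not the researchers’ actual model, just a synthetic demonstration of learning a linear map from voxel activity to pixel brightness over many training presentations, then applying it to a new scan:

```python
# Synthetic data throughout; this only illustrates the train-then-decode idea.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels, n_pixels = 400, 200, 100   # "hundreds of training runs"

# Pretend brain: voxel responses are a noisy linear function of the image.
true_map = rng.normal(size=(n_voxels, n_pixels))
images = rng.integers(0, 2, size=(n_trials, n_pixels)).astype(float)
voxels = images @ true_map.T + rng.normal(scale=5.0, size=(n_trials, n_voxels))

# Ridge regression: learn weights mapping voxel activity -> pixel values.
lam = 10.0
W = np.linalg.solve(voxels.T @ voxels + lam * np.eye(n_voxels), voxels.T @ images)

# Decode a brand-new scan into a (low-res, noisy) image estimate.
test_img = rng.integers(0, 2, size=n_pixels).astype(float)
test_vox = test_img @ true_map.T
reconstruction = test_vox @ W
print(np.corrcoef(reconstruction, test_img)[0, 1])   # correlation with truth
```

Notice what the decoder needs: hundreds of labeled examples of this one person looking at known pictures. That is a long way from plucking arbitrary thoughts out of a stranger’s head.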
When I hear “mind reading” I want psychic abilities. I want to be able to know what number you’re thinking of, where you were on the night of March 4th, and what you actually think of my souffle. This is the sort of technology that could be badly misused, as the comments on one blog note:
Am I the only one finding this DEEPLY disturbing? It opens the doors to some of the scariest 1984-style total-control future predictions. Imagine you can’t hide your f#&%!ng MIND!
Fortunately, we’re not there yet. Moreover, if we did have the technology to read minds, we’d have much bigger societal issues than privacy to deal with. The existence of “mind reading machines” would imply that we possessed good formal models of the human mind, and that is a can of worms.
How much overlap is there between the web in different languages, and what sites act as gateways for information between them? Many people have constructed partial maps of the web (such as the blogosphere map by Matthew Hurst, above) but as far as I know, the entire web has never been systematically mapped in terms of language.
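If I were to attempt it, the survey loop might look something like this (a skeleton only, assuming the third-party requests, beautifulsoup4, and langdetect packages, and ignoring everything a real crawl needs: politeness, deduplication, scale):

```python
# Crawl pages, detect each page's language, and count links that cross
# a language boundary. The seed URL is a placeholder.
from collections import Counter
from urllib.parse import urljoin
import requests
from bs4 import BeautifulSoup
from langdetect import detect

def page_language_and_links(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    lang = detect(soup.get_text())
    links = [urljoin(url, a["href"]) for a in soup.find_all("a", href=True)]
    return lang, links

cross_links = Counter()            # (source_lang, target_lang) -> link count
frontier = ["https://example.com"] # placeholder seed list
for url in frontier[:10]:
    try:
        src_lang, links = page_language_and_links(url)
        for link in links[:20]:
            dst_lang, _ = page_language_and_links(link)
            cross_links[(src_lang, dst_lang)] += 1
    except Exception:
        continue                   # dead links, non-HTML, undetectable text

print(cross_links.most_common(10))
```

The pages that rack up unusually many cross-language edges would be the gateways I’m curious about.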
Of course, what I actually want to know is, how connected are the different cultures of the world, really? We live in an age where the world seems small, and in a strictly technological sense it is. I have at my command this very instant not one but several enormous international communications networks; I could email, IM, text message, or call someone in any country in the world. And yet I very rarely do.
Similarly, it’s easy to feel like we’re surrounded by all the international information we could possibly want, including direct access to foreign news services, but I can only read articles and watch reports in English. As a result, information is firewalled between cultures; there are questions that could easily be answered by any one of tens or hundreds of millions of native speakers, yet are very difficult for me to answer personally. For example, what is the journalistic slant of al-Jazeera (the original one in Arabic, not the English version, which is produced by a completely different staff)? Or suppose I wanted to know what the average citizen of Indonesia thinks of the sweatshops there, or what is on the front page of the Shanghai Times today, and whether such a newspaper even exists. What is written on the 70% of web pages that are not in English?
The Turkish Government censors internet access from within the country, as I discovered yesterday when attempting to access YouTube from the Turkish town of Selçuk, as this screenshot shows:
The English text on this page reads: “Access to this web site is banned by ‘TELEKOMÜNİKASYON İLETİŞİM BAŞKANLIĞI’ according to the order of: Ankara 1. Sulh Ceza Mahkemesi, 05.05.2008 of 2008/402”
Just to complete the irony, I was looking for a video of the Oscar Grant shooting when I first discovered this “blocked site” page.
Electronic voting machines were supposed to make elections cheaper, faster, and more secure, but so far they have failed. In the last decade there has been something of a rush to adopt e-voting, followed by suspicion and controversy over the black-box, “just trust us” nature of the first generation of commercial systems, followed by a return to paper ballots in many jurisdictions. However, if we wish to improve election processes, cheap and fast is probably the wrong goal. It may be possible to use cryptographic techniques to implement end-to-end auditable elections, something new in human history.
The e-voting fiasco has illustrated that paper ballots are a better system than they might at first seem. Paper preserves voter secrecy, it is auditable after the fact, and it is even reasonably transparent, if one also allows election observers. But paper ballots must be closely guarded and cannot be directly counted by members of the general public, who in the end have no choice but to trust election officials, observers, counting equipment, and the entire chain of custody. Rather than simply duplicating paper ballots electronically, we should strive to improve upon them.
This seems to be possible. Modern cryptography suggests the possibility of a new kind of incredibly transparent and fair election, where ordinary citizens can verify the soundness of the election for themselves, without ever needing to trust blindly that a huge array of machines and people have acted correctly. This represents a fundamentally new ability: for the first time, it may be possible to hold truly “open” elections.
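To give the flavor of “verify it yourself” (and only the flavor: real end-to-end schemes use machinery like mixnets or homomorphic tallies and, crucially, protect ballot secrecy, which this toy does not), here is a hash-commitment sketch of a voter checking their own ballot against a public bulletin board:

```python
# Toy illustration: each voter gets a receipt and can later check it
# against a public bulletin board. NOT a real voting scheme; anyone who
# learns your nonce learns your vote, so secrecy is not protected here.
import hashlib
import secrets

def commit(vote: str, nonce: str) -> str:
    return hashlib.sha256(f"{nonce}:{vote}".encode()).hexdigest()

bulletin_board = []                    # published for all to see

def cast(vote: str) -> tuple[str, str]:
    nonce = secrets.token_hex(16)
    receipt = commit(vote, nonce)
    bulletin_board.append(receipt)     # vote recorded-as-cast
    return receipt, nonce              # the voter keeps both

# A voter casts a ballot, then independently verifies it was counted.
receipt, nonce = cast("alice")
assert receipt in bulletin_board            # my vote is on the board
assert commit("alice", nonce) == receipt    # and it says what I meant
print(f"{len(bulletin_board)} ballot(s) posted; receipt verified")
```

The point of the toy is the trust model: verification needs nothing but the public board and your own receipt, not faith in any official, machine, or chain of custody.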