From Electronic to Cryptographic Voting

Electronic voting machines were supposed to make elections cheaper, faster, and more secure, but so far they have failed. In the last decade there has been something of a rush to adopt e-voting, followed by suspicion and controversy over the black-box, “just trust us” nature of the first generation of commercial systems, followed by a return to paper ballots in many jurisdictions. However, if we wish to improve election processes, cheap and fast is probably the wrong goal. It may be possible to use cryptographic techniques to implement end-to-end auditable elections, something genuinely new in human history.

The e-voting fiasco has illustrated that paper ballots are a better system than they might at first seem. Paper preserves voter secrecy, it is auditable after the fact, and it is even reasonably transparent, if one also allows election observers. But paper ballots must be closely guarded and cannot be directly counted by members of the general public, who in the end have no choice but to trust election officials, observers, counting equipment, and the entire chain of custody. Rather than simply duplicating paper ballots electronically, we should strive to improve upon them.

This seems to be possible. Modern cryptography suggests the possibility of a new kind of incredibly transparent and fair election, where ordinary citizens can verify the soundness of the election for themselves, without ever needing to trust blindly that a huge array of machines and people have acted correctly. This represents a fundamentally new ability: for the first time, it may be possible to hold truly “open” elections.


The 25th Annual Chaos Communication Conference

A geek soldering at the CCC

Geeks line the hallways, young men in black t-shirts each with a laptop. And they’re always young men. There are no girls here. There are a dozen open wifi networks and I wouldn’t trust any of them. There are tangles of cables. There are anarchists. There are flying robots and broken flying robots being soldered in public.

The air is thick with something, but I don’t know what.


How To End Poverty

It just occurred to me that it’s worth Googling this phrase.

The fact that this five-second act has a reasonable chance of turning up a representation of the most carefully argued opinions of a large cross-section of (admittedly English-speaking) humanity, including quite possibly the opinions of the people who might actually understand the problem best — I find that quite a testament to the progress of, you know, civilization. And that capability is only about a decade old.

Who says that the olden days were better?

Next up, an article about how the internet might still drastically fail to achieve its potential.

Supercomputer Social Experiment

I have a ridiculous idea for a game that will momentarily yield one of the most powerful computers ever.

I didn’t have the idea first, exactly. There’s a piece of software called FlashMob that automatically links whatever computers are nearby into a temporary grid computer. So, you could, for example, invite everyone over for pizza and run your cryptography hack until after the movie finished.

Of course, the software takes its name from flash mob, the social experiment. Game. Movement. Whatever. So what about putting the game back in the software?

It would require one concept and, after that, one email. The email would be from a reasonably socially connected person in any large and wired city to all of their friends and all the appropriate lists. The email would direct everyone to install FlashMob on their laptops, set their wireless to join a particular network, and show up at a particular time and place.

The concept is the hard part. This is the thing that would make people come, because it is the thing that would make it art. The central question is: what could you compute in an hour on five hundred laptops that was so cool or beautiful that it would inspire people to make it real?

Cyberspace is Everting

The phrase is due to William Gibson in his novel Spook Country, where artists use WiFi and GPS and VR goggles to create a new kind of art: virtual installations ghosted over the real world. Slip on the glasses and see River Phoenix’s body lying on the sidewalk in LA, or a giant squid hovering over Tokyo. Cyberspace begins to reach out to us, becomes “outside” instead of “inside.”

The thing is, you could do this now with an iPhone.

Here’s the plan: use the GPS to get an approximate fix, down to a few meters. Then look out through the camera to get a shot of the environment. Match this against data from Google Street View to recover precise camera position and orientation — the algorithms already exist. Composite in the ghosts, and display the result on the screen. The iPhone is now a window into cyberspace.
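To make the compositing step concrete, here is a toy pinhole-camera projection in Python. It assumes the hard part — recovering a precise camera position and heading — has already been done, and simply maps a world-anchored ghost point into pixel coordinates. The function names, the yaw-only rotation, and the default intrinsics are my simplifications for illustration, not any real iPhone API.

```python
import math

def project_ghost(ghost, cam, yaw_deg, f_px=1000.0, cx=320.0, cy=240.0):
    """Project a world point (east, north, up, in meters) into pixel
    coordinates for a camera at `cam` facing `yaw_deg` clockwise from
    north. Returns None if the point is behind the camera."""
    de, dn, du = (g - c for g, c in zip(ghost, cam))
    yaw = math.radians(yaw_deg)
    depth = de * math.sin(yaw) + dn * math.cos(yaw)   # along the view axis
    right = de * math.cos(yaw) - dn * math.sin(yaw)   # to the camera's right
    if depth <= 0:
        return None                                   # ghost is behind us
    u = cx + f_px * right / depth
    v = cy - f_px * du / depth                        # image y grows downward
    return (u, v)
```

A ghost ten meters due north of a north-facing camera lands at the image center; move it a meter east or up and it shifts right or up on screen, scaled by depth. Real pipelines would use a full rotation matrix and lens-distortion model, but the geometry is this simple at heart.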

You could use it to visualize reviews tagged to store windows. Watch data packets fly between cell towers. Follow a line on the sidewalk to your destination. Remind yourself of people’s names. Or, of course, for art. Imagine a real-time version of the strangely delightful Death Star over San Francisco.

Of course, there are problems. The iPhone doesn’t have the processing power to do this in real-time, so you’d be limited to snapshots on current hardware, a ghostly camera instead of a camcorder. Google Street View is also an expensive, proprietary (not to mention controversial) data set, and it’s not clear how Google would react to such novel uses. But our phones are becoming ever more powerful, and open maps are inevitable (either wiki-style, as in OpenStreetMap, or through data mining, as Photo Tourism does with Flickr). The pieces of a personal cyberworld viewing device already exist, and they’re getting faster and cheaper.

And of course, as soon as they’re fast and cheap enough, we’ll start to get used to the idea of seeing the world through an image-processed lens. We’ll instantly find new things to do with it; the old McLuhan/Gibson/Banks notion of externalized perception and cognition will suddenly become solidly mainstream and consumer.

I’m here to tell you it’s not long now.

Intelligent News Agents, With Real News

You cannot read all of the news, every day. There is simply too much information for even a dedicated and specialized observer to consume it all, so someone or something has to make choices. Traditionally, we rely on some other person to tell us what to see: the editor of a newspaper decides what goes on the front page, the reviewer tells us what movies are worth it. Recently, we have been able to distribute this mediation process across wider communities: sites like Digg, StumbleUpon, or Slashdot all represent the collective opinions of thousands of people.

The next step is intelligent news agents. Google (search, news, reader, etc.) can already be configured to deliver to us only that information we think we might want to see. It’s not hard to imagine much more sophisticated agents that would scour the internet for items of interest.

In today’s context, it’s easy to see how such agents could actually be implemented. Sophisticated customer preference engines are already capable of telling us what products we might like to consume — the best example is Amazon’s recommendation engine. It’s not a big leap to imagine using the same sort of algorithms to model the kinds of blog articles, web pages, YouTube videos, etc. that we might enjoy consuming, and then deliver these things to us.
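For concreteness, here is a tiny sketch of the item-to-item flavor of such an engine in Python: items are scored by cosine similarity over which users consumed them, and a user is recommended the unseen item most similar to what they already consumed. All names and data here are invented for illustration; Amazon’s actual engine is of course far more elaborate.

```python
import math

def item_similarity(consumed_by):
    """consumed_by: item -> set of users who consumed it.
    Returns cosine similarity for every pair of items."""
    sims = {}
    items = list(consumed_by)
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            overlap = len(consumed_by[a] & consumed_by[b])
            denom = math.sqrt(len(consumed_by[a]) * len(consumed_by[b]))
            sims[(a, b)] = sims[(b, a)] = overlap / denom if denom else 0.0
    return sims

def recommend(user_items, consumed_by, sims):
    """Suggest the unseen item most similar to what the user has consumed."""
    scores = {item: sum(sims.get((item, seen), 0.0) for seen in user_items)
              for item in consumed_by if item not in user_items}
    return max(scores, key=scores.get)
```

The same machinery works whether the “items” are books, blog posts, or videos; only the consumption log changes.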

There is a serious problem with this. You’re going to get exactly what you ask for, and only that.

True, we all do this already. We read books and consume media which more or less confirm our existing opinions. This effect is visible as clustering in what we consume, as in this example of Amazon sales data for political books in 2008.

Social network graph of Amazon sales of political books, 2008

The image above comes from a beautiful network analysis of Amazon co-purchase data. Basically, people buy either the red books or the blue books, but usually not both. The same sorts of patterns hold for movies, blogs, newspapers, ideologies, religions, and human beliefs of all kinds. This is a problem; but at least you can usually see the other color of books when you walk into Borders. If we end up relying on trainable agents for all of our information, we risk completely blacking out anything that disagrees with what we already believe.

I propose a simple solution. Automatic network analyses like the one above — of books, or articles, or web pages — could easily pinpoint the information sources that would expose me to the maximum novelty in the minimum time. If my goal is to gain a deep understanding of the entire scope of human discourse, rather than just the parts of it I already agree with, then it would be simple to program my agent to bring me exactly those things that would most rapidly give me insight into the regions of information space that are most vital and least known to me. I imagine some metric like “highest-degree node most distant from the nodes I’ve already visited” would work handily.

You can infer a lot about someone from the information they currently consume. If my agent noticed that I was a liberal, it could help me understand the conservative world-view, and vice-versa. If my agent detected that I was ignorant of certain crucial aspects of Chinese culture and politics, it could recommend a primer article. Or it might deduce that I needed to understand just slightly more physics to participate meaningfully in the climate change debate, or decide (based on my movie viewing habits) that it was high time I review the influential films of Orson Welles. Of course, I might in turn decide that I actually, truly, don’t care about film at all; but the very act of excluding specific subjects or categories of thought would force us, consciously, to admit to the boundaries of our mental worlds.

We could program our information gathering systems to challenge us, concisely and effectively, if we so want. Intelligent agents could be mere sycophants, or they could be teachers.

What Foxmarks Knows about Everyone

I recently installed Foxmarks, a Firefox extension that automatically synchronizes your web bookmarks across all the computers you might use. Refreshingly, the developers got it right: the plug-in is idiot-simple and works flawlessly.

This is accomplished through a central server, which means a lot of bandwidth, hardware, reliability costs, etc. In short, it’s not a completely cheap service to provide. As there is no advertising either in the plug-in or on the site (yet?) I began to wonder how they planned to pay for all this. I found my answer on their About Us page:

We are hard at work analyzing over 300 million bookmarks managed by our systems to help users discover sites that are useful to them. By combining algorithmic search with community knowledge-sharing and the wisdom of crowds, our goal is to connect users with relevant content.

Of course.

There is a lesson here: knowledge of something about someone is fundamentally different from knowledge of something about everyone. As with Google, Amazon, or really any very large database of information over millions of users, there are extremely valuable patterns that only occur between people. The idea is as old as filing, but the web takes this to a whole new level, especially if you can convince huge numbers of people to voluntarily give up their information.

So far, I haven’t said anything new. What I am suggesting is a shift in thinking. Rather than being concerned primarily about our individual privacy rights when we fill out a form full of personal details, perhaps we should be pondering what powers we are handing over by letting a private entity see these large-scale inter-individual patterns — patterns that they can choose to hide from everyone else’s view, naturally.

I am beginning to wonder very seriously about the growing disparity between public and private data-mining capability. Is this an acceptable concentration of power? What effects does this have on a society?

Weak AI Will Win

Depending on who you ask, machines taking over the world is either a good thing for humanity or a bad thing. The traditional SciFi script has advanced intelligences replicating through all the networks of the galaxy and having high-bandwidth intellectual conversations about things like the fundamental nature of physics and whether biological life deserves to continue to exist, since it’s such an out-dated evolutionary stage and all. But in his new novel Daemon, and in his talk last night at the Long Now Foundation’s lecture series, Daniel Suarez argues that it’s not hyper-intelligence at all that we need to be wary of: humanity can lose control of the situation well before the appearance of consciousness on the internet. We’re already delegating our decision making to the machines, specifically the lowly “bots” we use now for a variety of practical online tasks.


Medicine is the Killer App For Technology

I’ve met quite a few people who feel that civilization was a mistake. Technology in particular, they say, is bad in some way. If they’re an anarcho-primitivist theorist, they’ll tell you it’s alienating: it creates hierarchies, produces psychological illusions of scarcity, and turns us into little more than specialized insects. If they’re less geeky and more hippie, they’ll just expound on how happy they were living in that rural Indian village, how spiritual that life was, how much more natural a world without technology would be.

In the bright Nepali sunshine, sipping chai in a tourist cafe overlooking the lake, I found I could not agree, no matter how cute the dreadlocked girl sitting across from me. I see a lot of idealism and projection in her arguments. I also see an iPod in her bag. But neither could I come up with a concrete reason to insist that technology is fundamentally good, that the human race should invest as heavily in technology as it has. I admit that I really enjoy both the intellectual playground of technology and the fruits it brings, but that’s no way to form a moral imperative.

Until Ethiopia. I was working on a trachoma epidemiology study. This is an ancient, simple disease, and so fragile that the merest hint of civilization will destroy it — we’re not quite sure why yet. It could be that antibiotics used for other things wipe it out, or it could be that just washing your hands daily in clean water prevents its spread. But if left untreated long enough, this feeble disease will make you blind.

I had the cliché moment. I hiked out across the roadless wilderness to that idealized little village, that tiny traditional portion of the way we used to live. The simple folk gathered round us, gazing strangely at our white skin and synthetic fabrics. In turn we stared at their traditional cotton garments and coarse shiny jewelry, artifacts of a society that makes everything with its own hands. We stood a moment in that field, contemplating one another across vast distances of education and context. Then I looked into the scarred corneas of a blind young man and felt suddenly: this sucks. This man cannot see, for no reason at all. Extremely simple medicine could have prevented that.

It’s one of those moments when you realize that you’re not okay with the world as it is.

Medicine is good because health is good. I see no other way to draw this conclusion. And medicine is technological. Antibiotics are in no sense natural, x-rays and heart transplants less so. Medicine is the moral justification for continued technological development and dissemination. It’s the killer app for technology, because it’s not just medical technology that must be known: modern medicine requires an entire technological infrastructure to design and manufacture its many, many inputs. Computers. Polymers. Superconducting magnets. Refrigerators to make the ice to keep cold our collected samples, enzymes to do the PCR to detect the trachoma DNA, mathematics to do the statistical analysis to determine whether our mass antibiotic distribution is actually denting the epidemic. It takes a world to raise a hospital.

That’s the moral reason for continued technological development. That blind man. Go tell his mother that we’d all be happier as hunter-gatherers.

Of course, that’s not why we actually will continue to develop our technology.

In the late afternoon sunlight I lounged against a tree, waiting for the last few villagers to show up so we could test them. They had fed us some (traditional, natural, idealized) beer, and I was sleepy and idle. I extracted my MP3 key from my kit and put the headphones in, leaned back to something relaxed. A kid came up to me, looking expectantly. He must have been about twelve.

“MP3 player?” he said.

“Yeah,” I replied.

“How many gigabytes?” he asked. Then: “I want one.”

I find it hard to disagree with him.

The Singularity is Not Near

Blah blah blah singularity blah blah machine AI blah blah the world will undergo a paradigm shift, it’s coming, all bow down before the mighty new technologies that will change humanity forever. The problem I have with talk of the technological singularity is not that it doesn’t make sense, and not that I don’t believe that technological advancement is indeed rapid, accelerating, and world-changing, but that we have somehow invented a symbol of vast but actually rather vague significance. I don’t think the “singularity” is a useful idea. I think it’s a buzzword to some, and a religion to others.

For what makes Futurology (capitalization mine) really, actually different from a belief that something momentous will happen in 2012, when the Mayan calendar wraps around? Not a lot, as far as I can tell. And now it turns out that two religious scholars have concluded exactly the same thing, in a 2008 paper in the Journal of Contemporary Religion:

Futurology-as-religion has charismatic leaders, authoritative texts, mystique, and a fairly complete vision of salvation. Futurology is, in effect, a new religious movement (NRM).
