LOL! Worth staying with it until the end.
Other references I’ve found today in trying to understand what the hell the US government is doing:
Jeffrey Sachs (possibly best known as the author of The End of Poverty) discusses Geithner’s asset buying plan here:
There are countless preferable and more transparent courses of action. The toxic assets could be sold at market prices, not inflated prices, making the bank shareholders bear the costs of the losses of the toxic assets. If the banks then need more capital, the government could invest directly into bank shares. This would bail out the banking system without bailing out the bank shareholders. The process would be much fairer, less costly, and more transparent to the taxpayer.
And I have finally, finally found a detailed, clear, and well-documented primer on how we got into this mess in the first place. In fact it’s an entire online supplementary chapter to Stanford Professor Charles Jones’ macroeconomics textbook. It clearly explains basic concepts like bank balance sheets, liquidity crises, the role of the federal interest rate, leverage, etc. and goes through a detailed history of the last two years from a macro-economic point of view. Lots of graphs too, the recession in pictures! Highly recommended.
Despite spending the last several days reading up on Treasury Secretary Geithner’s plan to buy bad bank assets, I now feel only marginally better prepared to judge whether this is a good idea or not. Of course, no one is asking me, but I still think it’s a big problem that I can’t evaluate this plan: in a democracy, citizens need to be able to understand what their government is doing.
Now, I am no economist and I have no idea how to run a bank — much less all the banks. However, I am smart, interested, and I’ve done my homework, including previously reading a first year economics textbook (covering both micro- and macro-economics) and several other interesting books (1,2,3) on how markets work or don’t. In short I have been the model of a concerned citizen, and I still have no idea what is going on. This is partially because the situation is very complex, but it is also because there is no way a private citizen can get access to the data that would clarify matters — large banks will barely share their balance sheets with the government, much less me.
This is a problem. It means that the government, financial, and academic communities have not paid nearly enough attention either to basic economics education or to transparency in real-world business. It is therefore impossible for anyone else to check these communities’ assumptions and restrain their huge power. Lest this sound like unhelpful complaining, I promise to make a concrete suggestion for improvement by the end of this post.
Continue Reading »
Or at least help us to understand it. Climateprediction.net is a large-scale scientific computing experiment, relying on individual computer users who donate their computer time for the simulation of tens of thousands of global warming scenarios. This is important because, lacking other Earths to experiment with, computer simulations are really the only way we can validate our existing models of climate change — and then predict the future with models we think are accurate.
The climateprediction.net project comprises three separate experiments – one to explore the model we are using, the second to see how well the models replicate past climate and the third to finally produce a forecast for 21st century climate. Each model that we distribute will be used for all three experiments.
Built upon the BOINC scientific computing framework originally developed for the groundbreaking SETI@Home project, Climateprediction.net relies upon hundreds of thousands of volunteer users who donate their spare computer time. All of these machines together are effectively one of the largest supercomputers in the world, and this allows previously impossible scientific studies. The Climateprediction.net scientific team can run not just one or a few climate prediction simulations, but hundreds of thousands. One study performed this way was the Seasonal Attribution Experiment:
Continue Reading »
It is now possible to see what a person is looking at by scanning their brain. The technique, published last November by a team of Japanese neuroscientists, uses fMRI to reconstruct a digital image of the picture entering the eye, albeit at very low resolution and only after hundreds of training runs. Still, it’s an awesome development, and many articles covering this research have called it “mind reading” (1, 2, 3, 4, 5). But it really isn’t, and it’s fun to explore what real “mind reading” would imply.
When I hear “mind reading” I want psychic abilities. I want to be able to know what number you’re thinking of, where you were on the night of March 4th, and what you actually think of my soufflé. This is the sort of technology that could be badly misused, as the comments on one blog note:
Am I the only one finding this DEEPLY disturbing? It opens the doors to some of the scariest 1984-style total-control future predictions. Imagine you can’t hide your f#&%!ng MIND!
Fortunately, we’re not there yet. Moreover, if we did have the technology to read minds, we’d have much bigger societal issues than privacy to deal with. The existence of “mind reading machines” would imply that we possessed good formal models of the human mind, and that is a can of worms.
Continue Reading »
We live in a cacophony of news, but most of it is just echoes. Generating news is expensive; collecting it is not. This is the central insight of the news aggregator business model, be it a local paper that runs AP Wire and Reuters stories between ads, or web sites like Topix, Newser, and Memeorandum, or for that matter Google News. None of these sites actually pay reporters to research and write stories, and professional journalism is in financial crisis. Meanwhile there are more bloggers, but even more re-blogging. Is there more or less original information entering the web this year than last year? No one knows.
A computer could answer this question. A computer could trace the first, original source of any particular article or statement. The effect would be like donning special glasses in the hall of mirrors that is current news coverage, being able to spot the true sources without distraction from reflections. The required technology is nearly here.
This is more than geekery if you’re in a position of needing to know the truth of something. Last week I was researching a man named Michael D. Steele, after reading a newly leaked document containing his name. Steele gained fame as one of the stranded commanders in Black Hawk Down, but several of his soldiers later killed three unarmed Iraqi men. I rapidly discovered many news stories (1, 2, 3, 4, 5, 6, 7, etc.) claiming that Steele had ordered his men to “kill all military-age males.” This is a serious accusation, and widely reprinted — but no number of news articles, blog posts, and reblogs can make a false statement more true. I needed to know who first reported this statement, and its original source.
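To make the idea concrete, here is a minimal sketch of how a computer might trace a statement back to its earliest appearance. This is purely illustrative, not any existing system: it assumes you already have a dated collection of article texts, and it matches a statement against them using overlapping word n-grams (“shingles”), a standard near-duplicate detection trick, then picks the oldest match. The articles and dates below are hypothetical.

```python
import re

def normalize(text):
    """Lowercase and strip punctuation so quoting variations still match."""
    return re.sub(r"[^a-z0-9 ]+", " ", text.lower())

def shingles(text, n=3):
    """Set of overlapping word n-grams, for loose containment checking."""
    words = normalize(text).split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def earliest_source(statement, articles, threshold=0.8):
    """Return the oldest article that contains (most of) the statement.

    An article 'contains' the statement if it shares at least
    `threshold` of the statement's shingles. Dates are ISO strings,
    so string comparison orders them chronologically.
    """
    target = shingles(statement)
    matches = [
        a for a in articles
        if target and len(target & shingles(a["text"])) / len(target) >= threshold
    ]
    return min(matches, key=lambda a: a["date"]) if matches else None

# Hypothetical corpus: two articles repeating the same quoted phrase.
articles = [
    {"date": "2006-08-05",
     "text": "Reports claim he ordered soldiers to kill all military-age males."},
    {"date": "2006-08-02",
     "text": "Witnesses said the colonel told troops to kill all military-age males."},
]

first = earliest_source("kill all military-age males", articles)
print(first["date"])  # prints 2006-08-02, the earliest matching article
```

A real system would face much harder problems than this sketch suggests: crawling at scale, distinguishing paraphrase from quotation, and handling articles whose true origin (a wire service, a press release) never appears on the web at all.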
Continue Reading »