“Eager to defend the civilian control of nuclear weapons from military encroachment, John F. Kennedy and Robert McNamara had fought hard to ensure that only the president could make the ultimate decision. But they hadn’t considered the possibility that the president might be clinically depressed, emotionally unstable, and drinking heavily—like Richard Nixon, during his final weeks in office. Amid the deepening Watergate scandal, Secretary of Defense Schlesinger told the head of the Joint Chiefs to seek his approval before acting on “any emergency order coming from the president.” Although Schlesinger’s order raised questions about who was actually in command, it seemed like a good idea at the time.”—Eric Schlosser, Command and Control
After reading Eric Schlosser’s Command and Control, an exploration of nuclear weapons safety and implementation, it becomes impossible to think of a nuclear weapon as a singular thing. These bombs live in a complicated system, a network of people, technologies, and politics, which manages them; determines how they’re built, tested, stored, and activated. One cannot truly discuss a nuclear weapon without understanding these complexities, for the bomb is inseparable from this network. It is such a complicated device to design and use that it can only be born and managed within a network of support. Schlosser’s work is a fantastic introduction and case study for actor-network theory.
It’s striking that nearly1 all the threatening accidents detailed in the book aren’t due to faults within the bomb itself. The accidents are the result of the uncontrollable, complex systems necessary to manage the bombs. The causes, taken on their own, are almost always mundane: a pilot instinctively grabs a release lever during turbulence to steady himself, a socket wrench loses its socket, an aircraft’s air conditioning stops working. One time a technician loaded the wrong magnetic tape and triggered a simulation of a massive Soviet attack:
As the computer screens at NORAD filled with Soviet missiles, a Threat Assessment Conference was called. Although the pattern of the attack seemed to fit with the Pentagon’s assumptions about Soviet war plans, its timing made little sense. Tensions between the superpowers weren’t particularly high, and nothing in the news seemed to warrant a “bolt from the blue” attack on the United States. Duty officers at NORAD contacted the radar and ground stations whose sensors were relaying information about the launches. None of them had detected signs of any missiles. The NORAD computers seemed to be providing an erroneous—but highly realistic—account of a Soviet surprise attack.
As a precaution, the Klaxons were sounded at SAC bases nationwide. Bomber crews ran to their planes, and missile crews were put on heightened alert. Fighter-interceptors took off to look for signs of a Soviet attack. The National Emergency Airborne Command Post left Andrews Air Force Base without President Carter on board. And air traffic controllers throughout the country prepared to clear America’s airspace for military flights, warning every commercial airliner that it might soon have to land.
As the minutes passed without the arrival of Soviet warheads, it became clear that the United States wasn’t under attack. The cause of the false alarm was soon discovered. A technician had put the wrong tape into one of NORAD’s computers. The tape was part of a training exercise—a war game that simulated a Soviet attack on the United States. The computer had transmitted realistic details of the war game to SAC headquarters, the Pentagon, and Site R.
This genre of accident – the computer error that threatens to launch a giant volley of weapons, rather than a single warhead failing – should be especially frightening to anyone who’s ever managed a website, server, or any complex code base. And it happened several times:
At about two thirty in the morning on June 3, 1980, Zbigniew Brzezinski, the president’s national security adviser, was awakened by a phone call from a staff member, General William E. Odom. Soviet submarines have launched 220 missiles at the United States, Odom said. This time a surprise attack wasn’t implausible. The Soviet Union had recently invaded Afghanistan, confirming every brutal stereotype promoted by the Committee on the Present Danger. The United States was leading a boycott of the upcoming Moscow Olympics, and relations between the two superpowers were at their lowest point since the Cuban Missile Crisis. Brzezinski told Odom to call him back with confirmation of the Soviet attack and its intended targets. The United States would have to retaliate immediately; once the details of the attack were clear, Brzezinski would notify the president. Odom called back and said that 2,200 missiles were heading toward the United States— almost every long-range missile in the Soviet arsenal. As Brzezinski prepared to phone the White House, Odom called again. The computers at NORAD said that Soviet missiles had been launched, but the early-warning radars and satellites hadn’t detected any. It was a false alarm. Brzezinski had allowed his wife to sleep through the whole episode, preferring that she not be awake when the warheads struck Washington.
SAC bomber crews had run to their planes and started the engines. Missile crews had been told to open their safes. The airborne command post of the Pacific Command had taken off. And then the duty officer at the Pentagon’s National Military Command Center ended the Threat Assessment Conference, confident that no Soviet missiles had been launched. Once again, NORAD’s computers and its early-warning sensors were saying different things. The problem was clearly in one of the computers, but it would be hard to find. A few days later NORAD computers warned SAC headquarters and the Pentagon for a third time that the United States was being attacked. Klaxons sounded, bomber crews ran to their planes— and another Threat Assessment Conference declared another false alarm.
This time technicians found the problem: a defective computer chip in a communications device. NORAD had dedicated lines that connected the computers inside Cheyenne Mountain to their counterparts at SAC headquarters, the Pentagon, and Site R. Day and night, NORAD sent test messages to ensure that those lines were working. The test message was a warning of a missile attack—with zeros always inserted in the space showing the number of missiles that had been launched. The faulty computer chip had randomly put the number 2 in that space, suggesting that 2 missiles, 220 missiles, or 2,200 missiles had been launched. The defective chip was replaced, at a cost of forty-six cents. And a new test message was written for NORAD’s dedicated lines. It did not mention any missiles.
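The failure mode is easy to sketch in code. The message format below is invented for illustration (Schlosser doesn’t give the actual wire format), but it captures the core design flaw: test traffic reused the real attack-warning format with the missile count zeroed out, so a single corrupted digit produced a perfectly realistic alert.

```python
# Illustrative sketch only -- the field layout here is hypothetical.
# NORAD's line-check messages were real attack warnings with the
# missile count set to zero, so a faulty chip flipping one digit
# made test traffic indistinguishable from a genuine alert.

def parse_warning(message: str) -> int:
    """Return the number of inbound missiles reported by a message."""
    # Hypothetical fixed-width field: the last 4 characters hold the count.
    return int(message[-4:])

test_message = "MISSILE WARNING 0000"             # routine line-check traffic
corrupted = test_message.replace("0", "2", 1)     # faulty chip flips a digit

assert parse_warning(test_message) == 0           # correctly read: no attack
print(parse_warning(corrupted))                   # now reports 2,000 inbound
```

The forty-six-cent fix replaced the chip, but the more interesting fix was to the protocol: the new test message simply carried no missile field at all, so no possible corruption of it could look like an attack.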
The effects of mundane accidents become ridiculously amplified when connected to a network which manages non-mundane things:
After studying a wide range of “trivial events in nontrivial systems,” Perrow concluded that human error wasn’t responsible for these accidents. The real problem lay deeply embedded within the technological systems, and it was impossible to solve: “Our ability to organize does not match the inherent hazards of some of our organized activities.” What appeared to be the rare exception, an anomaly, a one-in-a-million accident, was actually to be expected. It was normal.
The complexity of such a system was bound to bring surprises. “No one dreamed that when X failed, Y would also be out of order,” Perrow gave as an example, “and the two failures would interact so as to both start a fire and silence the fire alarm.”
Such effects are well known to those who study complexity or knowledge practices. Upon finishing Command and Control I immediately went to the bookshelf and pulled down one such text, John Law and Annemarie Mol’s Complexities. From the introduction:
The process of scaling up poses many problems. Large-scale technologies usually grow out of laboratory experiments, but the process of translation is tricky because laboratory experiments are simplificatory devices: they seek to tame the many erratically changing variables that exist in the wild world, keeping some stable and simply excluding others from the argument. This often works well in the laboratory: if one does an experiment in a test tube, it is not unreasonable to assume that the air in the lab will absorb any heat that is produced. Calculation is greatly simplified by choosing to neglect a variable such as “heat.” However, it works less well when what was confined to a test tube is scaled up to become a power plant. What happens now to all that excess heat? Where does it go? And where do radioactive waste products go?
So there is scaling, and there are unpredictabilities, erratic forms of behavior. These do not fit the schemes of most sciences very well either because the latter prefer to treat with only a few variables, not too many. The problem is that what was not predictable tends to occur anyway. So how should this be handled?
The answer—one answer—is that such chaotic events are tamed by theories of chance. In being reduced to a probability and framed as a risk they are turned into something that, however erratic, is also calculable. The risk of an explosion in the factory on the edge of your town (an explosion that will take your town with it) is, say, 0.000000003 percent per annum. Now go calculate whether this is a good enough reason to be anxious!
This sort of reductionism via probability is all over Schlosser’s book. Bomb safety is measured and justified by standards expressed as odds:
For example, it proposed that the odds of a hydrogen bomb exploding accidentally— from all causes, while in storage, during the entire life of the weapon— should be one in ten million.
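A back-of-the-envelope calculation suggests why the per-weapon standard had to be so strict. The stockpile size below (~30,000 weapons, roughly the Cold War U.S. peak) is my own illustrative assumption, not a figure from the book:

```python
# Sketch: per-weapon odds vs. odds across a whole stockpile.
# The stockpile size is an illustrative assumption.

p_per_weapon = 1e-7      # one in ten million, over a weapon's lifetime
stockpile = 30_000       # assumed fleet size

# Probability that at least one weapon detonates accidentally:
p_any = 1 - (1 - p_per_weapon) ** stockpile
print(f"{p_any:.4%}")    # roughly 0.3% across the whole stockpile
```

Odds that sound vanishingly small for one bomb become a measurable chance once multiplied across tens of thousands of them.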
These odds, derived in laboratories, were used to allocate money for bomb safety. The factors which cannot reasonably be measured in the laboratory remain a problem. The mundane accidents caused by the messy world, which have been removed from the lab so that variables can be isolated, will always affect the network. Hence, as Schlosser writes:
The probabilities remained unknown. What were the odds of a screwdriver, used to repair an alarm system, launching the warhead off a missile, the odds of a rubber seat cushion bringing down a B-52?
The only issues I can recall that were contained solely within the weapons themselves are a case of an unexplained noise emanating from a thermonuclear warhead (which never posed any danger) and strips of boron disintegrating within the cores of a line of weapons, rendering them useless. Neither of these, oddly, was dangerous (at least relative to any of the other mishaps). ↩
“Now, do I agree with those sentiments? Yes. I do. I would absolutely think a person who wants to spend all his free time with the parents of his girlfriend’s ex-boyfriend should be classified as insane. I assume most rational people would feel the same way. But here’s the thing: Every single aspect of this episode is insane. The whole idea of Kramer going into business with Jerry’s dad is insane. Another “Raincoats” subplot requires George to take a little boy he barely knows to France; still another examines the ethics of making out while watching Schindler’s List. Every element of “The Raincoats” is nuts. But only Reinhold’s insane niceness is a problem. It’s the only thing that prompts Jerry and Elaine to have a straightforward conversation about how such behavior is unacceptable.”—Chuck Klosterman
With its long-term contract with Nielsen set to expire today, the Fox Television Stations Group was preparing to become the first network-owned TV station group to walk away from Nielsen in decades. While sources say negotiations continued over the weekend, the two companies were characterized as being at loggerheads over some key contractual and methodological issues, with the Fox stations considering dropping Nielsen altogether and instead using rival TV ratings service Rentrak exclusively.
A short explainer:
Nielsen is the company that decides how much TV shows are worth: which shows live and which shows die.
Nielsen “utilizes either paper diaries or combinations of paper diaries and electronic meters in all but the biggest TV markets.” This panel of roughly 20,000 households has been deemed ‘representative’ of the entire 117.5 million households in the USA. (That’s roughly one Nielsen home for every 5,800 homes).
Rentrak calculates TV measurements using set-top box data (no potentially erroneous diaries) from roughly 25 million set-top boxes.
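For a rough sense of what a 20,000-home panel can and can’t resolve, here’s a standard sampling-error sketch (the 10% rating is an arbitrary example, not a real show’s number):

```python
import math

# Sampling error for a rating estimated from a 20,000-home panel.
# The 10% rating is an arbitrary illustrative figure.

panel = 20_000
rating = 0.10                                   # true share of homes watching

std_err = math.sqrt(rating * (1 - rating) / panel)
margin_95 = 1.96 * std_err
print(f"±{margin_95:.2%} at 95% confidence")    # about ±0.42 percentage points
```

For a hit show that margin is tolerable; for a niche show pulling a 0.5 rating, it swamps the measurement, which is part of Rentrak’s pitch for using millions of set-top boxes instead.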
It seems like every post about Google Glass is dripping with bias either for or against the device, so before we get into it here’s a little transparency: I’m Google Glass Explorer #1499. I paid $1500 of my own money to get Glass, and I’ve owned the device for over a year. I thought Glass was really amazing when it first showed up, and I wrote a review after about a month and half of ownership. Once the novelty wore off though, Glass spent most of its life in a drawer, only to occasionally be dusted off to try out the newest update.
Now, after playing with the Android Wear emulator for a few months and actual Wear hardware for a few days, it’s time to call it: Google Glass is obsolete. Android Wear on a smartwatch does nearly everything Glass can do and then some, and it comes in a package that is significantly more ergonomic, convenient, cheaper, and socially acceptable. Android Wear has almost all the positives of Google Glass and none of the negatives.
Amazon’s FireFly technology recognizes 100 million objects (read: products). And there’s a dedicated button to activate it on every Amazon Fire Phone:
Using Firefly, a button on the side of the Fire Phone will instruct the camera to recognize a phone number, a book, a DVD, a URL, a QR code, and more. Additionally, Firefly will be able to listen for music (like Shazam) and identify a song that’s playing in the ambient noise around you. Amazon said that iHeartRadio, a popular app developed by Clear Channel, is already integrated with this function and will let you build a playlist based on an artist you hear and like.
Firefly can also identify TV shows down to the episode you’re watching, as well as art pieces (identifying art based on an image was something this writer desperately desired while failing an art history class in high school).
Basically, the Fire Phone can identify anything in the physical world that you can purchase from Amazon.
The new center, according to the development authority, would have a $76 million economic impact on Camden over 35 years.
But at least one person at the morning meeting questioned whether it would have any real effect on the city’s residents.
Kelly Francis, president of the Camden County branch of the NAACP, asked O’Neil whether there would be any entry-level jobs at the complex.
“We need a shooting guard,” O’Neil jokingly responded.
Camden, one of the poorest cities in the nation, has an unemployment rate of 12.3 percent, and although the Sixers are required to provide 250 jobs there to maintain the tax breaks, O’Neil said 200 of them are already filled.
The 76ers received $82 million in tax credits from the state of New Jersey.
There’s a lot of hand-wringing going on. Here’s some context on this news and why it’s an important change:
So Apple was faced with a challenge: their users’ devices were being logged without their knowledge, without their consent, all while using a hardware-based identifier. Apple’s adherence to standard network practices – broadcasting MAC addresses to WiFi hubs – created an environment where this situation could occur. So Apple made moves to change that standard practice.
Starting in iOS 8, iPhones, iPads, and iPod Touches will broadcast random MAC addresses. In Apple’s words, “The MAC address for WiFi scans may not always be the device’s (universal) address.” Companies that log MAC addresses won’t be able to connect individual visits to a single device. They’ll know someone is there, but not where else they’ve gone.
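For the curious, here’s a sketch of what generating a random MAC address involves. Apple hasn’t documented its exact scheme, so this simply follows the general convention for software-assigned addresses (set the locally-administered bit, clear the multicast bit in the first octet):

```python
import random

# Sketch of a randomized MAC address generator. This is not Apple's
# actual algorithm (which is undocumented), just the standard convention
# for marking an address as locally administered rather than burned-in.

def random_mac() -> str:
    octets = [random.randrange(256) for _ in range(6)]
    # Set the locally-administered bit (0x02) and clear the
    # multicast bit (0x01) so the address reads as software-assigned.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{o:02x}" for o in octets)

print(random_mac())  # a fresh, unlinkable address on each call
```

Since a tracker only ever sees these throwaway addresses during scans, two visits by the same device look like two different devices.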
Some have suggested that this move is a play to get more people using Apple’s own iBeacon API. This may be true. But iBeacons are much more user friendly. To see a company’s iBeacons, users must install an associated application and grant it the appropriate location permissions. Applications that use iBeacons are opt-in and users are always able to opt-out by managing their location permissions in their device settings.
“As Feynman’s friend Murray Gell-Mann said at Feynman’s memorial service, Feynman ‘surrounded himself with a cloud of myth, and he spent a great deal of time and energy generating anecdotes about himself.’”—Feynman and the Bomb
I wager that HealthBook will launch and will at least track steps, sleep, pulse, and oxygen levels. The latter two figures will be monitored with the thumbprint sensor in the iPhone 5S.
A partnership with Nike will be announced, given the recent break-up of the Nike Plus team. Expect this to speed up Facebook’s integration of Moves.
I haven’t been paying attention to speculation over the last 6 months, but I expect Mac OS X to be visually updated.
AppleTV will also be updated significantly. Gaming will be discussed and a new UI will be launched. They’ve been plugging AppleTV numbers too frequently as of late to let the current “UI” last much longer. I’d put even money on an AppleTV App Store being made available to developers.
“Fundamentally, and in the long run, the problem which is posed by the release of atomic energy is a problem of the ability of the human race to govern itself without war.”—A Report of a Panel of Consultants on Disarmament of the Secretary of State, January 1953