Terminal Madness started out as a computer bulletin board system (BBS) back around 1993. Fascinated that you could get all the information you ever wanted "on line," for free, I named the BBS Terminal Madness, and I have been hooked on tech ever since. I took my username, BrainStorm, because I love the concept of getting a group of people together to solve a problem: "brainstorming."

Apple iPad Air 2019 and iPad Mini 2019: Price, Specs, Release Date

Blink and you might have missed the early-morning tweet from Apple CEO Tim Cook—the one showing him scrawling the word "Hello" on an iPad Mini using a stylus. An updated Mini is the device that small-tablet fans have been hoping for since 2015, the year the iPad Mini 4 launched. It’s been so long since the Mini saw an update, some people presumed the product to be dead.

But it turns out the iPad Mini lives again, as does the sleek iPad Air. New versions of both were announced by Apple this morning.

The iPad Mini and iPad Air are “new” in the accurate sense of the word; they have new processors and updated displays. They also both now support the Pencil, Apple's $99 stylus. They effectively replace all previous Mini and Air tablets. But their names and their builds are throwbacks: white bezels where Apple's pricier tablets have none, a Lightning port instead of USB-C, and a familiar home button in an era when Apple is doing away with them everywhere else. The new iPads add two more models to Apple’s entire iPad lineup, which now totals five versions of the mobile computer—even as the tablet market continues to decline.

The new iPad Mini has the same 8.0-inch by 5.3-inch body as the one before it. Its display resolution is also the same, but now gets a boost from Apple’s “True Tone” color shifting technology and can display a wider range of colors. It has support for faster Wi-Fi speeds and gigabit LTE. It also runs on Apple’s A12 Bionic chip, which has its own neural engine for machine learning-powered tasks, though this isn’t quite as amped up as the A12X chip found in Apple's more expensive hardware.


The new iPad Air is a bit of a head-scratcher. First, the original iPad Air was discontinued a couple of years ago, so today's release is a true revival. Despite its “Air” moniker, it’s not even the thinnest iPad out there; the iPad Pro is a hair slimmer. The Air does weigh just a pound, which means it’s lighter than the Pros. Otherwise, the 10.5-inch iPad Air has many of the same updates as the Mini. It has a high-resolution True Tone display, the same A12 Bionic processor, the same home button/fingerprint sensor, and the same support for the first-generation Pencil. Both have eight-megapixel rear cameras, seven-megapixel front-facing cameras, and capture HD video.

Both new tablets are selling now. The iPad Air starts at $499 for 64 gigabytes of internal storage, and the iPad Mini starts at $399 for the same amount of storage. That means the Mini still isn’t the least expensive iPad available. That title goes to last year’s 9.7-inch iPad, which starts at $329, has a less powerful processor, and doesn’t have the better display found in the new models. But the new iPad Air and iPad Mini cost much less than the iPad Pros, which have a high-end-laptop price tag.

Lauren Guenveur, a senior research analyst at IDC, told WIRED that the new iPad Air is a kind of “jumping point” between the lower-range iPads and the high-end iPad Pros. It fills an important gap, especially since Apple also quietly killed off the 10.5-inch iPad Pro today. “The Air is the ‘in-between’ selling price and ‘in-between’ brand name that they needed to sell in order to get people up the stack.”

But the iPad Mini is also uniquely positioned because of its small size. It’s the kind of hardware that can appeal to a wide range of people, from game-obsessed kids, to people looking for something Kindle-sized for media consumption, to frequent flyers, to doctors and other professionals who want something (sort of) pocketable when they’re out in the field. It’s also a popular choice for small businesses looking to modernize their point-of-sale terminals. Despite Apple’s pedigree as a consumer tech company, it knows that consumers don’t upgrade their tablets all that often, and there’s opportunity in the commercial space.

“One of the only bright spots in the slate segment right now is the commercial segment, especially field workers,” says Guenveur. “They still need a device that’s slightly more capable than a smartphone in what they do.”

Guenveur added that she believes AR glasses will fulfill a lot of those kinds of in-the-field needs at some point in the near future. So if Apple was ever going to refresh its once-popular iPad models, “it’s a now or never moment in that sense,” says Guenveur.

The release of the new iPads is coming exactly a week before Apple is set to host a media-focused event at which the company is expected to reveal a new subscription service for news and potentially flex its streaming video muscle in an attempt to compete more seriously with Netflix and Amazon Prime Video. Last year around this time, Apple held an iPad event in Chicago focused on both hardware and software: The new Pencil-friendly version of the iPad was accompanied by software aimed at teachers and students.

But clearly, Apple didn’t want to mix its hardware and software announcements this time around. And the new iPads were announced with much less fanfare than usual. Last fall, the announcement of new iPad Pros was deemed worthy of a large-scale event at the Brooklyn Academy of Music. This morning, all we got was a simple and enigmatic tweet.


Original author: Lauren Goode

'The Inventor,' Theranos, and Multiplatform Schadenfreude

Over the course of 119 minutes in The Inventor: Out for Blood in Silicon Valley, Alex Gibney's new documentary about Elizabeth Holmes and Theranos, very little is made of Holmes' speaking voice. Very little needs to be; she's on camera for much of the movie, her presence stitched together from news clips, conference appearances, and a surprising wealth of leaked internal footage. Yet the absence is curious. Her cake-in-the-throat alto, which many allege is an affectation, has emerged over the past few years as one of people's favorite characters in the duplicitous saga of Theranos. The Inventor lets it stand on its own. That may be because Gibney sets out to tell a different story—but it's more likely that he simply knows you know.


From John Carreyrou's Wall Street Journal reporting and subsequent book-length exposé to ABC News' podcast The Dropout, and now to HBO's The Inventor, the story of Holmes' rise and downfall has been repurposed more than a Nanotainer's contents in Theranos' mythic (and mythological) blood-analysis machine. What it has not been is diluted. People's appetite for tonight's documentary feels just as voracious as it was for Carreyrou's Bad Blood, which came out in May 2018—and taken in sum, that hunger has become an oddly fitting literalization of the "just hook it to my veins" meme.

Gibney's treatment of the story has much to recommend it, but it's most instructive as the apotheosis of a particular cultural moment. With so many ways to consume stories, consumers are increasingly using them all in order to wring every possible microdrop of schadenfreude out of the most enduring story of all: hubris.

2019 was barely two weeks old when Netflix and Hulu each released a Fyre Festival documentary, but the two projects immediately formed into a single yin/yang of vicarious payoff. Dough-faced charlatan Billy McFarland proved the perfect villain: unrepentant, utterly mediocre, and leaving innocent caterers and organizers scattered in his wake. Viewers watched both not out of compassion for those wronged but to see retribution befall the entire continuum of asshole, from figurehead to narcissistic "influencers."

The Fyre docs were a marked departure from the usual scammer celebrations. This was a case of the haves duping the haves, in a way that made everyone else feel both righteous and entertained. Given its obvious parallels to the number-juicing scams of VC-funded startups—with McFarland as fleece-wearing tech bro and festival-goers as fleeced seed-greedy investors—it perfectly teed up a tale of a tech founder who built her legend on clouds. For anyone who had already devoured Bad Blood and was listening to The Dropout, a new documentary would surely rain hot Fyre on Theranos.

It does. Despite Gibney claiming at the documentary's San Francisco premiere that he and his producing partner had sought to explore the psychology of fraud, excoriation lurks in every corner of The Inventor. Gibney's otherwise plentiful voiceover disappears when the camera lingers over Holmes' awkward mannerisms: her unblinking eyes, her odd clapping. The film stacks up repeated instances of Holmes reciting her vision to save the world through preventative health, playing her scripted origin story for laughs. The documentary's non-Theranos characters include no wronged patients; instead they're almost exclusively gatekeepers: Phyllis Gardner, a Stanford professor and mentor who had rebuffed Holmes, only to watch the young founder celebrated for an idea she knew was impossible; Fortune's Roger Parloff and The New Yorker's Ken Auletta, each of whom had profiled Holmes before cracks appeared in the facade.


None of them are direct victims of Theranos' deceit, acting instead as proxy for the viewer. We are the ones angered by Holmes' self-importance, her bulletproof glass and armed guards. We are the ones who shake our heads at her egomania, at her Jobsian turtlenecks and self-comparisons to Archimedes. We are the ones indicting the many older men—from advisers to investors to Parloff and Auletta themselves—who seem to have been taken in by what Gardner archly underdescribes as Holmes' "charm." Gibney's portrait illuminates little of Holmes' psychology. Rather, it does much to provide more of what Bad Blood and The Dropout did: glee at the inevitable outcome. (Behavioral economist Dan Ariely is the rare exception. By articulating the way humans are prone to believe their own machinations, he does more to cast Holmes as a complex figure than the rest of the documentary combined.)

The result, which also receives a boost from Theranos whistle-blowers Erika Cheung and Tyler Shultz, is entertaining, but it feels more like a third course than a meal of its own. The arc of the story is there—as it's always been over the past 10 months of transmedia Theranostication. There's video now, and there's valuable insight, but it's in the service of complement. The Theranos story is being told not through any single text, but through an all-you-can-eat multimedia buffet.

Don't push back from the table just yet. Adam McKay (Vice, The Big Short) is attached to direct an adaptation of Bad Blood, with Jennifer Lawrence as Holmes. Meanwhile, though the company itself has dissolved, Holmes still awaits criminal trial on wire fraud charges, proceedings that will no doubt find repeated outlet through news reports, podcasts, and takes aplenty. You'll hear Holmes' voice again and again in the months to come. What you hear in it depends on how much appetite you have left.


Original author: Peter Rubin

Best iPads (2019): Which New iPad Should You Actually Buy?

Choosing an iPad is more complicated than it needs to be, but we're here to help.

Buying an iPad should be simple. You just buy whatever’s new, right? If only. Apple sells four different main iPad models, each with its own strengths. In addition, there are a growing number of older iPads floating around the eBays of the world. Since all these devices generally look the same, it’s important to know what you’re buying and what you should pay for it. This guide covers the best iPads available right now, the important differences between each iPad model, and every old iPad in existence, including the ones you shouldn’t buy at any price. Also, be sure to check out all our latest buying guides, including the Best iPhones, Best Tablets, and Best MacBooks.

Updated March 2019: We revamped our recommendations because Apple revealed a new 10.5-inch iPad Air and 7.9-inch iPad Mini. Since Apple has now partnered with Amazon, we've also included some links to Apple products on the retailer. (Note: When you buy something using the retail links in our product reviews, we may earn a small affiliate commission. Read more about how this works.)


Original author: Jeffrey Van Camp

Need an Ohm's Law Party Trick? Take a Light Bulb's Temperature

In every semester of introductory physics, an instructor (or a textbook) introduces the idea of Ohm's law. Ohm's law is a relationship between the voltage across an element, the current going through the element, and the resistance of the thing. You can write it as the following equation.

ΔV = I × R (equation image: Rhett Allain)

But what do these three quantities really mean? A full explanation would take a whole semester, so let me instead give a brief summary. The ΔV is the change in voltage (also sometimes just called voltage). It's essentially the change in energy per unit charge for some charged object to move across a region. The unsurprising unit for voltage is the volt. You can measure the voltage with a voltmeter by placing one lead on each side of the element you want to measure.

The electric current (I) is a measure of the movement of electric charges in the element. It is literally the amount of charge (in coulombs) that moves past a point per second. The standard current unit is the ampere (amp), which is equal to 1 coulomb per second. You can measure the current through a device by connecting an ammeter in a way that the same current passes through the ammeter and the element.

Finally, the resistance is really just a proportionality constant between the voltage and the current measured in units of ohms (often using the symbol Ω). For a plain copper wire, the resistance is usually extremely low—so low that you could just say it's zero ohms (but it isn't). If you break a wire so that there is an air gap, the resistance is super high—approximately infinity.
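As a quick numerical sanity check, here is a minimal Python sketch of Ohm's law run in both directions: finding the current through a known resistance, and finding the resistance implied by a voltmeter and ammeter reading. The numbers are invented for illustration, not measurements from this article.

```python
# Ohm's law: delta_V = I * R (illustrative values only)

def current(delta_v, resistance):
    """Current (amps) through an ohmic element with a given voltage across it."""
    return delta_v / resistance

def resistance(delta_v, current_amps):
    """Resistance (ohms) implied by a measured voltage and current."""
    return delta_v / current_amps

print(current(9.0, 150.0))    # 9 V across 150 ohms -> 0.06 A (60 mA)
print(resistance(9.0, 0.06))  # same numbers run backward -> 150 ohms
```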

Normal resistors come in a wide range of resistance values. (Photo of typical resistors: Rhett Allain)

And next is the disclaimer. Trust me, this disclaimer always comes next but maybe you missed it. Here it is: This expression only works for certain elements that we call "ohmic." Other materials that don't follow this are called "non-ohmic."

OK, maybe this disclaimer isn't really true. Perhaps it would be better to say that some materials have a mostly constant resistance and other materials have a nonconstant resistance. For a non-ohmic material, the calculated resistance at low current is different from the resistance at high current.

How about an example of a non-ohmic element? The filament in an incandescent bulb does not have a constant resistance. If you take a bulb and increase the voltage across it, the current increases too. An increase in current means the bulb gets hot—hot enough to glow. As the temperature increases, however, the resistance also increases.

Now for the fun part. Let's measure current and voltage to determine the temperature of a light bulb. Yes, this will be fun. Here's how it will work. I'm going to take one of these old-style tubular lights (incandescent) and connect it to a variable DC power supply (instead of plugging it into the wall). Yes, incandescents will run just fine on DC current instead of AC current. I will measure the voltage and current and then slowly increase the voltage. Eventually the filament will start to glow—like this:

(Photo of the glowing filament: Rhett Allain)

Here is a plot of the voltage and current from 0 volts to just under 30 volts (as high as my power supply went). Notice that this is NOT a linear function:

Fitting a linear function to just the low current values of the data, I get a "cold" resistance of 161.5 Ω. When the bulb is glowing, the resistance (the slope of the curve) is around 490 Ω. For many materials (like this tungsten filament), the resistance increases with temperature according to the following model.

R = R0[1 + α(T - T0)] (equation image: Rhett Allain)

This says that the resistance (R) can be calculated if you know the resistance (R0) at some other temperature (T0) along with the resistance temperature coefficient (α). For this bulb, I can assume that the resistance at room temperature is 161 Ω with room temperature at 294 K (about 70 F). Also, the coefficient for tungsten is 4.5 x 10^-3 K^-1.

Now I can just work backward. If I know the resistance of the hot bulb is 490 Ω, I can solve for the corresponding temperature.

T = T0 + (R/R0 - 1)/α (equation image: Rhett Allain)

Putting in my value for the resistance at the glowing (but not super bright) point, I get a temperature of 748 K (887 F). Yes, that's hot but not full brightness hot. If you want the bulb all the way "on," the filament would be at a temperature of about 3000 K. Instead of getting this bulb to a higher voltage, it might be simpler to use a flashlight bulb and repeat the experiment. I think I will leave that to you as a homework assignment.
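If you want to check that arithmetic yourself, here is a short Python sketch of the same back-calculation, using the values quoted above (161 Ω at 294 K, α = 4.5 x 10^-3 K^-1 for tungsten, 490 Ω when glowing):

```python
# Filament temperature from resistance, using R = R0 * (1 + alpha * (T - T0)),
# rearranged as T = T0 + (R / R0 - 1) / alpha.

ALPHA = 4.5e-3  # resistance temperature coefficient for tungsten, 1/K
R0 = 161.0      # "cold" resistance in ohms (from the linear fit at low current)
T0 = 294.0      # room temperature in kelvin (about 70 F)

def filament_temperature(r_hot):
    """Temperature in kelvin implied by a measured hot resistance in ohms."""
    return T0 + (r_hot / R0 - 1) / ALPHA

t = filament_temperature(490.0)
print(round(t), "K")                          # ~748 K
print(round((t - 273.15) * 9 / 5 + 32), "F")  # ~887 F
```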


Original author: Rhett Allain

1 Year After Uber’s Fatal Crash, Robocars Carry On Quietly

In America, 2018 was supposed to be a very big year for self-driving cars. Uber quietly prepped to launch a robo-taxi service. Waymo said riders would be able to catch a driverless ride by year’s end. General Motors’ Cruise said it would start testing in New York City, the country’s traffic chaos capital. Congress was poised to pass legislation that would set broad outlines for federal regulation of the tech.

Instead, one year ago today, an Uber self-driving SUV testing in Arizona struck and killed a woman named Elaine Herzberg as she was crossing the street. The crash derailed much of the optimism surrounding the advent of autonomy, underscoring its potential to do harm. And it ushered in a year during which the greatest promise of the technology—a drastic drop in road deaths—could feel farther away than ever.

Uber stopped testing on public roads for nine months, recalibrated its program, and now only uses one part of a Pittsburgh neighborhood to experiment with self-driving. (Arizona’s governor, who had declared the state’s roads open for testing in 2015, expelled the company after the crash.) Waymo launched its service in Phoenix, but keeps its human safety drivers behind the wheel. GM got mired in regulations and politics and stopped talking about testing in New York. That autonomous vehicle bill languishes in Congress.

And the American people aren’t waiting for the National Transportation Safety Board’s final report on the crash to make up their minds. A recent AAA survey of US adults found 71 percent are afraid to ride in a self-driving vehicle, compared to 63 percent before the crash. Axios reports even President Trump is among them.

So if you’re keeping an ear out for self-driving predictions and pronouncements, chances are you’re catching more whispers than exclamations. Amidst the hushed tones, though, you will hear more open talk about safety.

While national legislation isn’t going anywhere, the federal Department of Transportation has encouraged companies testing automated vehicles to submit “voluntary safety self-assessments”. In an ideal world, these would include detailed information on how companies structure testing. They’d provide details on crashworthiness, and how their vehicles protect occupants and road users as engineers work towards ever-elusive self-driving perfection. Critics complain many of the assessments submitted so far are less technical documents than glossy brochures stuffed with marketing-speak. Still, 13 companies have now turned them in, compared to just two this time last year.

Some companies have also made high-profile safety hires. In January, Waymo hired former NTSB chairperson Debbie Hersman as its first chief safety officer. Uber brought on former DOT safety official Nat Beuse in December. Even smaller startups, like the automated trucking company Starsky Robotics, have started to bring on more employees with robust safety engineering training—its own discipline with its own approach to building machines.

And now more than ever, the denizens of this blooming ecosystem are quick to emphasize the difficulty of making their technology work. That the vehicles have to be safe not just when they’re ready for commercial service, but while they’re testing.

“If you really want to reach a higher level of safety, you have to do a lot more than just building prototypes,” says Burkhard Huhnke, the vice president of automotive strategy at the silicon chip design company Synopsys. “Showcasing fascinating self-driving technology has nothing to do with the full solution for the problem.”

Doing all that takes real time and effort. “I don't see [self-driving technology] happening in the next five years or so—it will really take a longer time than everyone thought,” says Huhnke. “This is not an industry where quick, startup ideas develop a lot of value and can sell it to a company. It takes a while to develop safety, security, and reliability into the systems.” For the foreseeable future, that means the automated vehicle industry’s goal is staying safe while engineering to promote safety. And that means keeping the public safe, too.


Original author: Aarian Marshall

SEC: Elon Musk Fully Ignored a Key Term of Settlement

In defiance of an October settlement with the US Securities and Exchange Commission, Elon Musk did not have his tweets pre-approved by an official Tesla babysitter, the SEC says in newly filed court documents.

In a filing submitted Monday evening, lawyers for the federal agency wrote it was “stunning to learn” that “Musk had not sought pre-approval for a single one of the numerous tweets about Tesla he published in the months since the Court-ordered pre-approval policy went into effect.” (The SEC lawyers also complained in the filing that “it took more than two weeks for Musk and Tesla to concede as much.”)

The dispute dates back to August 2018, when Musk tweeted that he had “funding secured” for a plan to take Tesla private. (The electric vehicle company has traded publicly since 2010.) It turned out that was not entirely true—something the SEC objected to, given that Musk was the CEO and chair of a publicly traded company. The agency sued, and by late September, the parties had reached a settlement: Musk and Tesla would each pay a $20 million fine, Musk would step down as chairperson for at least three years (though would remain CEO), and Tesla would have its lawyers “pre-approve” any of its execs’ written communications “that contain, or reasonably could contain, information material to the Company or its shareholders.”

Then, on February 19, Musk—who often uses Twitter to discuss Tesla and engage with fans and critics alike—tweeted that Tesla “will make around 500K” cars in 2019. According to communications between Tesla and SEC lawyers that were later submitted to the court, this set off a minor kerfuffle inside the electric car company’s Fremont, California, factory, with the company’s “Designated Securities Counsel”—i.e., its official Twitter babysitter—“immediately arranging” to meet Musk there. Together, according to court documents, they drafted an update, which Musk tweeted out about four-and-a-half hours later.

That correction was not enough for the SEC, which asked Judge Alison Nathan of the US District Court for the Southern District of New York to hold Musk in contempt for failing to hold up his side of the settlement. If Nathan does so, she would decide the penalty. “If the SEC prevails, there is a good likelihood that the District Court will fine Mr. Musk and that it will put him on a short leash, with a strong warning that further violations could result in Mr. Musk being banned for some period of time as an officer or director of a public company,” Peter Haveles, a trial lawyer with the law firm Pepper Hamilton, told WIRED last month.

In court filings submitted last week, Musk’s legal team argued that the information the CEO tweeted in February wasn’t at all new—which means it didn’t fall under the purview of the settlement. While Tesla’s fourth quarter update said the company expected to deliver 360,000 to 400,000 vehicles in 2019, Musk told an analyst on the quarterly earnings call that Tesla would “produce maybe on the order of 350,000 to 500,000 Model 3s, something like that this year.”

But the SEC’s new filing seeks to draw a distinction between that statement—what it calls a “cryptic reference” that, it notes, confused some analysts—and the company's official guidance submitted to shareholders. It argues Musk should have gotten approval before disseminating that information, even by tweet, because it was new.

The SEC filing also addresses the Musk legal team’s contention that its enforcement of the settlement is a violation of the CEO’s First Amendment right to free speech. Tesla, not the SEC, would be the party reviewing Musk’s communications for truthfulness, the SEC argues. So “no First Amendment concern exists given that Musk’s speech is to be reviewed by … a private actor, and not the government.”

The SEC, at least, seems to think this court filing is convincing enough to put the whole situation to bed. “The SEC respectfully submits that, because there appears to be no disputed issues of material fact, an evidentiary hearing is unnecessary,” the lawyers write in Monday night’s filing. In other words: We win. Tesla did not immediately respond to a request for comment, but expect its legal team to disagree.


Original author: Aarian Marshall

Women's Pain Is Different From Men's—the Drugs Could Be Too

Men and women can’t feel each other’s pain. Literally. We have different biological pathways for chronic pain, which means pain-relieving drugs that work for one sex might fail in the other half of the population.

So why don’t we have pain medicines designed just for men or women? The reason is simple: Because no one has looked for them. Drug development begins with studies on rats and mice, and until three years ago, almost all that research used only male animals. As a result, women in particular may be left with unnecessary pain—but men might be too.

Now a study in the journal Brain reveals differences in the sensory nerves that enter the spinal cords of men and women with neuropathic pain, which is persistent shooting or burning pain. The first such study in humans, it provides the most compelling evidence yet that we need different drugs for men and women.

"There’s a huge amount of suffering that’s happening that we could solve," says Ted Price, professor of neuroscience at the University of Texas, Dallas, and an author of the Brain article. “As a field, it would be awesome to start having some success stories.”

Modern-day pain control is notoriously dismal. Our go-to medicines—opioids and anti-inflammatories—are just new versions of opium and willow bark, substances we’ve used for thousands of years. Although they are remarkably effective in relieving the sudden pain of a broken bone or pulled tooth, they don’t work as well for people with persistent pain that lasts three months or longer.

Some 50 million people struggle with pain most days or every day, and chronic pain is the leading cause of long-term disability in the United States. Women are more likely than men to have a chronic pain condition, such as arthritis, fibromyalgia, or migraines.

Meanwhile, pain medications are killing us. About 17,000 people die each year from prescribed opioids as clinicians write almost 200 million opioid prescriptions, or more than one for every two American adults.

The failure to include sex differences in the search for better pain relief stems in part from flawed but deep-seated beliefs. “[Medical researchers] made the assumption that men and women were absolutely identical in every respect, except their reproductive biology,” says Marianne Legato, a cardiologist who began sounding an alarm in the 1980s about differences in heart attack symptoms among women. She went on to pioneer a new field of gender-specific medicine.

The physiology of pain is just one of many ways that men and women differ, she says. But she isn’t surprised that no sex-specific medicines have emerged. The medical community—including pharmaceutical companies—didn’t appreciate the variation between men and women, including in their metabolisms, immune systems, and gene expression. "If there were differences in how their drugs worked between men and women, they didn’t want to hear about it," she says.

The Brain study came about from a unique opportunity at M.D. Anderson Cancer Center in Houston. You can’t take a biopsy of spinal tissue, but researchers were able to study clusters of sensory neurons in eight women and 18 men who had spinal tumors removed. The analysis included sequencing RNA to determine which genes are active in the neural cells. They compared men and women who had a history of chronic neuropathic pain to those who didn’t. Their pain wasn’t caused by the tumors themselves. Some patients had nerve compression causing neuropathic pain, while others didn’t have neuropathic pain or chronic pain at all.

In men who did have neuropathic pain, macrophages—cells of the immune system—were most active. In women, neuropeptides, which are protein-like substances released by neurons, were prominent. "This represents the first direct human evidence that pain seems to be as sex-dependent in its underlying biology in humans as we have been suggesting for a while now, based on experiments in mice," says Jeffrey Mogil, professor of pain studies at McGill University in Montreal and a leading researcher on sex differences in pain, who was not involved in the Brain study.

Price and his colleagues emphasize that the finding needs further study. But it suggests that a new type of migraine drug that targets a neuropeptide known as CGRP might be broadly effective for chronic pain in women, he says. Women greatly outnumber men among migraine sufferers, and women made up about 85 percent of the participants in the Phase 3 clinical trials of the three anti-CGRP drugs approved by the Food and Drug Administration in 2018. Price wonders if the anti-CGRP drugs aren’t specific to migraines—but to women. His work with mice suggests that the drugs don’t work in males, but block pain in females. "CGRP is a key player in lots of forms of chronic pain in women, not just migraine," he says.

Tailoring new medicines to men or women would be revolutionary, particularly considering that it took many years for women (and female animals) to get included in pain research at all. Fearful of potential birth defects, in 1977 the FDA cautioned against including women of childbearing age in clinical trials, which meant women used drugs solely designed for men. By 1993, the thinking had changed, and Congress passed a law requiring the inclusion of women in clinical trials funded by the National Institutes of Health. Although clinical trials now include both men and women, they often don’t report results by sex.

Meanwhile, animal researchers continued to use mostly male animals. As a graduate student in the 1990s, Mogil was killing time one day and decided to run some data separately for male and female mice—and discovered the drug the lab was testing worked only in males. When he excitedly told his supervisor, the post-doctoral neuroscientist responded, "Jeff, sex differences are to enjoy, not to study." (He spent his career studying them, anyway.)

In a 2005 review of research in the journal Pain, Mogil found that 79 percent of pain studies involved only male animals. Only 4 percent looked for sex differences. In a huge leap forward, in 2016 the NIH began requiring most animal research it funds to involve both male and female animals—and to evaluate sex differences.

What is the legacy of the gender-blind research? Mogil once emailed a researcher, asking whether a pain drug worked better in men than women. The researcher didn’t know, and couldn’t pursue the question because the data was controlled by the pharmaceutical company. Mogil was left wondering if drugs that looked promising in male-only animal studies might have failed in clinical trials when the results were blended with those in women, depriving men of a viable treatment.

Medicines that could work best for women wouldn’t make it into the pipeline at all when basic science excluded female animals. Price wonders if unresolved pain among women might have led to their higher levels of chronic pain.

The acknowledgement of sex differences in pain could stir up the field and lead to new advances. Amid the promise of "personalized" medicine, with drugs tailored to patients based on genetic sequencing, developing pain medicines for half the population seems like a no-brainer. "Now there’s a whole new frontier opening up in front of our eyes," Price says.


Original author: Michele Cohen Marill

The Evidence That Could Impeach Donald Trump

As all of Washington—and the country—await the conclusion of Robert Mueller’s special counsel probe, which could come at any moment, House Speaker Nancy Pelosi put words last week to the as-yet-unspoken consensus on Capitol Hill: Impeaching the president will be a high bar.

“Impeachment is so divisive to the country that unless there’s something so compelling and overwhelming and bipartisan, I don’t think we should go down that path, because it divides the country. And he’s just not worth it,” Pelosi told The Washington Post last week.

The comment, like so much of the Trump era, hit Washington as shocking but not surprising. It was in many ways a classic “Kinsley gaffe,” named for columnist Michael Kinsley’s definition of a gaffe as a moment when a politician inadvertently tells the truth, because her comment was obviously, demonstrably true. While the House could move to impeach the president, his conviction and removal by the Senate would require the cooperation of numerous Republicans. The political reality, as Pelosi’s comments acknowledge, is that nothing about Trump thus far has moved the GOP substantially in that direction.

After all, the Republican Party has clearly decided that the hush money payments Trump directed—a serious campaign finance felony violation—are “not worth it.”

The campaign finance conspiracy to buy up the rights to the stories of Stormy Daniels and Karen McDougal is far from the paperwork mistake that the GOP has painted it to be—it goes directly to the legitimacy of the electoral system. Michael Cohen has already shown the world evidence that makes clear the knowing involvement of the president in this scheme while he was in the White House. The president would almost certainly have been indicted personally except he’s in office, which leaves some gray area about his ability to face prosecution.

Similarly, the GOP has decided that the criminality surrounding the president is “not worth it.” For them, the fact that the man who promised to hire “the best and most serious people” has instead proven himself so incompetent a manager and leader that he’s been taken advantage of by nearly everyone close to him is not cause for concern.


In seemingly any other time, Mueller’s exposé of the sheer greed and criminality at the heart of the campaign would have been enough to upend a normal presidential administration. Because even if Mueller never shows a Russia connection to Trump, the special counsel and prosecutors in the Southern District of New York have already shown that Trump’s 2016 presidential bid was the most criminal campaign in the history of US politics, a collection of grifters working on the sly to advance their own financial interests at the expense of the United States.

To recap, the campaign chairman and deputy campaign chairman were involved in a decade-long, $65 million money-laundering scheme that defrauded the US government, banks, and taxpayers while they worked on behalf of pro-Russian interests, a conspiracy that continued right through the campaign. Meanwhile, the campaign’s national security adviser was working as an unregistered foreign agent of the authoritarian government of Turkey, and the president’s longtime adviser and lawyer was also involved in his own years-long bank and tax fraud around taxi medallions.

Such activity is not only criminal, it shows a massive disregard for the normal course of politics, societal norms, and American values. This was a campaign filled with people who were touting warm, sugary apple pie on the trail while selling slices out the back door to foreign governments and telling tax authorities that the pie plate was entirely empty.

Lastly, the GOP has clearly decided that potential kompromat on the president is “not worth it.” Because, again, we know that Donald Trump, while campaigning for president, was engaging in business negotiations with the highest levels of Russian government—and then lied about it to the American people for two years, lies that Russia clearly knew were false, leaving him exposed to massive counterintelligence risk.

It’s hard not to think that, in normal times, any one of these things would have been enough to give some members of the president’s own party pause, let alone all three.

At the same time, there’s still truth to the President’s increasingly unhinged tweet storms: There is “NO COLLUSION,” at least not yet.

None of Mueller’s indictments, guilty pleas, or court filings has yet shown evidence of “collusion,” the sound-bite shorthand that actually means a witting conspiracy against the United States in which some manner of Russian intelligence, officials, or Kremlin-linked businesspeople cooperated with Trump campaign advisers to defeat Hillary Clinton in 2016. There has been no shortage of suspicious activities so far: 100-plus contacts with Russia, Roger Stone’s odd communications with Wikileaks, Jared Kushner’s request for a secure Russian comms channel, Michael Flynn’s odd conversations with the Russian ambassador, and much more.

But Mueller hasn’t connected any of those dots yet, which is why everyone is eagerly awaiting the Mueller Report, in whatever form it may take. Nancy Pelosi’s comments last week seemed to speak out loud that which had already been baked into the capital’s political firmament and the GOP’s calculus: Sure, the president has been credibly accused of crimes, but none of them so far were that startling or astonishing.

Mueller—or the Southern District, or one of the other 18-plus investigations targeting the president—could dramatically alter the impeachment narrative in Washington in at least three ways: by outlining (1) clear evidence of a specific presidential crime, (2) a demonstrable, smoking-gun-included pattern of obstruction, or (3) demonstrable action taken to compromise American interests at the expense of advancing a foreign power’s goals, including actively conspiring with Russia in the 2016 campaign.

As the president’s tweets and his TV lawyer Rudy Giuliani continue to harp, we haven’t seen any of those scenarios unfold yet. But if Mueller or SDNY has any of that, it's going to make it very hard for the GOP line to hold.

For the first scenario—leaving aside the campaign finance allegations, which the GOP seems to have decided don’t matter and that prosecutors don’t seem inclined to push forward yet—we haven’t seen specific evidence in court filings of Donald Trump’s irrefutable personal involvement in specific crimes, either in his role as a businessman, as a candidate, or as president. If, though, there’s clear, credible, documentable evidence that the president suborned perjury, lied to the special counsel, or engaged in any manner of other crimes, it seems clear that Congress would treat that very differently, especially if it was framed in a way that Mueller, prosecutors, or the Justice Department indicate they would normally recommend criminal charges. This is partly why the reaction to BuzzFeed’s not-entirely-clear bombshell that Trump “directed” Cohen to lie hit with such impact: Within hours, impeachment calls on Capitol Hill were coming fast, and it was only the unprecedented statement by Mueller’s office that pumped the brakes.

As for obstruction of justice, pundits have tied themselves in knots over the past two years debating whether the president could be charged with obstruction of justice or impeached over the firing of FBI director James Comey, whether the president was acting within his Article II executive powers, and so on. That approach almost certainly defines Mueller’s obstruction investigation too narrowly.

Mueller appears to have been laying the groundwork for a much broader pattern of obstruction, a pattern of lies, actions, and obfuscations where the Comey firing is merely one of many related incidents—potentially dozens—that stretch across multiple years and leave no doubt of the president’s intent to obstruct. This is potentially backed up by documentary evidence like contemporaneous notes, memos, emails, or telephone calls. Mueller has expressed his interest in the Air Force One statement drafted by the president, downplaying the 2016 Trump Tower meeting, as well as potentially Michael Cohen’s coordination, if any, with the White House over his false testimony to Congress. One specific line from the special counsel’s filing in Cohen’s case might telegraph where Mueller is heading: “By publicly presenting this false narrative, the defendant deliberately shifted the timeline of what had occurred in hopes of limiting the investigations into possible Russian interference in the 2016 US presidential election.” This scenario—of a president seeking to mislead the American public—was part of the charges against Richard Nixon, after all.

To the third point, we may still see evidence that the president took an action—or tried to—for the direct benefit of a foreign power at the express compromise of American interests, either Russia or a Middle Eastern power, or that he outright accepted help from Russia during the 2016 campaign. If such a conspiracy exists and Mueller or other prosecutors are able to show that the president is elevating other nations before our own or otherwise conspiring with Vladimir Putin, it’s hard to imagine that Donald Trump’s political situation doesn’t become rapidly untenable. Any allegations in this realm would go to the core of the Russia collusion question and be all but impossible for the GOP to ignore.

To be clear, too, if they find evidence to support any one of the above scenarios, Mueller or other investigators may end up finding evidence of more than just one scenario. In some ways, the most logical outcome might be that if evidence for one exists, then evidence will exist for all three. (For instance, that if there is collusion, then the president took action on behalf of a foreign government and then also obstructed the investigation.)

Regardless, it’s worth restating that, even if he shuts shop today, Mueller hasn’t found nothing. He’s already uncovered numerous serious crimes—crimes committed by the president and his campaign and White House aides, crimes against the US government, taxpayers, voters, Congress, and the American public.

The only question is whether whatever Mueller has left to show us is, in Washington’s estimate, “worth it.”

Garrett M. Graff (@vermontgmg) is a contributing editor for WIRED and coauthor of the book Dawn of the Code War: America's Battle Against Russia, China, and the Rising Global Cyber Threat.

When you buy something using the retail links in our stories, we may earn a small affiliate commission. Read more about how this works.


Original author: Garrett M. Graff

Coders’ Primal Urge to Kill Inefficiency—Everywhere

Ho, as it turned out, had a very strict and peculiar itinerary planned. He’s fond of ramen dishes, and to fit as many as possible into their visit to Tokyo, he’d assembled a list of noodle places and plotted them on Google Maps. Then he’d written some custom code to rank the restaurants so they could be sure to visit the best ones as they went sightseeing. It was, he said, a “pretty traditional” algorithmic challenge, of the sort you learn in college. Ho showed Chang the map on his phone. He told her he was planning to keep careful notes about the quality of each meal too. “Oh wow,” she thought, impressed, if a bit wary. “This guy is kind of nuts.”

Ho was also witty, well read, and funny, and the trip was a success. They ate a lot of ramen but also drank beer ringside at a sumo wrestling match, visited the Imperial Palace, and stopped by the hotel where Lost in Translation was filmed. It was the beginning of a seven-year relationship.

Adapted from "Coders: The Making of a New Tribe and the Remaking of the World," by Clive Thompson (Penguin Press).

Oddities like the ramen optimizer have been part of Ho’s daily routines for years. As a kid growing up in Macon, Georgia, Ho owned a Texas Instruments TI-89 calculator, he told me. One day while leafing through the instruction manual, he discovered that the calculator contained a form of the Basic programming language and taught himself enough to painstakingly re-create Nintendo’s The Legend of Zelda game on the calculator. He learned Java on the computer and, after high school, went to Georgia Tech in Atlanta to study computer science. Abstract algorithmic concepts were interesting enough, but what really got him going was using computers to avoid repetitive labor. “Anytime I have to repeat something over and over,” he told me, “I get bored.”

In his final year of college, Ho started a company that created forums where students studying the same courses at different colleges could answer one another’s questions. But it didn’t amass nearly enough users, so he shut it down. He interviewed at a few companies like Google and Microsoft but sank into a funk. He didn’t want to work for someone else. As a question of value creation, being an employee was a terrible proposition, he felt. Sure, you earned a check. But most of the value of your labor was captured by the founders, the ones who owned equity. He had the skills to build something, soup to nuts. He just didn’t know what.

A few months later, he stumbled into an idea on a visit home to Macon. He went to Staples on an errand with his dad, a pediatrician who ran his own office. Ho’s father needed to buy two time clocks, those old-school machines where employees insert cards to be stamped with the time they start and stop work for the day. Each clock cost around $300.

Ho was astounded: Had time-clock technology not changed since The Flintstones? “I can’t believe this is still a thing,” he thought. He realized he could quickly cobble together a website that performed the same task, but better: Employees could check in with their phones, and the site would total up the hours automatically. “Don’t buy this time clock,” he told his father. “I’m going to code you one.” Three days later, he had a prototype. His father’s office began using the service and, to Ho’s delight, they loved it. The system was remarkably more efficient than a paper-based time clock.

He spiffed up the website, gave it a name—Clockspot—and, four months later, a law firm signed on as a client. When its first payment came through, Ho nearly jumped out of a chair at the Georgia Tech library where he was working. He was getting money for his software! Nine months later, Ho’s company was earning around $10,000 a month from cleaning companies, home-health-care-aide firms, and the city of Birmingham, Alabama. He worked nonstop for two years improving and debugging the code. Eventually he got it working so well that Clockspot was running mostly on autopilot. Besides himself, the only employee Ho needed was a part-time customer service agent. He was making a healthy income and had plenty of time for his travels and other interests. He’d optimized his life efficiency.

Jason Ho, founder of Clockspot, tries to make his life activities more efficient with code. “Anytime I have to repeat something over and over, I get bored.” (Photo: Cayce Clifford)

Like any sentient person, you’ve noticed that software is eating the world, to use venture capitalist Marc Andreessen’s famous phrase. You’ve seen Facebook swallow the public sphere, Uber overhaul urban transportation, Instagram supercharge selfie culture, and Amazon drop off your shopping within 24 hours. Technological innovators generally boast that their services change the world or make life more convenient, but underpinning everything they do is speed. Whatever you were doing before—hailing a cab, gossiping with a friend, buying toothpaste—now happens faster. The thrust of Silicon Valley is always to take human activity and shift it into metabolic overdrive. And maybe you’ve wondered, why the heck is that? Why do techies insist that things should be sped up, torqued, optimized?

There’s one obvious reason, of course: They do it because of the dictates of the market. Capitalism handsomely rewards anyone who can improve a process and squeeze some margin out. But with software, there’s something else going on too. For coders, efficiency is more than just a tool for business. It’s an existential state, an emotional driver.

Coders might have different backgrounds and political opinions, but nearly every one I’ve ever met found deep, almost soulful pleasure in taking something inefficient—even just a little bit slow—and tightening it up a notch. Removing the friction from a system is an aesthetic joy; coders’ eyes blaze when they talk about making something run faster or how they eliminated some bothersome human effort from a process.

This passion for efficiency isn’t unique to software developers. Engineers and inventors have long been motivated by it. During the early years of industrialization, engineers elevated the automation of everyday tasks to a moral good. The engineer was humanity’s “redeemer from despairing drudgery and burdensome labor,” as Charles Hermany, an engineer himself, wrote in 1904. Frederick Winslow Taylor—the inventor of Taylorism, which helped lay the groundwork for manufacturing assembly lines—inveighed against the “awkward, inefficient or ill-directed movements of men.” Frank Gilbreth fretted over wasted movements in everything from bricklaying to vest buttoning, while his industrial-engineering partner and wife, Lillian Evelyn Gilbreth, designed kitchens such that the number of steps in making a strawberry shortcake was reduced “from 281 to 45,” as The Better Homes Manual enthused in 1931.

Many of today’s programmers have their efficiency “aha” moment in their teenage years, when they discover that life is full of blindingly dull repetitive tasks and that computers are really good at doing them. (Math homework, with its dull litany of exercises, was one thing that inspired a number of coders I’ve talked to.) Larry Wall, who created the Perl programming language, and several coauthors wrote that one of the key virtues of a programmer is “laziness”—of the variety where your unwillingness to perform rote actions inspires you to do the work to automate them.


Eventually that orientation toward efficiency becomes hard to turn off. “Most engineers I know go through life seeing inefficiencies everywhere,” Christa Mabee, a coder in San Francisco, once told me. “Inefficiencies boarding your planes, whatever. You just get sick of shit being broken.” She’ll find herself walking down the street wishing people navigated the sidewalks and street crossings in a more optimal fashion. Jeannette Wing, a professor of computer science who runs the Data Science Institute at Columbia University, popularized the phrase computational thinking to describe what Mabee was talking about. It involves the art of seeing the invisible systems in the world around you, the rule sets and design decisions that govern how we live.

Jason Ho had a knack for seeing and trying to perfect those invisible systems. I met Ho and Chang in—of course—a ramen restaurant in San Francisco a few years ago. Ho was managing Clockspot, though it was ticking along so nicely by that point that he was working only a few hours a week. “He says he works 20 hours a month, but I don’t think I’ve seen him work that much,” Chang said. (The couple has since broken up, but the two remain on good terms.) Ho spent quite a bit of time traveling; once he’d even handled a Clockspot outage while at base camp on Mount Everest.

His optimizing and coding work, however, never stops. When he decided to buy a house, he wrote a piece of software into which he could dump the information for scores of homes on the market—their locations, prices, and neighborhood statistics—and the program would calculate the properties’ probable long-term value. The program’s top pick was a modern condo in Nob Hill. He duly bought it. Because he hates shopping, he bought dozens of pairs of the same T-shirt and khakis, a classic strategy for coders, since it removes the friction of decisionmaking when getting dressed.

A few years ago, Ho decided to take up bodybuilding, which presented a particularly demented optimization challenge: How ripped could he get? He carried a small scale to restaurants and weighed his food portions. “He tracked every single thing he ate in this massive spreadsheet,” Chang said. Ho sheepishly showed me the spreadsheet on his phone: a sprawling beast that plotted every ingredient in his workout meals, for a total of 3,500 calories per day. He worked out in a gym but also devised ways to squeeze exercise into whatever he was doing. If he passed a thick metal railing, he’d use it to do pull-ups; if he passed a dumpster, he’d lift it up on one edge.

After two years of training, he placed second in an amateur bodybuilding competition. He flipped through his phone to find pictures of himself from the period. In one picture he’s lightly oiled and posing in his underwear before a sunny window. He looks like a Greek statue. “I was down to about 7 percent body fat,” he said. It felt good to look so ripped, he said, but mostly he’d just wanted to see whether it was possible.

Ho showed me another chart he’d constructed. This one was a life guide, of sorts, a way of optimizing not just his body but how he devoted every waking second. He decided he wanted to spend time doing only the things where every ounce of effort was most likely to produce maximum results. He’d made 16 rows labeled with life activities. Among them: entrepreneurship, programming, guitar, StarCraft, shopping, and “spending time with friends and family.”

Then, in columns, he plotted various criteria—like whether the activity is inherently meaningful and not just a means to an end (“autotelic”), whether it “can be mastered,” whether it “impacts multiple areas of life.” For “programming” and “entrepreneurship,” Ho ticked off yes for every quality. When he came to the social realm of “spending time with friends and family,” he checked the box for “impacts multiple areas of life.” For “can be mastered,” he wrote maybe.


For a lot of people, this might seem nuts. The idea that you might want to systematize the emotional parts of life and regard social activity as a source of inefficiency is discomfiting. Ho is gregarious and outgoing, but for some coders, people and their incessant demands can be a pain in the butt, and human relations another daily hassle to be fixed. It’s a problem that technologists, back in the early days of computing, pondered with some unease. As Konrad Zuse, the German civil engineer who built the first programmable computer, is credited with saying: “The danger of computers becoming like humans is not as great as the danger of humans becoming like computers.”

I thought of this one evening when I was engrossed in a Quora thread in which dozens of coders shared tales of how they’d automated the nuances of everyday life. There were some unsettling, if morbidly fascinating, ploys for turning social contact into a set-and-forget robotized task. “I got tired of hearing ‘You never message me’ from friends and family,” one programmer wrote, so he created a script that would randomly send them texts composed from a Mad Libs–style mashup. A text would begin with this gambit—“Good morning/afternoon/evening, Hey {name}, I’ve been meaning to call you”—and then append one option from a list of endings: “I hope all has been well/I will be home later next month love you/let’s talk sometime next week when are you free.”
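For the curious, here is a minimal sketch of what a script like that might look like in Python. The greetings and endings are lifted from the post quoted above; the names and the delivery step are placeholders, since the original poster's code and texting setup aren't public.

import random

# Mad Libs-style check-in generator, per the anecdote above.
# Delivery is left as a stub; the original script's SMS hookup isn't described.
GREETINGS = ["Good morning", "Good afternoon", "Good evening"]
ENDINGS = [
    "I hope all has been well",
    "I will be home later next month, love you",
    "let's talk sometime next week, when are you free?",
]

def compose_text(name: str) -> str:
    """Assemble one randomized check-in message."""
    return (f"{random.choice(GREETINGS)}, Hey {name}, "
            f"I've been meaning to call you. {random.choice(ENDINGS)}")

if __name__ == "__main__":
    for person in ["Mom", "Dad"]:
        # A real version would hand this string to an SMS gateway.
        print(compose_text(person))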

At a hackathon in San Francisco, a middle-aged coder excitedly showed me an app he’d created that would send automated romantic messages to a partner. “When you don’t have enough time to think about her”—yep, he assumed the emotionally needy partner would be a her—“this can take care of it for you,” he enthused. These sorts of attempts to make socializing efficient go all the way up the chain to the biggest high-tech firms: Think of Gmail’s auto-complete feature, which encourages us to speed up email by having an algorithm compose our responses for us.

Linguists and psychologists have long documented the value of phatic communications—the various emotional devices humans use in everyday life to make others feel at ease or listened to: “How’s it going?” “Crazy weather, eh?” “What are you up to tonight?” The more I talked to coders, the more stories I heard of people who found that stuff as irritating as grit in gears.

Christopher Thorpe, a veteran of more than a half-dozen tech firms, told me about “an incredibly talented engineer” he once worked with who fit that bill. “He was very upset with me that we told jokes in all our meetings, because we were wasting time. ‘Why are we spending five minutes having fun with 20 people in the office? This is work time.’ Everybody is laughing—but, you know, you’re wasting all this valuable time.” The joke had frittered away the time of 20 people! This guy would begin rattling off the math: “Five minutes times 20, that’s like, you know, you’ve wasted an hour and a half of person-time on these jokes.”

The truth is, I have some sympathy for coders’ mania for optimizing daily life, because I’ve tasted those electric thrills myself. Three years ago I started working on a book about the psychology of programmers, so I decided to pick up the long-discarded coding I’d done on VIC-20s back in the ’80s and dabble in some modern languages like Python and JavaScript. The more I played around writing little scripts, the more I began to notice, and be deeply annoyed by, moments of inefficiency in my daily affairs. While writing, for example, I’d find myself frequently consulting various online thesauri. (Feel free to judge me.) They were useful but so sludgy that each time I did a search it took maybe two seconds to load the results. So I decided to write my own command-line thesaurus, using a site that offered a thesaurus API. After a quick morning of tinkering with Python, I had a script. I’d type a word into the command line and get synonyms and antonyms back with lightning speed. It was green text on black, unadorned, and crude. But damn, it was fast: No more waiting around for the browser to load a slurry of tracking scripts while cookies clogged up my hard drive.
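The script itself isn't reproduced here, but a minimal command-line thesaurus along those lines might look like the sketch below. The API URL and the JSON shape are hypothetical stand-ins for whatever thesaurus service you have access to, and the requests library is a third-party dependency (pip install requests).

import sys

import requests  # third-party: pip install requests

# Hypothetical endpoint; substitute the thesaurus API you actually use.
API_URL = "https://api.example-thesaurus.com/v1/words/{word}"

def lookup(word: str) -> None:
    """Fetch and print synonyms and antonyms for one word."""
    resp = requests.get(API_URL.format(word=word), timeout=5)
    resp.raise_for_status()
    data = resp.json()  # assumed shape: {"synonyms": [...], "antonyms": [...]}
    print("synonyms:", ", ".join(data.get("synonyms", [])) or "none found")
    print("antonyms:", ", ".join(data.get("antonyms", [])) or "none found")

if __name__ == "__main__":
    if len(sys.argv) != 2:
        sys.exit("usage: thesaurus.py WORD")
    lookup(sys.argv[1])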

Granted, the amount of time this saved me was not terribly consequential. Assuming I search for synonyms twice an hour on average while I’m writing, and assuming (generously) that my creation saved me a rollicking two seconds per search, I spared myself, maybe, one hour a year of irked waiting. Hardly worth mentioning. Still, the burn of velocity warmed my soul. Each time I searched for a synonym, the zippy results produced a surge of pleasure. I was applying the drug of efficiency to my veins, and it felt good.

Before long I’d gotten addicted to writing code for little routines. I made one to clean up YouTube transcripts that I’d downloaded; another to crawl and archive links I posted to Twitter; one that continually checked the website of my son’s elementary school and texted him when the teacher posted homework. (He was sick of hitting Refresh.)
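A sketch of the homework watcher, under similar caveats: the school URL is a placeholder, and the notification step is reduced to a print statement, since the actual script and its texting setup aren't shown. It simply hashes the page and waits for the hash to change.

import hashlib
import time
import urllib.request

PAGE_URL = "https://example-elementary-school.org/homework"  # placeholder URL
CHECK_EVERY_SECONDS = 600  # poll every ten minutes

def page_fingerprint(url: str) -> str:
    """Hash the page body so any change to it is easy to detect."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

def watch() -> None:
    last = page_fingerprint(PAGE_URL)
    while True:
        time.sleep(CHECK_EVERY_SECONDS)
        current = page_fingerprint(PAGE_URL)
        if current != last:
            # A real version would send a text here instead of printing.
            print("Homework page changed.")
            last = current

if __name__ == "__main__":
    watch()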


A lot of my little programs were badly written, barely functioning hack jobs; I picked the simplest, most brute-force way to get things done. When I looked at the code of really experienced programmers, I’d admire how much more elegantly they wrote. I’d come up with a sprawling, ugly function to sift through some data and then find that an experienced programmer could do it in a few crisp lines. (And their code ran a lot faster too.) Journalists sometimes marvel at the huge size of Google’s code base—2 billion lines!—as an indication of its might. But coders aren’t impressed by volume. Sometimes the most productive programmers are those who reduce code bases, make them shorter and denser. After three years at Facebook, an engineer named Jinghao Yan checked all of his contributions to the company’s code base and found that the math was negative. “I’ve added 391,973 lines to and removed 509,793 lines from the main repository,” he wrote on another Quora coder thread. (There are a lot of programmers on Quora, as it turns out.) “So if I coded 1,000 hours a year, that’s about 39 net lines removed per hour!”

Programming is reminiscent of poetry, where compression can confer power. “In a well-crafted poem, every single word has meaning and purpose,” as the coder and writer Matt Ward wrote in an essay for Smashing Magazine. “A poet can spend hours struggling for just the right word, or set aside a poem for days before coming back to it for a fresh perspective.” Among the most famous modernist poems, inspired by the age-old concision of haiku, was Ezra Pound’s “In a Station of the Metro”:

The apparition of these faces in the crowd;
Petals on a wet, black bough.

“In just two lines and fourteen simple words,” Ward notes, “Pound paints a striking image, ripe with meaning and begging to be devoured by scholars and critics. Now, that’s efficiency.”

Back in 2016, I visited Ryan Olson, a lead engineer for Instagram. His team had just pushed out the platform’s Stories function. It was a massive update. Olson told me about traveling around San Francisco in a blur of exhaustion mere hours after the update went live and seeing people already using Stories. “It’s a pretty cool experience,” he said. “Last night I was at the gym, and I looked over and someone is using the product. I don’t know if there’s ever been historically any other way where you could reach so many people” or where “so few people define the experience of so many.”

It’s one thing to optimize your personal life. But for many programmers, the true narcotic is transforming the world. Scale itself is a joy; it’s mesmerizing to watch your new piece of code suddenly explode in popularity, going from two people to four to eight to the entire globe. You’ve accelerated some aspect of life—how we text or pay bills or share news—and you can see the ripples spread outward.

This is how the big riches in software are often made, too, so there’s a concomitant frisson of power and wealth. Venture capitalists pour money into things they think will grow like kudzu, and the markets reward it. This nexus of motivations tends to produce, in efficiency-loving Silicon Valley engineers, not just a pleasure in scale but an absolute lust for it.


Indeed, among the royalty of Silicon Valley there’s often a sort of contempt for things that don’t scale. Smallness can seem like weakness. A few times while talking to tech bigwigs, I’d mentioned Jason Ho’s company, explaining how I found it a smart and admirable business, a perfect example of an entrepreneur nailing an unmet need. But they scoffed. To them, Ho’s Clockspot was a “lifestyle business”—Valley-speak for an idea that will never scale into the stratosphere. That sort of product is fine, sure, they say, but Google could copy it and put him out of business in a second.

Obviously we’ve benefited enormously from software engineers’ twitchy, instinctive desire to speed things up, to create plenitude. But the simultaneous, relentless drive for efficiency at scale has troubling side effects. Facebook’s News Feed speeds up how friends show us photos but also how malcontents spread disinformation. Uber optimizes car-hailing for riders but upends the economics of making a living as a driver. Amazon prepares drones for delivery of electronics over main streets denuded of stores.

Perhaps we—the folks whose lives are being so relentlessly optimized—are finally noticing these repercussions. We’re certainly complaining more about Big Tech, noticing how it outgasses civic problems, how it enrages while it enchants. We don’t quite know what to do about it; we still like the convenience, the way software constantly claims we can do more with less. But the doubts are prickling at our skin.

Maybe we’re becoming uncomfortable with how we, too, in our daily habits, have embraced the romance of hyperoptimization. Look at the scene on any city street: Employees listening to podcasts at 1.5X speed while racing to work, wearing Apple Watches to ensure they’re hitting 10,000 daily steps, peeking at work email under the dinner table. We’ve become like the coders themselves, torquing every gear in our lives to remove friction. Like any good engineer, we can make the machines of our lives run awfully fast, though it’s not clear we’re happy with where we’re going.

Adapted from Coders: The Making of a New Tribe and the Remaking of the World, by Clive Thompson, to be published March 26, 2019, by Penguin Press, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC.

Clive Thompson (@pomeranian99) is a WIRED contributing editor.

This article appears in the April issue. Subscribe now.


Beyond Cas9: 4 Ways to Edit DNA

Crispr scientists are, essentially, muggers in lab coats. When they need a new pair of DNA-slicing scissors, called a nuclease, they just steal one from a germ. But repurposing microbial machinery isn’t so simple: Some nucleases are too big; some are too blunt; some don’t work well inside human cells. So, as Crispr wends its way out of the petri dish and into our genes, the search is on for slimmer, sharper tools. With trillions of muggable microbes, there are plenty to choose from. Here are just a few, from the stalwarts to the up-and-comers.

Cas9

It started with Streptococcus pyogenes. Seven years ago, the bug that causes strep throat, toxic shock syndrome, and flesh-eating disease supplied researchers with their first gene-snipping nuclease, Cas9. Though still widely used, Cas9 isn’t perfect. For one thing, it’s bulky. To get a nuclease to its target gene, you sometimes have to smuggle it across the cell border inside a virus. The larger the nuclease is, the less editing you can squeeze into a single trip. Another problem: S. pyogenes has afflicted humans for so long that many of us may carry an immunity to Cas9, making it an iffy editing tool.


Cas12e

The US Department of Energy’s Joint Genome Institute maintains a database of microbial DNA from various unsavory places, including a Superfund site in California and a shuttered uranium mill in Colorado. A couple of years ago, after poring through 155 million gene sequences, scientists found three previously undiscovered nucleases, including one they called CasX. Now renamed Cas12e, it has a couple of things going for it: It’s tiny, which makes it easy to deliver into a cell, and it doesn't appear to be related to Cas9, which makes it less likely to trigger the same immune response in humans.

Cas12b

If you give a mouse fermented shrimp compost, you get Bacillus hisashii, a heat-loving gut bacterium with a nuclease called Cas12b. (Other microbes produce the same enzyme, but their versions don’t work well at human body temperature.) Earlier this year, in the journal Nature, researchers reported they’d created a streamlined version of Cas12b that’s especially well suited to editing human immune cells. Best of all, their lab-grown mutant is about 20 percent smaller than that first Cas9 and way less prone to off-target snipping.

Cas12a

This handy little nuclease is smaller and more accurate than Cas9, and it has shown promise in speeding up the production of biofuels and bioplastics. Last year, researchers announced that Cas12a can also screen for HPV in STD swabs. First they programmed it to seek out two cancer-causing strains of the virus. Then, in a test tube, they combined it with a kind of fluorescent alarm system. When the nuclease came into contact with HPV, it would hack apart both the virus and the alarm compound, causing the test tube to light up. Voilà, a positive result.

This article appears in the April issue. Subscribe now.

Original author: Anthony Lydgate

Preparing to Unleash Crispr on an Unprepared World

As it turned out, the mysterious sequences were an immune system. When a microbe was exposed to a new virus, it would cut a swatch of the invader’s DNA (the junk) and store it safely between two dividers (the palindromes). That way, if the virus ever returned, the microbe could simply consult its archive and dispatch the proper immune response.


The task of figuring out the details of that process fell to a later generation of scientists. In 2011, a microbiologist named Emmanuelle Charpentier determined that the Crispr scheme has three key ingredients: an enzyme that acts like a scissors, snipping the strands of the DNA double helix; a guide RNA, which tells the scissors where to cut; and a component that locks the scissors into place. The following year, Charpentier teamed up with biochemist Jennifer Doudna, and the pair asked what proved to be the multibillion-dollar question: Could they exploit this system and use it to edit genes?

The tool they ended up creating—also known, confusingly, as Crispr—not only worked, it effectively blew every existing technology out of the water. To edit a gene using Crispr, all you have to do is give your guide RNA an address corresponding to a particular location on the genome. The scissors will then snip out the selected gene, or even a tiny fragment of the gene, and insert a replacement as needed. (A natural repair mechanism automatically stitches the whole thing back together.)

The result has been transformative. For one thing, Crispr works in almost every animal that scientists have tried, from silkworms to monkeys, and in just about every cell type—kidney cells, heart cells, you name it. (Previous gene-editing techniques even had trouble with rats.) What’s more, Crispr is both fast and cheap. Before Doudna and Charpentier made their discovery, it might have taken more than a year to engineer a mouse with a single mutation. Now it can take as little as two days of work. And while the new editing technique sometimes produces typos, it’s far, far more precise than its predecessors. One scientist told me that with Crispr he needs only 10 cells to yield at least one perfect mutation. In the old days, he would have had to fiddle with about a million cells to get the same result.


Scientists around the globe have spent the past seven years honing this new tool, using it to study the underlying genetics of disease, speed up drug development, and boost the performance of industrial bacteria and cells. Now they’re poised to bring it out of the lab and into the real world. Some of their early applications are already showing promise. Two summers ago, for instance, ExxonMobil announced that it had used Crispr to double the amount of biofuel generated by the marine algae Nannochloropsis gaditana. German researchers recently found a way to create Crispr’d pigs that are resistant to African swine fever, a disease that’s been ruinous for farmers in sub-Saharan Africa.

But other uses of the technology have been more disturbing. Last November, a Chinese researcher named He Jiankui announced the birth of humanity’s first gene-edited babies, twin girls with a Crispr’d version of the CCR5 gene, which he claimed gave them immunity to certain strains of HIV. (The fact that he made his change at the embryo stage means the girls will pass on their edited DNA.) The experiment was widely condemned as unethical, unnecessary, and potentially dangerous; Chinese authorities called it “abominable.” But it also augured the next phase of Crispr’s development—from a universally embraced lab tool to one with the potential to permanently alter species, ecosystems, and people.

That phase will bring with it a slew of new ethical and regulatory decisions. If we are to find our way through them, we’ll need a firm grasp of the facts and an accurate understanding of Crispr’s many benefits and risks. But we’ll also need to confront a difficult question: How far do we, as individuals and as a society, want this technology to go?

Jennifer Kahn (@JenniferMKahn) wrote about the nonprofit Ocean Cleanup in issue 26.10.

This article appears in the April issue. Subscribe now.


Better Living Through Crispr: Growing Human Organs in Pigs

Belmonte was a young scientist at the time, working in a lab in Heidelberg, Germany. He was transfixed by the mysteries of gene expression—the biological signals that govern how an animal develops—and the pure potential that lurked in embryonic cells. Take any vertebrate: a chicken, a pig, a human. At maturity, they are dramatically different organisms, but they start out nearly identical. Belmonte began to wonder: If a mouse limb could live on a chicken’s wing, what else might be possible? How else might scientists alter the signals that dictate what a creature becomes?


Belmonte’s interest in the malleability of destiny was, on some level, personal. The child of poor, barely educated parents in rural southern Spain, he had been forced to drop out of school for a few years as a young boy to support his family with farmwork. Only as a teenager did he return to the classroom—at which point he promptly set off on a rapid trajectory from philosophy (Nietzsche and Schopenhauer were favorites) to pharmacology to genetics.

By 2012, Belmonte was one of the world’s preeminent biologists, running his own lab at the Salk Institute in La Jolla, California, and another one in his native Spain. Like his colleagues all over the globe, he was pondering how to make use of a powerful new tool in the discipline’s arsenal—the Crispr-Cas9 gene-editing platform. After the first major Crispr papers appeared, Belmonte quickly set his sights on an audacious target. In the US alone, around 100,000 people are on a waiting list for an organ transplant at any given time, and some 8,000 of them die each year for lack of a donor. As Belmonte saw it, Crispr and chimeras could be a solution. He hoped to use the new gene-editing technique to fool the bodies of large livestock into becoming incubators for human hearts, kidneys, livers, and lungs.

Belmonte’s exploratory research started in mice. Using Crispr, he and his team deleted the genes that allowed the animals to grow several organs, including eyes, a heart, or a pancreas. Rather than let these maimed mouse embryos develop on their own, the Salk researchers injected some rat stem cells into the mix. Lo and behold, the rat cells replaced the missing organs, and the animals lived a normal murine lifespan. By 2017, Belmonte and his colleagues had moved on to bigger test subjects. They injected human stem cells into 1,500 ordinary pig embryos, then implanted those embryos into sows. Within about 20 days, some had developed into people-pig chimeras. It was a modest success. The embryos were far more pig than person, with approximately one human cell for every 100,000 porcine cells. But the experiment was nonetheless a major milestone: They were the first chimeric embryos ever created by merging two large, distantly related species.

Much as he did with mice and rats, Belmonte plans to use Crispr to switch off a pig’s propensity to create its own organs, then fill the gap with human cells. But the second step—getting the human cells to take root in pigs at higher rates—has proved devilishly hard. “The mouse-rat efficiency is quite good,” Belmonte says. “Human-pig efficiency is not so high. So that is a problem.” Today, Belmonte’s lab is slogging through an arduous process of trial and error, testing how different animal and human cells interact when combined, in hopes that they can apply what they learn to pig-human chimeras. But even that slog is, by the research standards of just a few years ago, proceeding at lightning speed. With conventional methods, Belmonte says, “it would take hundreds of years. But thanks to Crispr, we can move quickly to many, many genes and modify them.”


If Crispr has helped to supercharge the ambition of Belmonte’s work, it has also sent him careening into some of the thorniest ethical terrain in science. The ancients regarded chimeras as bad omens, and modern Americans have been similarly spooked by them—especially those that blur the line between human and animal. In his 2006 State of the Union address, President George W. Bush ranked the creation of such hybrids as among “the most egregious abuses of medical research.” In 2015, Belmonte learned that he was in the running for a Pioneer Award, one of the National Institutes of Health’s most prestigious and generous grants; then he found out that his application was on hold, he says, because of his chimera work. That same year, the NIH suspended federal funding for any studies that introduce human stem cells into animal embryos, saying it needed time to think through the ethical issues. A year later, the agency announced plans to lift the moratorium and opened the idea to public comment; 22,000 responses flooded in. So far, funding is still on pause. (Belmonte eventually won a Pioneer Award, but still carried out much of his pig research in Spain with private funds.)

John De Vos, director of the Department of Cell and Tissue Engineering at Montpellier University Hospital and Medical School in France, has no trouble envisioning worst-case scenarios involving pig chimeras. If too many human cells make it into a pig’s brain, for instance, the animal could theoretically develop new kinds of awareness and intelligence. (In 2013, scientists in Rochester, New York, injected mice with human brain cells, and the mice turned out smarter than their peers.) “It would be horrible to imagine a form of human consciousness locked in the body of an animal,” De Vos says. What if scientists inadvertently created a pig able to intellectualize its own suffering, one with a sense of moral injustice? Even if you could accept killing a farm animal to harvest its organs—which many animal welfare activists don’t—surely it would be monstrous to kill one with humanlike intelligence to go along with its humanlike pancreas.

Belmonte offers a straightforward solution to this problem: more Crispr. Using gene editing, he says, researchers can prevent a human cell from colonizing the brain of a pig. Similar interventions could keep human DNA from entering the porcine germ line—proliferating into future piglet-people for generations to come—another scenario that has made bioethicists especially squeamish. “In the laboratory,” Belmonte says, “we have technologies that could avoid those ethical concerns.”

A wiry 58-year-old, Belmonte has a dimpled smile, narrow eyes, and a gentle but energetic demeanor. Chimera research is, as it happens, only one major front that his lab is exploring with Crispr. He and his team have also performed a slew of experiments in epigenetic editing—a variation on Crispr that modulates gene expression rather than hacking away at the DNA sequence itself. With it, they have reversed the symptoms of diabetes, kidney disease, and muscular dystrophy in mice. For good measure, they’re also trying to rewind the aging process itself.

“He is pushing boundaries on the things we can do nowadays,” says Pablo Juan Ross, a professor in the department of animal science at UC Davis, who has been conducting chimera experiments with pigs and sheep in his own laboratory. Both scientists are keen on proving the value of gene editing and chimeras. With such a dire need for human organs, Ross asks, wouldn’t we rather have technology that can develop them on demand with animals, instead of waiting for the next teenager to die in a car accident?

But while he is eager to show what might be possible, Belmonte is not particularly impatient to see his research leave the laboratory. He opted to destroy his fetal pig chimeras during their first trimester, before they could develop into anything more ethically confounding—despite the fact that in Spain, where they were grown, regulations would have allowed Belmonte to euthanize the animals after bringing them to term. And he is altogether wary of editing genes in people. “We need to know much more before we can use Crispr in a human being,” Belmonte says. “I wouldn’t dare to move it outside of the lab yet.”


The science itself isn’t the only thing that needs to progress. There also has to be a thorough debate about gene editing, Belmonte says; scientists like him must have a strong voice in it, but so should physicians, the public, and the government. De Vos agrees. “Einstein did basic research in physics,” he says. “But it was at a country level that it was decided to apply these findings to bomb Hiroshima—not at the scientist level.”

Still, the clock is ticking on that debate. Belmonte firmly believes that scientists today are on the cusp of curing diseases, reversing aging, and saving lives with homegrown organs. “What we are talking about is not like taking an aspirin,” he says. “It could change our own evolution, our own species.”

A revolution in the culture at large—or at least a reckoning—will lag not far behind, whether we like it or not. “We change our values depending on the facts that are presented to us,” Belmonte adds. “That’s the way societies have evolved.” Given the pace of new developments in biology, we already have a lot of catching up to do.

Erika Hayasaki (@ErikaHayasaki) wrote about a scientific effort to dial down human pain in issue 25.05.

This article appears in the April issue. Subscribe now.


A More Humane Livestock Industry, Brought to You By Crispr

Cow 401 and her herdmates were the product of two and a half years of research, Van Eenennaam’s attempt to create a strain of gene-edited cattle specially suited to the needs of the beef industry. Had everything gone as planned, all the calves in this experiment would have been born male—physiologically, at least. Like humans, cattle carry two sex chromosomes; those born XX are female, and those born XY are male. But it isn’t the Y that makes the man. It’s a single gene, called SRY, that briefly flickers to life as an embryo grows and instructs it to develop male traits. Using Crispr, Van Eenennaam’s team added a copy of SRY to the X chromosome too. That way, even if a cow was born genetically female, she’d be expected to appear male all the same. Since beef ranchers generally prefer males to females (more meat for the money), Van Eenennaam believed there could someday be a market for these Crispr’d animals.


More than that, though, the project was a proof of concept. One of Van Eenennaam’s goals is to make the raising of livestock not only more efficient but also more humane. If a calf’s sex could be altered with a copy-paste of a single gene, that might pave the way for all kinds of experimentation—and not only in the beef business. Although ranchers may prefer male animals, their colleagues in the egg and dairy industries favor females. Since bulls can’t make milk and roosters can’t lay eggs, it’s cheaper to destroy them than raise them to adulthood. But if you could ensure that only heifers and hens are born, the carnage wouldn’t be necessary.

The Davis team wasn’t yet sure what had gone wrong with the pregnancies. They’d done their work with such care. First they located a target area on the bovine genome and created a custom set of Crispr scissors to cut the DNA and insert the new gene. Then they took a trip down the interstate to a slaughterhouse in Fresno, where they purchased a fresh batch of ovaries. Back in the lab, they aspirated the eggs, fertilized them, and set their Crispr scissors loose. They let the resulting embryos grow for a week, biopsied them to make sure the edits had gone as planned, then froze them until the cows were ready for implanting.

Perhaps, Van Eenennaam thought, the arduous process had simply knocked the life out of the embryos. “Science is a bitch,” she said with a shrug. But there was a more troubling possibility—an issue with the gene edit itself. On a map of the bovine X chromosome, the location where they’d inserted SRY seemed to be within a stretch of extraneous code, far from any life-critical genes. But then again, the map they currently had was about as accurate as a 16th-century atlas of the New World, full of unknown and mislabeled territories. Maybe, by tinkering in the wrong place, they had arrested development in the womb.

Alison Van Eenennaam at the UC Davis Beef Barn.


Twenty-five years ago, Van Eenennaam was a student at Davis in the early days of the GMO craze. Animal scientists, long limited by the pace of traditional trial and error breeding, could now mix and match genetic traits from different organisms, giving their livestock strange new powers. At Davis, for instance, they engineered a line of goats that carried a human protein called lysozyme in their milk. (Later on, researchers realized that, when fed to children in the developing world, that milk could prevent diarrhea.) As a young faculty member at Davis in the mid-2000s, Van Eenennaam explored a method for modifying cows to produce milk with extra omega­-3s. Then, just as she prepared to begin experiments in actual cattle, she says, the money dried up.

Around that time, the Food and Drug Administration had decided to classify genetic modifications to food animals as veterinary drugs. At specific issue were transgenes—DNA ported from one species into another—which, in the agency’s view, altered “the structure or function” of the animal. This meant that scientists would have to submit to an expensive approval process before anything reached the grocery store. There were calls for reform, but policymakers lacked the will to implement regulatory changes that would both promote research and assuage people’s growing fears about GMOs. With no path to commercialization in sight, and with the looming threat of a public backlash, the institutions that had funded the work ended their support. Only one animal from that period, the AquAdvantage Salmon, has since been approved for human consumption, though no one in the US is eating it yet, owing to regulatory hand-wringing over how it should be labeled. The lysozyme goats still amble, idly, around a pasture on the Davis campus.

Van Eenennaam argues that Crispr experiments like hers—those not involving transgenes—should be treated differently. As she sees it, the technology is just a faster, more precise version of what farmers have done for centuries, because it makes changes that could have occurred in the organism on their own. The US Department of Agriculture, which oversees gene editing in plants, appears to share this view; in March 2018, it decided, in most cases, to regulate this use of Crispr like it does traditional breeding methods. But the most recent guidance from the FDA, issued in January 2017, seems to lump gene editing in with the old GMO techniques. That’s because, as the agency sees it, both approaches present similar risks, not only to people but also to animal welfare—something the USDA doesn’t have to consider. Van Eenennaam worries that the same fears and heel-dragging as before could scuttle the field before it has a chance. “The engineering debate killed my career,” she says. “Now this editing debate has the potential to kill my students’ careers.”


For all the anxiety and ambiguity surrounding Crispr, there’s little doubt that it could revolutionize farming as Van Eenennaam hopes. In January, British researchers announced plans to raise chickens with an immunity to influenza. A small genomic incision, they hypothesized, could prevent the virus from infecting its hosts. That would not only save chickens from untimely demise but also cut out a likely conduit for a devastating human pandemic. You may not like the idea of Crispr meddling with grandma’s chicken pot pie recipe, but would you relent if it could stop the next Spanish flu?

“I’d hope so,” says Randall Prather, a geneticist at the University of Missouri. His lab has raised pigs that are resistant to porcine reproductive and respiratory syndrome, or PRRS, an untreatable disease that costs the US swine industry more than half a billion dollars each year. The solution, he says, comes down to modifying as few as two DNA base pairs out of 3 billion. Prather licensed the technology to a British company called Genus, which says it expects to spend tens of millions of dollars on the FDA approval process.

Yet not all Crispr experiments in livestock offer such unambiguous benefits. Many merely aim to improve efficiency, speeding up the process that gave us broiler chickens four times the size they were in Eisenhower’s day. That fuels perceptions that gene editing will only encourage the worst inclinations of factory farming. In Brazil, for example, scientists recently bred Angus cattle that carry a heat-tolerance gene called Slick. While this could eventually be a path to readying the global cattle industry for climate change, for now it likely means that the Brazilian Amazon will have to support even more cows than it already does.


Robbie Barbero, who led efforts to modernize biotech regulations in the Obama White House, says that it’s time for the FDA to offer some clarity. “In the absence of a regulatory path that’s rational and easy to understand, it will be almost impossible for any animals to make it to market,” he says. With transgenes, he argues, it was possible to wrap your head around the logic of regulating changes as drugs. “But when you’re talking about regulating changes to the genome that could’ve happened naturally, you’re asking to stretch the imagination,” he says. The draft guidance, Barbero notes, was intended as a starting point, not the final word.

If and when the FDA decides to weigh in, says Hank Greely, a bioethicist and professor of law at Stanford, it will have to reckon with the unique risks of gene editing—that an edit might produce new allergens, for example, or spread from livestock to their wild cousins. His underlying fear, however, is “the democratizing nature of Crispr.” An argument against GMOs was that the expense of creating them would consolidate power in the hands of wealthy multinationals; a company such as Monsanto would spend millions engineering a new transgenic crop, then sell it to struggling farmers at an exorbitant price. But the remarkable ease of gene editing, Greely says, could have the opposite effect. It could push certain rogue actors—say, “a guy with a dog kennel or a biologically sophisticated rancher”—toward cavalier, DIY experimentation. That’s why Greely thinks researchers should be required to register their edits.

For now, though, political momentum appears once again to have stalled. That’s left nascent projects, like Van Eenennaam’s, waiting for answers.

If there is a purgatory for gene-edited cattle, it can be found in the Davis Beef Barn, which is home to six young penitents. About five years ago, their father, a bull, was genetically dehorned by a Minnesota-based company called Recombinetics. Just as egg farmers prefer hens, dairy farmers prefer polled, or hornless, cows. Often they’ll prevent the horns from growing by burning them off with a hot iron or applying caustic chemicals. So, using a Crispr-like technology known as Talens, Recombinetics gave the bull two copies of the polled variation, in the hope that none of his descendants would have to undergo the procedure.

Five of those hornless descendants turned out to be male, which meant they wouldn’t be much use to the dairy industry anyway. Van Eenennaam has asked the FDA for permission to sell them as food. “They’re either all going to be incinerated or they’re all going to become steaks,” she explains. One of the bulls gently sniffs her fingers through the wooden slats of the pen. “Sorry to talk about this in front of you guys.”

Princess, the lone polled female, is hanging out a few pens away. Before she and her brothers can be introduced into the food supply, the FDA requires that they pass a range of tests, both genetic and physical. Their gene-edited uncle supplied the meat for quality testing; now Princess will be bred so that, when her milk comes in, it can be analyzed. But Van Eenennaam says the agency hasn’t told her clearly what results it is looking for, almost as though it’s searching for the risks it wants to regulate. For instance, the FDA asked her to confirm, via full genome sequencing, that there had been no unintended edits that jeopardized the animals’ safety. But sequencing the same genome 20 times over, as Van Eenennaam did, will turn up slightly different results with each pass. And besides, she says, even if you could pinpoint any errant edits, what would they tell you about the animal’s health? She advocates a wait-and-see approach: “There’s a natural evaluation process called ‘living’ that will weed out anything that’s weird.” (The FDA does not comment on pending applications.)


Even as Van Eenennaam and her calves are hung up in regulatory limbo, she is looking ahead to the next step in the process: scaling up genetic improvements on the ranch. Unlike pigs and chickens, whose reproduction is strictly controlled, beef cattle tend to procreate unsupervised, out on vast grazing ranges. This makes it hard to ensure that desirable traits, like swift growth or well-marbled meat, get passed down. Van Eenennaam thinks she’s found a solution. She plans to take a group of bulls, knock out the gene that allows them to create sperm, and swap in a replacement from a superior animal—perhaps even one that carries the edits for hornlessness or all-male offspring. The result would be ordinary bulls with, as Van Eenennaam puts it, “excellent balls.” Rather than spreading their own mediocre genes, they’d spread the elite genes of others—and they’d do it faster than ranchers could manage on their own.

Van Eenennaam and her colleagues are also focused on getting their earlier experiment working. After the disappointment of the pregnancy checks, they soon came up with two possible explanations for what went wrong: Either they inserted the SRY gene in the wrong place or they damaged the embryos in the lab—perhaps during the biopsy, when they were checking to see whether the edit took. In the next stage of the project, they’ll investigate both possibilities at once. First, they will insert SRY into a completely different chromosome, at a location where other researchers have successfully dabbled in mice. But this edit will be different from the last one: It will include a gene, borrowed from a jellyfish, for red fluorescence. If the insertion is successful, the cells will simply glow, no biopsy required.

It’s not an ideal solution. If all goes well, Van Eenennaam won’t have gene-edited cattle, as she originally intended; she’ll have a transgenic herd. So while she’d hoped to get the FDA’s blessing to sell the animals at the end of her research, she now plans to incinerate them instead. Even the mothers, which naturally share small amounts of genetic material with their offspring, could be considered tainted. “I’ve been resisting putting a transgene in,” she says. “But we’re just going to have to bite the bullet and kill them and their mothers and everything that touches them.”

Van Eenennaam does the math: $15,000 to buy 10 cows from a local rancher, plus $8 a day, each, to pasture them until a Christmas birth. Her grant will have ended by then, and she worries she won’t get another one.

Gregory Barber (@GregoryJBarber), a WIRED staff writer, wrote about selling his personal data on the blockchain in issue 27.01.

This article appears in the April issue. Subscribe now.


The Read/Write Metaphor Is a Flawed Way to Talk About DNA

In 2014, chemist Floyd Romesberg, of the Scripps Research Institute, synthesized a new pair of artificial nucleotides and got a cell to accept them as part of its genetic code. In metaphorical terms, he extended the alphabet of life.

To review, the DNA molecule is built from four nucleotides, or “letters”: adenine (A), thymine (T), guanine (G) and cytosine (C). Each letter is one half of a pair—A always goes with T, and G with C—and each pair forms a single rung of the molecule’s twisted ladder. Romesberg’s team, after years of work, synthesized a third pair—X and Y—and inserted it successfully into the code of a bacterium, which then reproduced, maintaining its synthetic code. Life on Earth depends on a four-letter code; Romesberg had invented a life-form with six. In 2017, he updated the accomplishment, optimizing and stabilizing the cell. More important, he showed that the cell could express a novel protein. “We stored information, and now we retrieved it,” Romesberg told The MIT Technology Review. “The next thing is to use it. We are going to do things no one else can.”

Discussing his project with The New York Times in 2014, Romesberg used a metaphor: “If you have a language that has a certain number of letters,” he said, “you want to add letters so you can write more words and tell more stories.” In his TED Talk, he extended this metaphor, asking the audience to imagine a typewriter with only four keys. Wouldn’t six keys be better? Couldn’t you say more? The metaphor seems flawed to me. It may be that new nucleotides = new amino acids = new organisms, but it does not follow that new letters = new words = new stories. I know lots of writers, but none of us have been thinking, “If only there were 30 letters in the alphabet, then I could finish my novel, The Story of Jimβθ!”

George Estreich is the author of The Shape of the Eye: A Memoir. His writing has appeared in Tin House, The New York Times, Salon, and other publications.


Romesberg is careful to separate himself from wings-and-ultraviolet-vision transhumanism. His declared goals are squarely, soberly medical: a six-letter alphabet could code for a larger complement of possible amino acids, which could be assembled into proteins not found in nature, which might be useful as medicines, which Synthorx, the company Romesberg cofounded, hopes to develop for profit. And yet Romesberg’s metaphor points to a tension between expansive and restrictive views of technology. On the one hand, to “tell more stories” can be glossed as “creating as many novel life-forms as possible.” On the other hand, Romesberg defined those “stories” in familiar, clinical terms—curing disease, including cancer—and the cells, Romesberg noted, would remain obediently in the lab, dependent on a diet of artificial nucleotides to stay alive. Their meaning would be circumscribed, contained, their lives kept safely in vitro. Romesberg’s rhetoric walks a familiar line between old and new, “natural” and “synthetic.” In this, it mimics the application it serves, which splices old and new nucleotides, natural and artificial, together. But more significant is the fact that Romesberg uses metaphor at all: that he uses literary techniques to persuade and does so self-consciously, reflecting on the materials of meaning. He behaves as, is, a writer.

For high-profile scientists, the ones who speak to lay audiences, write popular books, and deliver TED Talks, metaphor is a key persuasive tool. The right metaphor can soothe fears, explain the recondite, and familiarize the unfamiliar. It is scary to say, “We want to create, not only new life, but a new kind of life, one fundamentally different from every single organism that has ever lived.” It’s less scary if creating new life-forms is just like telling stories. We associate stories with entertainment, meaning, and self-expression. Like the vaguely positive keywords anchoring ads for noninvasive prenatal testing (health, choice, empowerment) or de-extinction (revive, restore), story shines a rosy, apolitical light on a technological development, familiarizing the new.

Beneath the metaphor of a story is another, one so ubiquitous as to go unnoticed: that DNA is a language, one which we “read,” “write,” and “edit.” This is closely related to other information metaphors: that DNA is a code, or software. As Hallam Stevens explains in Biotechnology and Society: An Introduction, “[h]istorians have documented how ‘information’ and ‘code’ came to be powerful metaphors in molecular biology in the 1950s and 1960s.” Stevens notes the pervasiveness of the metaphor—“It is hard to imagine it any other way”—but notes that it is not inevitable: “After all, the As, Gs, Ts, and Cs are not like English and Japanese. They are not really a language. Nor are they really a code … it is important to remember that information and code are metaphors rather than literal descriptions of how biology works on a molecular level.” Historian of biology Lily Kay, writing in 1998, noted both the use and limits of the metaphor:

There is no way of avoiding metaphors and analogies as heuristics in the production of knowledge, biological or otherwise, and the information discourse has been particularly powerful and productive. But metaphors have their limits, and analogies should not be confused with ontologies.

A first step toward clarity, toward disentangling the categories of life, language, and code, is to not take metaphors literally: to recognize when metaphor is being used, to explore its implications, and to recognize its limits. One limit of the DNA = story metaphor has to do with the way reading works. Reading is linear and one-dimensional. When we read a novel or poem or essay, we read one word at a time, in order, and even when we reread books, we understand them in terms of the sequence of the whole. It matters that chapter 1 comes first and chapter 23 comes last. But in the Book of Life, the linear paradigm doesn’t hold. Nathaniel Comfort writes, “The old metaphor is not wrong; it is incomplete. In the new genome, lines of static code have become a three-dimensional tangle of vital string, constantly folding and rearranging itself, responsive to outside input.”

If DNA is three-dimensional, in constant interaction with its cell, if it is constantly being “read” in different locations simultaneously, and some of the “reading” affects other “reading” in a dynamic, at-present-incalculably-complex set of interconnected feedback loops, then the “reading” looks very different from human “reading.” So the metaphor has explanatory value, but its value is limited. (In a short article on genes and metaphor, John C. Avise acknowledges the practical use of seeing genomes as information. He also offers other metaphors—ecosystem, community, city—and argues that “metaphors can and should evolve to accommodate new findings.”)

There are other differences. In books, the definition of every word is known; we do not know the function of every gene. Further, we expect books to be densely coherent. Even when they stretch and sprawl and wander, they still make sense—however “sense” is defined—at every level of form, from word to sentence to paragraph to chapter. They don’t typically contain long, random character strings. But as Comfort writes, the book of life is a mess—“[I]f a genome is text, it is badly edited. Most DNA is gibberish.” Lily Kay notes that the parts that aren’t gibberish remain difficult to interpret:

Once the complexities of DNA’s context-dependence—genetic, cellular, organismic, and environmental contexts—are taken into account, pure genetic upward causation is an insufficient explanation … And when epigenetic networks are included in the dynamic processes linking genotype to phenotype (e.g., post-translation modifications, cell-cell communication, differentiation, and development), genetic messages might read more like poetry in all their exquisite biological nuances and rich polysemy.

Every metaphor breaks down somewhere. To have a story, and to be one, are not the same. George W. Bush can have a story, and so can Lassie, or a tapeworm. But none of these creatures is a story, something designed deliberately and in molecular detail by a single creator, written into existence, letter by letter, word by word. So when Romesberg says, “[Y]ou can write more words and tell more stories,” his metaphor assumes (and normalizes) the idea that it is acceptable to design new creatures in the first place. It lessens the difference between the evolved and the designed: all are “stories.” And it subtly shapes the hearer’s sense of the technology in question, making it seem more powerful and more certain. The scientist, at his “typewriter,” taps out a new “story.” The metaphor emphasizes human intention and interpretive certainty, a message with a clear meaning, reliably reproduced.

Turning Romesberg’s rhetorical you to a literal one, I would ask, If a new story is a new creature, then what stories do you want to tell? We have no cultural limit on stories, on their complexity or intricacy: will there be any limits on the stories told with the new letters, or on their ability to replicate, or on the ability of the designed creatures to interact with the evolved? Who will be our storytellers, and what will they believe?

In Romesberg’s formulation, new organisms are new stories, a conceit made possible by the root metaphor that DNA is a language. But a project completed around the same time by the J. Craig Venter Institute took the metaphor a step further. In his book Life at the Speed of Light, Craig Venter himself—the brash, iconoclastic scientist and entrepreneur, and the institute’s founder—described his project as the first “synthetic cell”; it was named Mycoplasma mycoides JCVI-syn1.0, but it acquired the nickname “Synthia.”

You can tell a lot about a biotech application by the way it’s named (“noninvasive,” “de-extinction”), and Venter’s new cell is no different: its formal name highlights the merging of the biological and digital. By hybridizing Linnaean and digital terminology, Venter indicates his view that we are at “the dawn of digital life,” when life, because it can be translated into digital code, can “move at the speed of light.” The name also denotes authorship and intellectual property: Venter’s initials are inscribed in the organism’s taxonomical name (JCVI, for J. Craig Venter Institute).

As many have pointed out, Venter did not synthesize an entire cell. Instead, his team began with the genome sequence of one bacterial species (M. mycoides), altered the sequence on a computer, built it from scratch, and implanted it in the cells of a different bacterial species; the assembled genome then took over the new cells. The synthetic genome, over a million base pairs long, was assembled from pieces ordered from a DNA synthesis company, which took Venter’s digitally composed sequence—the string of bases, or “letters”—and then chemically synthesized it and delivered it in short, overlapping stretches called oligonucleotides. Venter’s lab painstakingly stitched these together into larger pieces, which were themselves stitched together into a full genome: a synthetic chromosome, which was then transplanted into a cell. The project took 15 years. Venter emphasizes the precision required in the experiment: the transplantation failed repeatedly because of a single typo, a single misplaced letter in a key gene. When the transplantation finally succeeded, the DNA at the cell’s heart had been human-designed and human-assembled, but the cell divided and reproduced as if it were natural. As The Guardian reported, the cell “paves the way for designer organisms that are built rather than evolved.”

Announcing the cell’s completion, Venter demonstrated an instinct for publicity, as The New York Times reported:

At a press conference Thursday, Dr. Venter described the converted cell as “the first self-replicating species we’ve had on the planet whose parent is a computer.”

“This is a philosophical advance as much as a technical advance,” he said, suggesting that the “synthetic cell” raised new questions about the nature of life.

In the same article, Nicholas Wade reported the misgivings of leading scientists who found Venter’s technical achievement remarkable, his hype distasteful. Leroy Hood used the word “glitzy.” Nobel laureate David Baltimore granted the technical achievement, but added, “To my mind Craig has somewhat overplayed the importance of this … He has not created life, only mimicked it.” Gerald Joyce similarly noted the “power” of designing a genome letter by letter, but rejected the idea that the cell was “a new life form”: “Of course that’s not right—its ancestor is a biological life form.” The public rivalries of scientific frenemies are a popcorn-worthy combination of Mean Girls and Pacific Rim, but beyond the gossip, the arguments are as rhetorical as they are scientific: how should scientists represent their work to the public? To my mind Craig has somewhat overplayed the importance of this.

In the Science paper unveiling the project, Venter is relatively restrained, but in his press conferences and in his book, his claims lie somewhere between science, philosophy, literature, and guru-like prophecy. Depending on the audience, the same synthetic cell is communicated in radically different ways. This rhetorical divide is characteristic of new biotechnologies—think, for example, of the difference between an ad for NIPT and a consent form signed by a patient—but is also traditional. Like many of his scientific forebears, James Watson in particular, Venter is understated in scientific publications and hyperbolic before the press.

In Venter’s case, the hyperbole takes the form of metaphor. In Life at the Speed of Light, Venter’s description of his synthetic organism is exuberantly synthetic, splicing together elements of life, writing, publication, software, and the Internet: “We were ecstatic when the cells booted up … It’s a living species now, part of our planet’s inventory of life.” Throughout the book, Venter treats metaphor like an engineer stress-testing a metal, pushing it to the point of failure. His point is that the metaphor is not metaphorical:

[DNA] is in fact used to program every organism on the planet with the help of molecular robots. [Emphasis mine.] …

All living cells run on DNA software, which directs hundreds to thousands of protein robots …

Digital computers designed by DNA machines (humans) are now used to read the coded instructions in DNA, to analyze them and to write them in such a way as to create new kinds of DNA machines (synthetic life).

To drive his point home, Venter encoded messages in Synthia’s genome. These, described as “watermarks,” distinguished the creature as synthetic. Venter used a code, with triplets of DNA letters equivalent to letters of the alphabet, to spell out messages, including the names of contributors to the Science paper announcing Synthia’s existence. Also included were three quotations, in all caps (the code didn’t include lowercase): one from James Joyce’s Portrait of the Artist as a Young Man (TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE); a saying attributed to J. Robert Oppenheimer’s teacher, SEE THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE; and WHAT I CANNOT BUILD, I CANNOT UNDERSTAND, a misquote from the physicist Richard Feynman. (The original: “What I cannot create, I do not understand.”) These were initially presented as a puzzle to solve: also encoded in the genome was an email address, so the DNA machines (human) who’d figured out what the DNA machine (Synthia) was saying could contact the DNA machines at the J. Craig Venter Institute and let them know. Like Romesberg’s “stories” written in six-letter DNA, Synthia is conceived as a kind of message, but it takes that vision to a literal extreme.
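
The triplet scheme Venter describes amounts to a substitution cipher over codons. As a purely illustrative sketch, the short Python program below invents its own codebook (the institute's actual code table, released with the Science paper, is not reproduced here, and the function names are mine) and shows the mechanics: an all-caps message is spelled out as a string of bases, then read back three letters at a time.

```python
# A minimal sketch of encoding a text "watermark" in DNA triplets.
# The codebook below is hypothetical, invented only for illustration;
# it is not the table used for Mycoplasma mycoides JCVI-syn1.0.
import itertools
import string

# Map uppercase letters, digits, and a few punctuation marks (40 symbols)
# onto the first 40 of the 64 possible three-base codons, in order.
ALPHABET = string.ascii_uppercase + string.digits + " .,@"
CODONS = ["".join(t) for t in itertools.product("ACGT", repeat=3)]
ENCODE = dict(zip(ALPHABET, CODONS))
DECODE = {codon: char for char, codon in ENCODE.items()}

def to_dna(message: str) -> str:
    """Spell out an all-caps message as a string of DNA bases."""
    return "".join(ENCODE[ch] for ch in message.upper())

def from_dna(sequence: str) -> str:
    """Read a watermark back out, three bases at a time."""
    triplets = (sequence[i:i + 3] for i in range(0, len(sequence), 3))
    return "".join(DECODE.get(t, "?") for t in triplets)

if __name__ == "__main__":
    watermark = to_dna("TO LIVE, TO ERR")
    print(watermark)            # a string of A/C/G/T bases
    print(from_dna(watermark))  # TO LIVE, TO ERR
```

A three-base codon gives 64 possible symbols, comfortably more than the capital letters, digits, and basic punctuation needed here, which is presumably how the watermarks could carry names and an email address alongside the quotations.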

To me, Synthia is an elaborate, clever instance of biological wordplay—more sudoku than poetry, but suggestive nonetheless, a puzzle that remains mysterious even on decoding. But according to Venter, Synthia’s meaning is clear. It—she?—has two lessons to impart. First, it disproves “vitalism,” the idea that something nonmaterial—spirit, a “life force”—is necessary for life to exist. Life is material. And second, Synthia proves that life is information. Venter stresses the point, saying that “[t]hese experiments left no doubt that life is an information system.” His work offers “the proof … that DNA is the software of life.”

Mycoplasma mycoides JCVI-syn1.0 is a quasi-literary text, inscribed in a cell. For that reason alone, it seems to me that its potential interpretations are more varied, more uncertain, and more interesting than the ones advanced by its author. Among Synthia’s all-caps proverbs is the declaration WHAT I CANNOT BUILD, I CANNOT UNDERSTAND, which implies that its builders best understand its meaning; however, if the history of literature teaches us anything, it’s that the author is the last person you should turn to when seeking the meaning of a work. What an inventor wants something to mean matters less than what the world chooses to make of it.

If Synthia were just a really short book, no one would bother with it. It’s a cereal box version of Bartlett’s Familiar Quotations: three quotes, a list of names, and an email address. It wasn’t written by the author whose initials enclose the whole, and one of the quotes is incorrectly transcribed. Worse, it performs the remarkable feat of making James Joyce sound like a bad motivational speaker: TO LIVE, TO ERR, TO FALL, TO TRIUMPH, TO RECREATE LIFE OUT OF LIFE! And yet a closer look is rewarding, because the more you look, the more the cell’s meanings splinter into uncertainty, beginning with its name.

Synthia, I’d thought at first, was a clever bit of wordplay on Venter’s part, the cell’s name expressing its nature: syn for synthetic biology; the letter S substituted for C, suggesting the editing process by which words were encoded in its sequence. It turns out that the name was coined back in 2007 by the ETC Group, a Canadian civil society group fiercely opposed to the project. The name was intended as mocking, like “Obamacare,” but the effort backfired: it was catchy, so it stuck, and soon it became a handy generic name, if only because “Mycoplasma mycoides JCVI-syn1.0” does not exactly roll off the tongue.

If the mockery misfired, it may be because Synthia fits easily into the rhetoric of invented life, the specific kind of whimsy of those who, playing God or not, enjoy playing with words: Dolly, the cloned sheep (named after Dolly Parton, because the sheep was cloned from a mammary cell); cc, the cloned cat; Hercules, the genetically engineered, supermuscular beagle; “Eau d’E. coli,” a variant of E. coli engineered to smell good. In books like Venter’s Life at the Speed of Light and George Church’s Regenesis, the lighthearted, catchy names sit oddly beside the grand claims about life, science, and the future. It is as if someone had stuck a limerick into the Odyssey.

This dissonance points less to science than to a saturated media environment, in which extremity and novelty are rewarded. The two rhetorical registers of biotech futures—stentorian announcements of a New Epoch and catchy names for new animals—are simply two forms of novelty, two ways to distract an audience from distraction. You can do that with entertainment or wisdom, a joke or a truth, a witty slogan or a sonorous prophecy. To delight and instruct, in the age of social media: our new pitches map onto an old poetic goal, made more urgent by the sheer quantity of information we have to sort through.

What is the best way to read a cell? Synthia’s declarations tend more toward prophecy than wit, but even ignoring the substance of its all-caps wisdom, the very fact of quotation is suggestive. Plucking sentences from context mirrors the project itself, which tears genes from previous contexts and installs them in new ones, both digital and biological. And who is quoted matters as much as what is said: pointing to Oppenheimer, Feynman, and Joyce elevates the idea of the iconoclastic genius, clearly implying that Venter belongs in their company. By implication, it is Venter who has TRIUMPH[ED] by RECREAT[ING] LIFE OUT OF LIFE, who SEE[S] THINGS NOT AS THEY ARE, BUT AS THEY MIGHT BE. And with the quotation WHAT I CANNOT BUILD, I CANNOT UNDERSTAND, Venter lays claim to a superior understanding of life—and causes the built object, the living cell, to ventriloquize the claim.

It is an odd thing for a cell to say I, odder still when that cell’s existence is a team effort. But Synthia hovers between individual and group achievement. Venter’s initials occupy the taxonomic name, but as part of an institute; the names of his coauthors are inscribed in the “watermarks”; the cell, with Joyce, elevates an individual ideal of achievement, but at the same time distributes credit, albeit less prominently, to the group. As a digital creature, its uncertainties reflect a digital age when the author is in decline; when old ideas of intellectual property battle with new ones; when everyone on social media is writer, reader, and publisher at once; and when so much of what we share is curated, appropriated, and snipped from one context and repurposed for another. The quotations illustrate this tendency, but their real function is to identify the cell as synthetic and not natural. The quotes are claims that stake a claim. Of course, there’s no end of irony in using appropriated text to establish intellectual property rights, especially when one sentence is a misquote. A string of random characters would have been simpler.

To me, the altered quotation from Feynman is endlessly suggestive. WHAT I CANNOT BUILD, I CANNOT UNDERSTAND can be read at face value, as a declaration of the synthetic biology principle that life must be constructed to be understood, and yet, as a misquote, it reads as a flaw in construction. Human achievement is undercut by human error. Given Venter’s insistence on the precision of his genome editing, the proofreading error is especially ironic. At the same time, it reveals which kinds of precision matter. DNA may be, in Venter’s accounting, an information system, but some forms of information are clearly more important than others. Bacterial genomes are proofread down to the base pair; human sentences—eh, close enough.

Implied in Synthia’s text, and made explicit by Life at the Speed of Light, is an idea about which kinds of knowledge matter most. There’s a paradox in books like these: scientists take on the role of artists, but the arts are distinctly secondary. Even as the scientist is portrayed as a storyteller with an automated sequencer, a painter with a palette of nucleotides, the arts come off as a sort of third wheel of civilization. They aren’t ways of understanding the world, loci of transcendent shared loneliness across time, the set of practices every culture has, and without which life would have no point; they’re just a source of iconic explanatory examples. Whatever is cited tends to be famous and big. Venter notes Joyce; George Church, in Regenesis, explains that he wants to make new genomes synthetically, not simply copy old ones, because “[p]hotographing the Mona Lisa is not as impressive as creating it in the first place.” In this figure, art is more decorative than structural. It is the trim in the house of science, the false columns in front, substantial looking but not load bearing.

In both Regenesis and Life at the Speed of Light, this attitude to art is rooted in an engineer’s approach to the world. What matters is doing something, making something useful. This idea resonates with me, but not when it’s used to demote other human endeavors that are necessary and useful in their own ways. George Church, for example, defines synthetic biology in opposition to something “self-indulgent”:

Synthetic biology is mostly about developing and applying basic engineering principles—the practical matters that help transform something academic, ivory-towerish, pure, and sometimes self-indulgent or abstract into something that has an impact on society and possibly even transforms it.

Venter, too, defines himself as a problem solver, setting himself in opposition to a fictional ivory tower:

Richard Feynman issued a famous warning about the dangers of attempting to define anything with total precision: “We get into that paralysis of thought that comes to philosophers … one saying to the other: ‘You don’t know what you are talking about!’ The second one says: ‘What do you mean by “talking”? What do you mean by “you”? What do you mean by “know”?’”

Like Church, Venter—though he touted Synthia as “a philosophical advance as much as a technical advance”—implies a clear contrast between practical doers and philosophical yappers. And yet: isn’t disproving vitalism, the point of Synthia, kind of … philosophical? Doesn’t transcribing a message in a cell raise questions about language? And don’t Venter’s own metaphors, like saying that people are DNA machines or saying the cells booted up, raise questions about what it means to talk, to know, to be a “you”— about, in other words, language, knowledge, and people? These questions are raised by transformative biotechnologies. That they can be pursued to dead ends is no reflection on the questions themselves: any line of inquiry, in any field, can be sterile and pointless. The key is to consider the questions in a fruitful way. That begins with questions of power: who gets to speak, who is considered authoritative, and who is spoken about.

These are ancient questions, but the digital age renews them. The existence of a “programmable cell” blurs life and nonlife, organisms and messages. Language like the dawn of digital life or E. Cryptor or H. sapiens 2.0 celebrates the blur, playing with it in words, but beneath the play is a message of control, the ability to build a cell to order, to make it serve a task. Both meanings, it seems to me, are evident in the word watermark, which embodies the durable and ephemeral: it could stand for life (its code enduring, its forms changing) or the internet, where the folk belief that data live forever is belied by the fact that data tend to disappear, either drowned by the sheer wash of new data or simply lost. But watermark is a printer’s word, not a poet’s. A sign of ownership. A mark of intellectual property, inscribed in fluid life.

It took a scientist to point out another irony to me: the very fact that cells change as they evolve means that the watermarks will change. Left to their own devices, Synthia’s descendants will evolve. Since the “watermarks” are embedded in noncoding regions, the cell has no need to preserve them. Therefore the list of authors, the quotations, and the email address to contact will begin to degrade. They will be no more permanent than marks in stone; they will weather from the inside out, the author’s intentions fading, letter by letter. As an organic book, one that can reproduce independently, Synthia is self-publishing, but it is self-revising too.

As a living creature, Synthia is a chimera, an engineered blend of two species. But as a living book, it is a chimera of minor forms, a vanity-press amalgam of title, aphorisms, contributors’ notes, and copyright page. These, Russian-doll style, are all enclosed in the “watermarks,” which are in turn enclosed in a brainteaser: the entirety of Synthia’s legible text was presented as a puzzle for smart, science-oriented people to solve. It is, in other words, an intelligence test, but it emphasizes one kind of intelligence, selecting for those who—like Synthia’s inventors—have a problem-solving mind-set and a brain for code, and who are digitally savvy and connected enough to contact the J. Craig Venter Institute with their solution. This does not describe most of the humans in the world. It is technology that divides, not technology that embraces. I prefer a different view of technology and a different voice. A voice that is open and questioning, and that begins and ends with people and thinks about how tools might fit, rather than beginning with the tool and assuming that people will find a place.

Adapted excerpt from Fables and Futures: Biotechnology, Disability, and the Stories We Tell Ourselves by George Estreich, © 2019 George Estreich.



Original author: George Estreich

Instagram's New Shopping Feature Works Like a Digital Mall


The mall of the future is not a sprawling metropolis of stores, punctuated by the occasional soft pretzel stand and big-box movie theater, but a platform on your phone. Imagine: a million stores made just for you, selling only the things you're likely to buy, based on what you've bought in the past or how you've behaved online. Plenty of platforms are trying to steer shopping in this direction. Amazon anticipates when you'll need to restock on paper towels. Pinterest predicts what you'll want for your home remodel. Now, Instagram is taking a big step toward surfacing the stuff you might buy, and making it easier for you to buy it.


Starting today, Instagram will enable in-app check-out for its shoppable posts. By streamlining the process of purchasing things within its mobile app, Instagram hopes to become your own personalized, digital mall.

One year ago, Instagram made it possible to “shop” posts in your feed. If you follow a brand like Zara, you might see a post showcasing a new shirt—part branded content, part advertisement—with a tag that shows the item name and pricing. Instagram also introduced a dedicated space for shopping through Explore. Enter the Explore tab and you’ll find, in addition to personalized interests like “Food” or “Travel,” a section for “Shopping.” It’s filled with shoppable posts from brands or accounts you follow, plus ones Instagram thinks you’ll like based on your browsing behavior. Tap on a post and you’ll find more details about the product for sale—a sectional couch, or a pair of sunglasses—with pricing and a shopping link.

Arielle Pardes covers consumer technology for WIRED.

Instagram says that over 130 million people tap on these tags each month, browsing through posts from brands big and small. But the actual purchasing process can be annoying. Want to buy that shirt from Zara? A link navigates you to the mobile site, which opens up inside of the Instagram platform, and then brings you through a clunky check-out process. It's easy to lose your place, and awkward to enter credit card information on a mobile platform layered on top of another mobile platform. “Now, instead of having to go through this clunky mobile web flow and checking out, you can now check out directly on Instagram,” says Ashley Yuki, the product management lead on the new feature.


The in-app check-out feature will roll out with a set of 20 initial retail partners, including Nike, Burberry, Uniqlo, and a handful of internet-native brands, like Warby Parker, Outdoor Voices, and Kylie Cosmetics. Yuki says the platform plans to include more retailers soon. These vendors will pay a cut of their sales to Instagram, though Instagram declined to share what that fee would be.

Instagram counts over a billion users on its platform every month—and it has a detailed dossier on each of them. The Facebook-owned company makes note of which brands the users follow, what categories they're interested in, and which targeted advertisements their eyeballs linger on. For brands, it's like opening a storefront in a shopping center where the customers who are most likely to buy from them are automatically directed to their front door.

The new feature also brings Instagram one step closer to its vision of becoming the only shopping mall you'll need. Yuki says Instagrammers are increasingly turning to the platform to discover products they love, in addition to connecting with friends and family. “We’re excited to take it one step further and say, what would happen for our community who comes to Instagram and just wants to be shopping?” says Yuki. “It’s personalized for you, based on brands you follow and that you might never have discovered in another way, and you can shop them all in one place. That’s pretty interesting for us at Instagram.”


Original author: Arielle Pardes

Game maker Rovio ventures into augmented reality with new Angry Birds game


FILE PHOTO: An Angry Birds game character is seen at the Rovio headquarters in Espoo, Finland March 13, 2019. Picture taken March 13, 2019. REUTERS/Anne Kauranen

(Reuters) - Finnish game company Rovio said on Tuesday it has released an augmented reality game called Angry Birds Isle of Pigs, developed together with Swedish game studio Resolution Games for Apple’s mobile devices.

The release, announced at the Game Developers Conference in San Francisco, comes in addition to the two games the company had promised to release this year, one of which is already out.

Reporting by Anne Kauranen; Editing by Edmund Blair



Aluminum producer Hydro hit by cyber attack, shuts some plants

OSLO (Reuters) - Norsk Hydro, one of the world’s largest producers of aluminum, was battling on Tuesday to contain a cyber attack which hit parts of its production, sending its shares lower.


FILE PHOTO: An aluminium coil is seen during opening of a production line for the car industry at a branch of Norway's Hydro aluminum company in Grevenbroich, Germany May 4, 2017. REUTERS/Wolfgang Rattay/File Photo

The company shut several metal extrusion plants, which transform aluminum ingots into components for car makers, builders and other industries, while its giant smelters in countries including Norway, Qatar and Brazil were being operated manually.

The attack, which began on Monday evening and escalated overnight, affected the company’s IT systems for most of its activities.

“Hydro is working to contain and neutralize the attack, but does not yet know the full extent of the situation,” the company said in a statement.

It added that the attack had not affected the safety of its staff and it was too early to assess the impact on customers.

The event was a rare case of an attack on industrial operations in Norway. The last publicly acknowledged cyber attack in the Nordic country was on software firm Visma, when hackers working on behalf of Chinese intelligence breached its network to steal secrets from its clients.

Companies and governments have become increasingly concerned about the damage hackers can cause to industrial systems and critical national infrastructure following a number of high-profile cyber attacks in recent years.

In 2017, attacks later blamed by the United States on Russia and North Korea caused millions of dollars of damage to companies worldwide, crippling computers in industries from shipping to sweet making. Moscow and Pyongyang have denied the allegations.

In Ukraine, meanwhile, authorities have seen hackers knock electricity grids and transport systems offline, and an attack on Italian oil services firm Saipem late last year destroyed more than 300 of the company’s computers.

FROM CARS TO CONSTRUCTION

Hydro makes products across the aluminum value chain, from the refinement of alumina raw material via metal ingots to bespoke components used in cars and the construction industry.

“Some extrusion plants that are easy to stop and start have chosen to temporarily shut production,” said a Hydro spokesman.

The company’s hydroelectric power plants were running as normal on isolated IT systems unaffected by the outage.

The Norwegian state agency in charge of cyber security said Hydro contacted them early on Tuesday and that it was assisting the company.

“We are ... sharing this information with other sectors in Norway and with our international partners,” said a spokeswoman for the Norwegian National Security Authority (NSM). She declined to comment on the nature of the attack.

Norsk Hydro’s main website page was unavailable on Tuesday, although some of the web pages belonging to subsidiaries could still be accessed. The company was giving updates on the situation on its Facebook page.

FILE PHOTO: Concrete pipes connecting the bauxite residue deposit to its water treatment station are pictured at the alumina refinery Alunorte, owned by Norwegian company Norsk Hydro ASA, in Barcarena, Para state, Brazil March 5, 2018. REUTERS/Ricardo Moraes/File Photo

“Hydro’s main priority now is to limit the effects of the attack and to ensure continued people safety,” it wrote in a Facebook post.

Hydro’s shares fell 3.4 percent in early trade before a partial recovery to trade down 0.9 percent by 1121 GMT. It was still lagging the Oslo benchmark index, which was up 0.9 percent.

Hydro, which has 36,000 employees in 40 countries, recorded sales of 159.4 billion crowns ($18.7 billion) last year, with a net profit of 4.3 billion crowns.

Additional reporting by Nerijus Adomaitis in Oslo, Jack Stubbs and Barbara Lewis in London; editing by Keith Weir, Emelia Sithole-Matarise and Kirsten Donovan



Instagram adds new feature to let U.S. users shop via app


FILE PHOTO: Silhouettes of mobile users are seen next to a screen projection of Instagram logo in this picture illustration taken March 28, 2018. REUTERS/Dado Ruvic/Illustration

(Reuters) - Facebook Inc’s Instagram will now let U.S. users shop for products directly from the photo sharing app by adding a ‘checkout’ feature on items tagged for sale, the company said on Tuesday.

The move is in line with Facebook’s plan to monetize higher-growth units like Instagram, especially as the company’s centerpiece product, News Feed, struggles to generate fresh interest.

Instagram said it has partnered with more than 20 brands, including Adidas and H&M, on the new feature.

The photo sharing app has more than 130 million people tapping to reveal product tags in shopping posts every month, up from 90 million in September, it said.

Reporting by Munsif Vengattil in Bengaluru and Katie Paul in San Francisco; Editing by Arun Koyyur



Make your dumb bulbs and devices smarter with these killer TP-Link deals

If you don't have hundreds of dollars to spend on new bulbs, hubs, and thermostats, you can still make your home a whole lot smarter. And today's a great time to start: B&H Photo is selling the TP-Link HS300 smart Wi-Fi power strip for $55 when applying the 31 percent coupon on the listing, along with a three-pack of TP-Link's HS200 smart Wi-Fi switches for $57, nearly 50 percent off its list price of $105.

The smart Wi-Fi power strip allows you to plug in up to six devices at once and control them individually, either with their own switches or by using the connected app. There are also three handy USB ports for charging extra devices. Using TP-Link's Kasa app, you can set schedules and turn devices on and off remotely, as well as monitor energy usage. Built-in Wi-Fi connectivity also means you can use this power strip without a hub, and you can connect it to Amazon Alexa, Google Assistant, and Microsoft Cortana for voice control.

The smart Wi-Fi switch pack comes with three switches, so you can set up their smarts in multiple rooms. These switches, which also connect to Wi-Fi hub-free, can function just like regular light switches, but they gain their smart features when connected to the Kasa app. Using your mobile device, you'll be able to set schedules, plan away times, and turn lights off and on from anywhere. In addition, compatibility with Alexa, Google Assistant, and Cortana means you'll be able to add voice control on top of the mobile-controlled smarts.

[Today's deal: TP-Link smart Wi-Fi power strip for $55 and three TP-Link smart light switches for $57]


Original author: Alexandria Haslam

Best Google Home add-ons and accessories

Welcome to the Google Home shadow market, a symbiotic ecosystem of third-party products that add a bit more versatility to the Google Home Mini and original Google Home smart speaker. Some grant true portability to Google’s otherwise-tethered speakers. Others help you place the speakers in a more convenient position. Intrigued? Let’s jump in.

Updated March 18, 2019 to add a link to our review of the Toast Google Home Wood Cover

Mount Genie Google Home Mini Outlet Wall Mount

The Google Home Mini is already small enough to place just about anywhere, but its USB power cable is gangly and unsightly, and Google provides no in-box adapter to plug the Mini directly into a wall outlet. Luckily, the Google Home Mini Outlet Wall Mount from Mount Genie is an effective widget for slapping the speaker directly on your wall, though its build quality aligns closely with its $8 price.


The Mount Genie Wall Mount has a low enough profile to accommodate my hair dryer plug.

This plastic mount comes in either black or white, and when inserted into an electrical socket, it left enough room to plug in my electric toothbrush, beard trimmer, or hair dryer plug pictured here. The far edge of the speaker extends less than 5 inches from the edge of the outlet cover, so it can fit in tight spaces whether your outlet is oriented horizontally or vertically, and whether you plug the Mini upside down or right side up. There’s a hole in the mount to reach the mute button, but be forewarned: If you orient your Mini above the outlet, as shown here, you’ll need to reverse your touch volume controls.


It can be a struggle to get the power cord tucked underneath the Wall Mount’s clips.

You install the Mini by mounting Google’s power adapter, snapping in the speaker, and then threading Google’s power cord underneath the mount’s plastic clips. It’s difficult to get the USB connector underneath the clips, and I found I had to raise these flimsy plastic pieces to fit in the cord. One clip broke during the process, but the remaining clips kept the cord in place. Sure, build quality could be better, but the mount costs only $8, and it solves a big problem.

Dot Genie Google Home Mini Backpack

It looks like the folks at Mount Genie have continued to evolve their technology for slapping a Google Home Mini directly on a wall. We have no idea why the $15 Google Home Mini Backpack is sold under the Dot Genie brand, but the box is labeled Mount Genie, and the assembly process is as vexing as that of the Outlet Wall Mount covered above.


The Google Home Mini Backpack looks low-profile, but it still takes up enough space to crowd the neighboring outlet plug.

Ostensibly, assembly should be easy. First you pop in the included power adapter, which replaces the adapter Google ships with its smart speaker. Next you insert the speaker into the Backpack chassis—it’s supposed to click together with an obvious snap. Finally, you link the power adapter to the Mini with a bundled USB connector. This teeny, tiny cable replaces Google’s gangly cable, completely obviating the question, “What do I do with this stupid cord?”

Just one problem: It took six or seven attempts to get the speaker to snap into the Backpack. I was about to give this widget a bad review, complaining that the plastic bracket was molded too short to actually grip the speaker. But I gave it one final try, and, at last, the Mini snapped in.


The Backpack has a plastic ground pin that helps keep your speaker stuck to the wall, but forces a right-side up orientation.

The Google Home Mini Backpack comes in black, white, or coral to match Google’s colors. Its outlet footprint is bigger than the Outlet Wall Mount, and while my electric toothbrush plug settled in nicely, the hair dryer plug was a very tight fit, and the beard trimmer plug wouldn’t fit in the neighboring outlet at all. What’s more, the Backpack can only be inserted into three-prong outlets thanks to its plastic “dummy ground” pin, which keeps the unit stable in your wall. And because of this plastic pin, the unit can only be oriented right-side up—but at least that positions your volume touch controls correctly.

The Backpack uses thicker plastic than the Outlet Wall Mount, and I think it’s the better choice, despite costing a whopping $15 (that is, unless you have big plugs that need to share the same outlet). Just make sure you pop it all the way in, even when it refuses to snap.

JOT Portable Battery Case for Google Home Mini

The concept is simple: Pop your Google Home Mini into the JOT Portable Battery Case, and it will let you use your speaker anywhere within Wi-Fi range, completely untethered from Google’s power adapter. Obviously, a cord-free Google Home doesn’t offer any benefits unless you need portability. But by using the $30 JOT, I’m able to take my Mini outside and listen to music and podcasts anywhere in the backyard. You may have other uses in mind. Consider the possibilities.


The JOT Battery Case gives you about eight hours of portable Google Home Mini power.

Of all the Mini accessories I’ve tested, the JOT Portable Battery Case has the best build quality and easiest installation. To insert the Mini, you depress two buttons on the thick plastic shell; align the speaker’s USB port with JOT’s built-in micro USB plug; snap the shell back together with a satisfying click, and... that’s it. Now it’s time to charge the battery case with Google’s power adapter. It takes about three hours to charge fully, but once it’s done, you can enjoy the Mini for a claimed eight hours of run time. Four LED lights indicate charge levels, and the unit leaves a hole for you to access the speaker’s mute switch.

It’s a simple product that gets the job done. Available in carbon or silver.

Mount Genie Google Home Mini Pedestal

The Mount Genie Google Home Mini Pedestal lifts your wee smart speaker off a flat surface and orients it more or less upright. In this position, you can better see its colorful Google lights, and the audio soundscape should improve too. The pedestal is just a single piece of molded plastic, but that’s what you get for $11.


The Google Home Mini Pedestal is exceedingly simple but performs an important task.

The plastic feels rather thin and insubstantial, but once you thread Google’s USB cable through the pedestal’s base, and plug it into the speaker, the Mini sits confidently inside. Just make sure to reverse your touch controls, because, technically, your speaker will be sitting upside-down. The pedestal comes in charcoal, silver, and white.

Ninety7 Loft Portable Battery Base

Finally, we have a little accessory love for Google’s original smart speaker, the full-size Google Home. The Loft Portable Battery Base ($39.95) replaces the existing base on Google’s speaker, and gives you up to 10 hours of cord-free power, thanks to its integrated Lithium-Ion battery. (Battery life will really depend on how hard you push the speaker itself.)


Ninety7’s Loft is a must-have accessory for Google Home users who want true portability.

We were impressed by the Loft’s lightness, and how well its design integrates with Google’s speaker. And it’s easy to remove Google’s base and quickly snap the Loft base into place. You charge the Loft with Google’s own power adapter, and the total charge time is about four hours.

If you’re looking to take your Google Home anywhere within Wi-Fi range (the backyard perhaps), the Loft is your ticket. We wish it came with a USB port to charge other devices, and it seems a bit pricey considering the Google Home itself costs only $99, as of press time. But other than that, we really can’t complain.


Original author: Jon Phillips
