Should We Engineer Future Humans?
Weekly Article
Dec. 17, 2015
Imagine a technology powerful enough to fight off cancer without chemotherapy. Suppose it could also make people immune—or vulnerable—to deadly viruses like Ebola. The technology might prevent a child from experiencing a devastating inheritable disease—and also allow the wealthy to design babies with high IQs, blond hair, and blue eyes.
These are the hopes and fears sparked by the freshly honed techniques of gene editing. As scientists have gained the extraordinary ability to rewrite the genetic sequences inside human cells, they have ignited a fierce ethical debate in recent months about how far we should go in engineering ourselves and the common gene pool of humanity.
The most profound question raised by this technology, dubbed CRISPR-Cas9, however, is not the one prevailing in the public eye, which is how far we should go in altering our genetic code to help the living or the dying. It’s rather about what we owe to future generations: Do we owe them as many scientific tools as possible to stop suffering from disease? Or are we more obligated not to screw up the species and the world in our quest to have better lives? To answer those questions, we need to open our eyes to how the technology could unfold over the long term, without overindulging in dystopian visions.
Gene editing, in one sense, is nearly as old as life on Earth; scientists discovered CRISPR in the inner workings of bacteria that edit their genetic sequences to adapt to and fend off viruses. But in the hands of humans, its capabilities are unprecedented. As research progresses, many scientists envision that one day soon, doctors could prevent and treat intractable diseases by precisely splicing and dicing patients’ genetic sequences. (In the interest of full disclosure, I'm on leave from the Broad Institute, which conducts CRISPR research; views here, however, are mine alone.)
What kind of medical progress could gene editing drive? Biologists point to a gene known as CCR5, which is missing in working form in about 1 percent of the U.S. population, giving those people natural resistance to HIV. With CRISPR, scientists might be able to edit that gene out of broader populations, stopping the spread of HIV infections. That feat alone would be nothing short of miraculous.
It's worth noting that unforeseen consequences, and even known tradeoffs, can come with editing the human genome. The same genes that confer protection against one disease can create susceptibility to another. The CCR5 mutations that protect people from HIV also increase the risk of severe West Nile virus infection. The same genetic mutations that cause sickle cell anemia protect carriers from dying of malaria. And these are just the cases where we know the tradeoff—what will actually happen to patients when we edit their genetic sequences is a vast, uncharted terrain.
Yet even with such downsides, it's easy to see why gene editing should be pursued to treat diseases in patients who are suffering or to prevent the spread of deadly epidemics. All medical interventions carry risks, whether or not they involve tweaking the genome. Even without gene editing, certain cancer or ALS patients are willing to accept greater potential side effects from experimental drugs, depending on how sick they are and their tolerance for uncertainty. Patients could likewise choose whether the potential benefits of gene editing are worth its dangers; what they decide will depend on how sick they are, how old they are, what they know about the risks, and how they feel about them.
Where it gets tricky is the prospect of editing the sequences of human embryos. Parents can already ask for tests to determine whether their embryos carry some inherited diseases that could make their children’s lives short and brutal. But altering genes in embryos, in what's known as the germline, would result in changes that get passed along to future generations. Such changes could irrevocably alter the genetic makeup of the human species. Tweaks to many embryos could add up to large shifts that make the human gene pool less robust or more prone to diseases worse than the ones we face today.
There are also social consequences. Unequal access to gene-editing technology could exacerbate the polarized legacies of being rich or poor, with the former choosing qualities for their children that further secure their destiny, and the latter falling further behind. Given ample commercial offerings and unbridled technology, it’s likely that many people would try to mold their children to a common ideal, compromising the genetic diversity that makes our species strong and more likely to survive and persist—while also making society more boring and less beautiful. Lest you think this scenario unrealistic, it’s worth noting the history of eugenics movements and the current proliferation of plastic surgery in South Korea, where surveys estimate that more than half of women in their 20s have undergone procedures such as eyelid surgery meant to conform to an ideal dictating rounder windows to the soul.
What's troubling here is not that such risks exist. All technology and all progress pose possible dangers. It’s that we lack tools to think about and weigh the ultimate consequences. There is no way to do a trial run; the problems raised by gene editing require us to think long term to determine how far the research should go.
Individuals and institutions in our society, however, do not reckon with the timescale of intergenerational problems. When faced with crises from long-term deficits to climate change to nuclear waste, we are often paralyzed by the political imperatives and the impulses of the present. We don't make the tough decisions needed to keep Social Security and the National Flood Insurance Program solvent or to invest in rebuilding 19th-century infrastructure. We don't build Yucca Mountain, but we keep making nuclear waste. We won't put a price on carbon. But the dysfunction is even more basic: We don't pose the most critical questions about the future.
In the case of CRISPR, the critical question is this: In our urgent efforts to eradicate disease, how much should we weigh what could happen to future generations who lack the ability to consent to our experiments?
To their credit, the U.S. National Academy of Sciences, the Chinese Academy of Sciences, and the UK’s Royal Society convened a global summit in early December, where they held open debates about the ethical issues raised by gene editing and converged around a set of norms: They decided that research with CRISPR should continue, but that a moratorium should be placed on engineering human embryos in ways that result in pregnancies. Theirs is a wise move that buys time to deliberate on what editing embryos could mean, while allowing progress in using CRISPR to continue so that patients might benefit from it in the near future.
The effort to create global norms—if enforced within countries—is promising. As with nuclear nonproliferation and gun control, it is worthwhile to guide the good actors and create bright lines for enforcement, even if ultimately we cannot stop the terrorists or the truly desperate from seizing the technologies to engineer viruses or save their children.
What’s remarkable about the summit is that the scientific community created an open public forum to air the ethical dilemmas posed by gene editing before opening Pandora's box to engineer embryos. The absence of a robust, transparent global dialogue about the moral conundrums posed by other technologies—from genetically modified organisms to geoengineering—has by turns alienated the public in counterproductive ways and limited technological development. Too many scientists and companies in these realms have ignored the need to set ethical boundaries for the use of these technologies and to communicate them to the public. While it’s still early days, the gene editing global summit could become a blueprint for future technologies, showing how we might debate ethical issues and converge around norms that balance scientific progress with the moral imperative to do more good than harm.
For that to happen, scientists and policymakers contemplating the future of CRISPR need to go much further than they have to date in grappling with its implications for future generations. Detailed scenarios of what the technologies could accomplish, as well as unleash, over different time horizons, transparently shared with the public and a broad range of patient groups, could begin to shape a more robust debate. Scientists who want to speed ahead may balk at this idea, worrying that it will alarm people more than inform them. But the impulse to protect the public from overreacting to future scenarios could backfire, leaving them armed only with ignorance to imagine the dystopian futures portrayed in Gattaca and Blade Runner rather than the actual ways democracies could deal with the technological future.
The debate about gene editing surfaces a deeper challenge: how to better weigh the rights of future generations in our decisions. The gene editing summit kicks the can down the road on editing embryos, deferring the question to a time when there is “broad societal consensus.” That consensus is unlikely to emerge on our current path. Economists who offer social discount rates to calculate the dollar value of risks posed to future human beings cannot help us decide what is moral when it comes to future people. Nor can philosophers offer us the practical tools to weigh future people in our policies.
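To see why such calculations strain over generational timescales, it may help to sketch the standard exponential discounting formula; the 3 percent rate and 200-year horizon below are illustrative assumptions, not figures drawn from the summit or from any particular economist:

\[
PV = \frac{FV}{(1+r)^{t}}, \qquad \frac{\$1\ \text{billion}}{(1.03)^{200}} \approx \$2.7\ \text{million}
\]

At any positive discount rate, harms that fall on people centuries from now shrink toward almost nothing in present-value terms, which is precisely why a formula useful for budgeting cannot settle what we morally owe to future people.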
While we can never know the future for certain, we do know that too many times we have ignored it at our peril. As we forge ever more powerful tools to engineer the future of artificial intelligence, our species, and our planet, our ethical obligations to the future are expanding. In deliberations about technology or policy, we can no longer simply invoke future generations of humankind in the abstract, to salve our guilty consciences. We need to stop and actually imagine how future people and societies might experience and reflect upon our legacy. They will know what we knew. Will they commend or curse us for our choices?