Policymakers have known for roughly a decade that synthetic biology could be misused to develop cheaper and more effective bioweapons. During that time, a great deal has been written and said, and something like a policy community has emerged. Even better, there are now consensus viewpoints around many issues. Those of us who study this issue have, in short, reached the point where society can reasonably demand our advice on practical problems.
And practical problems are becoming more salient. Ten years ago, synthetic biology was mostly a laboratory curiosity. Since then, however, biologists have shown that artificial DNA can be used to make functioning polio and 1918 influenza viruses. It is only prudent to assume, then, that still more elaborate organisms – say, smallpox or genetically-engineered weapons – are not far off.
Furthermore, artificial DNA can now be purchased over the Internet from commercial “gene synthesis companies.” All of which raises an intensely practical question: What, if anything, should these new companies do to make sure they do not inadvertently sell dangerous genes to terrorists?
The good news is that most gene synthesis companies already act responsibly. If you ask them, these firms will invariably agree with the principle that companies should make a reasonable effort to find out what the requested sequence does before shipping it to customers. In practice, this is almost always done by comparing customer-submitted sequences against the U.S. government’s exhaustive GenBank database of known genes. The companies then pay human screeners to find out what functions the closest GenBank matches (“homologues”) code for.
If these functions are benign, the process stops and the order is filled. If they are dangerous, companies know that they will have to examine the customer much more closely before filling the order. The process is not cheap and can consume hours of highly paid labor. Most companies fund it anyway.
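The two-stage workflow described above can be sketched in a few lines of code. This is a toy illustration only: the sequences, annotations, and k-mer similarity score below are all invented stand-ins (real screening uses BLAST searches against the full GenBank database, and a human screener makes the final call).

```python
# Toy sketch of the homology-screening workflow described above.
# All sequences and annotations are invented for illustration; a
# real pipeline would run BLAST against GenBank, not this k-mer score.

def kmer_set(seq, k=8):
    """Return the set of all k-length substrings of a DNA sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def homology_score(query, reference, k=8):
    """Fraction of the query's k-mers shared with the reference --
    a crude stand-in for a BLAST alignment score."""
    q, r = kmer_set(query, k), kmer_set(reference, k)
    return len(q & r) / len(q) if q else 0.0

# A miniature, made-up "GenBank": sequences with function annotations.
REFERENCE_DB = [
    ("ATGGCTAGCTAGCTAGGCTAGCTAGCTAGGAT", "fluorescent reporter protein", "benign"),
    ("ATGCGTACGTTAGCCGGATATCCGGTTACGAA", "neurotoxin subunit", "dangerous"),
]

def screen_order(query, threshold=0.5):
    """Find the closest homologue; escalate for human review if dangerous."""
    best = max(REFERENCE_DB, key=lambda rec: homology_score(query, rec[0]))
    if homology_score(query, best[0]) >= threshold and best[2] == "dangerous":
        return "escalate: resembles " + best[1]
    return "fill order"
```

The expensive part in practice is not the sequence comparison, which computers do well, but the annotation step: deciding what the closest match actually does, which is why companies pay human screeners.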
Still, there is a problem. Current practices were developed in isolation and tend to be uneven. In April 2008, the industry’s leading trade group, the International Association for Synthetic Biology (IASB), held a meeting where members agreed to negotiate a universal code of conduct that would make screening practices uniform across the entire industry. Even more importantly, members agreed to draft a code that reflected today’s highest standards instead of a common denominator. Nature’s editors called the first draft “laudable.”
Even so, members continued to revise the document for another year. Earlier this month, IASB members approved the final code at an open meeting in Cambridge, Mass. Five gene synthesis companies have already said that they will sign, and more are expected to follow.
Needless to say, not everyone agrees. Last summer, shortly after the Cambridge meeting was announced, two companies – DNA2.0, of Menlo Park, Ca., and Geneart, of Regensburg, Germany – suddenly announced a competing proposal built around a pre-defined list of known threat genes. Using such a list, they stressed, would allow companies to automate work that had previously been done by humans. This, in turn, would make screening “fast” and “cheap.” What DNA2.0 and Geneart did not say was how reliably their system detects threats.
Since the DNA2.0/Geneart “threat list” is secret – and, apparently, still under construction – no one can say for certain how good it is. In order to match the performance of human screeners, however, it would have to contain a detailed threat assessment for every gene in GenBank. I do not know a single knowledgeable expert who believes that such a list can be built any time soon.
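The appeal and the weakness of the list-based approach are both easy to see in miniature. The sketch below is hypothetical (the real DNA2.0/Geneart list is secret, and a production system would presumably use similarity matching rather than the exact lookup shown here), but it illustrates why automation is fast and why a fixed list can be brittle.

```python
# Toy sketch of list-based screening of the kind DNA2.0 and Geneart
# propose. The "threat list" entry below is invented; the real list
# is secret and presumably far more sophisticated than exact lookup.

THREAT_LIST = {
    "ATGCGTACGTTAGCCGGATATCCGGTTACGAA",  # hypothetical toxin gene
}

def list_screen(order):
    """Automated check: reject only if the order matches a listed gene."""
    return "reject" if order in THREAT_LIST else "ship"

# Fast and cheap -- but naive exact matching is evaded by changing
# even a single base, never mind a gene the list's authors missed:
listed = "ATGCGTACGTTAGCCGGATATCCGGTTACGAA"
variant = listed[:-1] + "G"  # same gene with the final base changed
```

A set lookup like this runs in microseconds with no human labor at all, which is exactly the “fast” and “cheap” promise. The open question the essay raises is coverage: any gene absent from the list ships unexamined.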
This does not mean that the DNA2.0/Geneart list is worthless. Indeed, it is probably good enough to detect terrorist orders for, say, the complete smallpox or 1918 influenza genomes. But it almost certainly cannot spot the many thousands of genes that could be used to make the sorts of biological nightmares that Soviet scientists pursued in the 1980s and early 1990s. And this matters. Most biosecurity experts I know believe that Soviet-style threats are – if not probable – sufficiently plausible to worry about. Such people uniformly prefer IASB’s standard.
For now, it is still not clear which standard the industry will pick. Indeed, DNA2.0 and Geneart continue to hold closed meetings to discuss their standard with other companies. The substance of these discussions (and even the participants’ names) remains secret.
Nevertheless, one large American company, Blue Heron, has already told Nature that it will “seriously consider switching” to the DNA2.0/Geneart standard. There have also been rumors that the DNA2.0/Geneart standard will morph into something that more closely resembles IASB’s proposal. Clearly, the situation remains very much in flux.
This is a critical moment. Anyone who has encountered DNA2.0, Geneart, or Blue Heron knows that these companies care deeply about public opinion. If enough biosecurity experts speak out, they will almost certainly be swayed.
Naturally, I do not expect unanimity. For example, I would not be surprised if scholars who are skeptical of Soviet superbug stories occasionally found the DNA2.0/Geneart procedures adequate. Nevertheless, my impression is that such viewpoints would be a distinct minority. In a show of hands, most of us would endorse the IASB standard.
Readers who have followed this essay so far are, practically by definition, passionate about the intersection of synthetic biology with biosecurity. For a privileged few of us, however, this passion is also our job. Governments and private foundations have spent the past decade building a professional policy community around synthetic biology. For better or for worse, we are about to find out whether their money has bought something more than lectures and white papers and trips to distant conferences.
The issues at stake in the current standards war between IASB and DNA2.0 could hardly be clearer or more fundamental. It would take very little effort for those of us in the biosecurity community to write an open letter that expresses our concerns. Moreover, industry and government would likely find such a letter extremely persuasive. Our opinions would, in short, matter.
We have an obligation to speak out. This is where we earn our pay.
Stephen M. Maurer is the director of the Information Technology and Homeland Security Project at the Goldman School of Public Policy and Boalt Law School, University of California at Berkeley.