Lately I’ve been describing myself as “having over a decade of experience in human genetics research,” which makes me feel rather old (I recognize that older people will scoff at this and younger people will smirk and nod). Nevertheless, it’s true: I started working in human genetics research at Duke University right after finishing my undergraduate degree, in the winter of 2005. In the summer of 2009 I moved to a genetics research group at the University of Washington, where I still work.
The developments I’ve witnessed in just this relatively short time demonstrate how quickly the field of genetic research has been changing. This is no coincidence, as one reason I was drawn to genetics was the promise that I wouldn’t be stuck doing the same thing my whole career. And it’s proven true thus far, because my job path has been partly shaped by different phases or waves of genotyping technology. Each is a way of looking at DNA – to tell which of the chemical bases A, C, T, and G exists at specific places in the human genome. I’ll walk you down memory lane, stopping at three signposts along the way to discuss these different technologies. But I’ll also start and end with a detour….
Detour 1: One summer during my undergraduate
One summer at UNC-Greensboro, where I was doing my undergrad, my genetics professor hired me for a small summer project. It was only a few hours a week and a little bit of money, but I was thrilled to have something to supplement my job at Panera’s. The task was to use a software program to help design bits of DNA that can be used to genotype single genetic variants. These bits of DNA are called primers, and each is typically ~20 to 30 DNA bases long, matching the sequence near the variant of interest. The primers bind to that nearby DNA so that many copies of it can be made, which makes the variant easier to measure. This whole process is called polymerase chain reaction, or PCR, and it basically launched modern biotechnology.
I was tasked with designing primers for a few dozen variants that my professor and his collaborators wanted to study in relation to ADHD. So I was going into genetic databases, finding the flanking sequences, and then plugging them into this primer design program to find the optimal bits of DNA to use. That was the whole summer project. Now, I haven’t done this type of work since, but my guess is that current bioinformatics tools would enable one person to do the whole project in an hour. Maybe even 30 minutes, and still have time to get a coffee.
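To give a flavor of what that software was automating, here is a toy sketch of the kinds of checks a primer-design program makes. The sequence is made up, and the thresholds are rough illustrations; real tools weigh many more factors, like secondary structure and primer-dimer risk.

```python
# Toy sketch of basic primer checks. The candidate sequence and the
# acceptance thresholds below are made up for illustration; real
# primer-design software considers much more.

def gc_content(seq):
    """Fraction of bases that are G or C."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def melting_temp(seq):
    """Wallace rule: Tm = 2*(A+T) + 4*(G+C).
    A crude estimate, really only meant for short oligos."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

def looks_reasonable(seq):
    """Crude filter: length ~20-30 bases, moderate GC, Tm in a rough range."""
    return (20 <= len(seq) <= 30
            and 0.40 <= gc_content(seq) <= 0.60
            and 55 <= melting_temp(seq) <= 75)

candidate = "AGCTTGCACGTAGGCTAACGTCAT"  # a made-up 24-base candidate primer
print(len(candidate), gc_content(candidate), melting_temp(candidate))
print(looks_reasonable(candidate))
```

Plugging in flanking sequences and scanning for candidates that pass checks like these, over and over, was essentially my summer.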
Phase 1: Single variants (Duke)
When I started working at Duke, things were pretty far along. We had PCR, and we had the Human Genome Project and thus a database of the complete human genome sequence. The projects I worked on initially were genotyping single variants at a time, via something called a TaqMan assay (“Taq” is a heat-stable type of the enzyme polymerase – yes, the same polymerase of PCR fame!). A single person working in the lab could push through a dozen or so TaqMan assays in a day, if they were wearing their headphones (with no music) just so other people wouldn’t bother them on the lab floor (I know for a fact this is done). This single variant approach was pretty standard at the time. Before I left Duke, however, this trickle of genetic data was starting to turn into a babbling stream.
Phase 2: Microarrays (Duke -> UW)
In the early 2000s, companies were starting to develop ways to multiplex these genotyping assays. Called microarrays, or DNA “chips,” these were small surfaces on which you could array hundreds of thousands (now millions) of genotyping experiments at once. One of the Duke projects I worked on ran a microarray experiment during my last year there. I remember that it was too much data to go through our normal database process, so our senior programmer had to manually force it in. All of a sudden there were 300,000 more variants than before. And of course then came the data cleaning, now required on a much larger scale. That’s what brought me to UW….
I came to UW to work on a new set of projects initiated by the National Institutes of Health to look at gene-environment interactions in a series of complex human diseases. Each of these projects was using microarray technology, so they needed a lot of manpower and brainpower (and Sarah power!) to do quality control and assurance on all that microarray genotyping data.
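To make “quality control” a little more concrete, here is a toy sketch of one basic QC step: flagging variants with a low call rate (too many samples with missing genotypes). The variant IDs, genotypes, and threshold are all made up; real QC pipelines check many more things, such as Hardy-Weinberg equilibrium, sample relatedness, and batch effects.

```python
# Toy genotyping QC sketch: filter variants by call rate.
# Genotypes coded as 0/1/2 copies of the alternate allele; None = missing.
# All data here is hypothetical.
genotypes = {
    "rs0001": [0, 1, 2, 1, 0, 1],           # fully called across 6 samples
    "rs0002": [0, None, None, 1, None, 2],  # 50% missing
}

def call_rate(calls):
    """Fraction of samples with a non-missing genotype for one variant."""
    return sum(g is not None for g in calls) / len(calls)

# A common (but study-specific) rule: drop variants called in <95% of samples.
passing = [v for v, calls in genotypes.items() if call_rate(calls) >= 0.95]
print(passing)  # only rs0001 survives this filter
```

Now multiply that kind of check by a million variants, thousands of samples, and a dozen failure modes, and you have a sense of the work.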
Phase 3: Sequencing (UW -> now)
While my work at UW is still primarily with microarray datasets, our center is working more and more with DNA sequencing data. Recall that microarray experiments look at a million or so pre-defined places in the genome. DNA sequencing, on the other hand, goes base by base to read almost every site. Even though sequencing has gotten much faster and cheaper in the past few years, it’s still too pricey to be the de facto approach for every research project. But give it a few years and it will likely have supplanted DNA microarrays.
Detour 2: Doby-Croc
When I first started dating my husband a few years back, there was some lore generated about what I did for a living. During a conversation I was not present for, my now husband told his uncle that I worked in genetics, and inevitably their conversation ended with the decision that I should make the “Doby-Croc.” Half Doberman Pinscher, half crocodile, slogan “the ultimate in homeland security” (don’t tell Trump!). Clearly they envisioned me tinkering away at a lab bench with a white coat and safety goggles, bioengineering the species mash-ups of tomorrow. (Had I been there, I would have headed off this misconception at the pass by clarifying that I work at a computer, in what otherwise looks like your typical office job.)
DNA technology isn’t quite there yet, though with CRISPR who knows – but that’s a story for another day!