Molecular Explorers
Conversations with three leading specialists from the Babraham Institute
George Francis Lee | 10 min read | Interview
In the years following the end of World War II, the UK was a tired island. Years as a major player in a global conflict, and the pivot of all production toward the war effort, had diminished the nation’s output of goods, clothing and, most importantly, food.
When the war ended, the country needed more effective ways of farming: fatter pigs, larger cows, greater yields. This is how the Babraham Institute – then named the Institute of Animal Physiology – came into existence. The country was hungry, and the scientists assembled in labs retrofitted into an 1830s stately home were going to feed it.
Over 70 years later, the Institute is best known for its work on aging, and its explorations into the molecular processes of how we all get older. Fitting, really. The Institute’s initial goals were shaped by the needs of a malnourished post-war Britain. Today, they are shaped by the needs of a global population living for longer than ever.
We visited the Babraham Institute and spoke with a number of its leaders – including its Director – on what makes the place tick.
Simon Andrews, Facility Head, Bioinformatics
The bioinformatics group combines computing and biology. We are there to help people deal with the massive amounts of data that modern experiments generate. Everyone these days is operating on genome-scale datasets – oftentimes datasets that go down to the resolution of single cells in organisms. And it just gets bigger and bigger and bigger. But without significant computational power, you can’t start properly interpreting the data. And that’s where we come in.
When I came to Babraham and met the guy who was running bioinformatics, I went home that night and said, “I’ve found the person with the best job in the world. And I want it.” Luckily, he offered me a job and I’ve been here 20 years. We work on all kinds of different research projects at once, and we get a big overview of the research that happens across the Babraham Research Campus.
We do both analysis (data analysis, statistical analysis, data processing) and development; after all, we have a lot of people doing innovative research and generating types of data that nobody has seen before. We also do a great deal of training; we are really focused on giving scientists the skills and tools they need to analyze their own data. We ran somewhere between 60 and 70 training courses last year teaching people practical skills.
Understanding the basic biology of aging is the exciting part for me. It’s a bit of a culture shock for people to come out of university and ask a question that nobody has an answer for. And that’s brilliant! Babraham as a research institute is a little unusual; people here ask key biological questions and are focused on making fundamental discoveries that sometimes take decades before their true impact is realized. At the same time, the coordination between all the areas is fantastic; it feels like everyone is pushing towards a common goal. You can easily find overlaps in work among colleagues that create inter-group collaborations.
A major challenge for us is the scale of operations you now need to understand biology. We’ve got imaging, flow cytometry, mass spectrometry for proteomics, sequencing for transcriptomics, and so on. And the biggest studies aim to integrate all these different strands – and bringing all that data together to deliver confidence in what you’re seeing is not easy.
One really positive development in biology over the last few years is the requirement to deposit all raw data into a public repository when you publish a large-scale study. Now, there’s a massive resource of data from people’s experiments that you can go in and reanalyze. We’ve had students run projects where we’ve been able to process hundreds of public datasets to look for effects that we’d have struggled to see from smaller sample numbers, and that would have been uneconomic to pursue if we’d had to generate the data ourselves. It’s this kind of data reuse that allows the field to progress more rapidly, and research can only be stronger for it.
Gavin Kelsey, Program Leader, Epigenetics
I’ve been at Babraham for 26 or so years – an exciting period because epigenetics has been revolutionized in the past three decades! When I started here, it was before the human genome had been sequenced. And so experiments inevitably focused on single genes (or a small number of genes). It was hard work. You would end up with a bit of DNA, and then you had to work out where it came from within the genome. Sometimes you were lucky, and you would hit something that was already in a database. Other times you had to work out where it was and do an awful lot of exhausting walking across chromosomes to get the context. Now these things are so much easier – it’s a completely different landscape. Knowing what is going on across the whole of the genome, although computationally challenging, does help us understand how it all works. So having such an excellent bioinformatics team here on hand, who have developed some really user-friendly tools, has been a real game-changer for us.
We have two themes in the program. One is to understand how epigenetic states are set up at some of the key early developmental time points. A really fascinating thing about epigenetics is that some of the information can be truly long-term and can provide an important memory of earlier events. When a cell becomes a nerve, muscle, or any other cell, part of that decision – and part of what is locking that state in place – will be an epigenetic mechanism. And so, epigenetics is very much tied up with lineage decisions. We’re keen to understand what is needed to make sure that the right information is set up properly.
The second area we focus on is the impact that metabolism has on epigenetic states and epigenetic stability. All of the epigenetic reactions that a cell needs, like methylation of DNA, depend upon metabolites in those cells – and many of these metabolites come originally from our diet. We’re very interested in how metabolites differ in the cytoplasm and in the nucleus – which is where it matters most for epigenetic reactions – and how diet can affect this balance.
We also want to use epigenetics as a way of identifying disease mechanisms. As soon as the human genome sequence was produced, there was a great deal of interest in using large genetic studies to identify genes associated with common disorders. The next level was epigenome-wide association studies, which, instead of looking for genetic differences, sought epigenetic differences to explain risk of common diseases and whether there could be environmental or lifestyle contributions.
Taken together, these focus areas make our research meaningful for understanding the mechanisms that maintain and support lifelong health, which feeds forward into the development of informed interventions to promote healthy aging and treat disease.
What’s always been appealing is the concentration of people in this area at the Institute. Very few places have such a rich cohort of passionate epigenetic researchers. And now, we’re bringing in different expertise that we haven’t had in the past – for example, chromatin biochemistry in Maria Christophorou’s lab and metabolism in Sophie Trefely’s lab – and it’s really wonderful to see how people are working together within the program and across the Institute as a whole.
Though the groups are diverse at Babraham, the leaders share one thing in common: we all have a commitment to wider engagement. I’ve thoroughly enjoyed taking exhibits on the epigenetic influence on aging to the Cambridge Festival, for example. And though we are playing an educational role, it’s a two-way street; sometimes you get asked unexpected questions that open up new ways of looking at things.
Simon Walker, Facility Head, Imaging
At the Imaging facility, we focus (no pun intended) on light microscopy, although we recently acquired our first electron microscope, and are rapidly expanding our capabilities in this area. Within light microscopy, fluorescence methods are a mainstay. People will be bringing samples here with some kind of fluorescence labeling – and we have various platforms that then can interrogate those samples to get the information they need. Within one room we have two confocal microscopes, as well as a system with capacity for multi-well plates so we can prepare samples and run experiments in parallel. Those plates sit inside an inverted microscope that takes pictures of the cells in each well.
At the moment we’re running an experiment looking at a fluorescent protein that we anticipate may form aggregates in the cells under certain conditions. Because it’s a confocal microscope, the team needs to take a number of images at a number of focal planes. Over the course of two days, the machine will take a stack of images, move to a different position, take another stack, and so forth.
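In software terms, the routine described above is a nested loop: for each stage position, capture one image per focal plane to build a z-stack. A minimal sketch of that logic, with `acquire_z_stacks` and the placeholder image labels being purely illustrative names (the article doesn’t describe the actual control software):

```python
def acquire_z_stacks(positions, z_planes):
    """For each stage position, capture one image per focal plane,
    yielding one stack of images (a 'z-stack') per position."""
    stacks = {}
    for pos in positions:
        # Move the stage to this well/position, then step through the
        # focal planes, capturing an image at each depth.
        stacks[pos] = [f"image@{pos}/z={z}" for z in z_planes]
    return stacks

# Example: three positions imaged across five focal planes
# yields 3 x 5 = 15 images in total.
run = acquire_z_stacks(positions=["A1", "A2", "B1"], z_planes=range(5))
total_images = sum(len(stack) for stack in run.values())
```

Repeating this cycle over a two-day run is what turns a single microscope into a parallel experiment across a whole multi-well plate.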
Generally, we facilitate other people’s work. And we’re becoming increasingly important to a number of companies based on the research campus who use our services, as well as academic institutions who don’t have some of the specialist equipment that we have. That’s not to say we don’t get involved in research; we operate at different levels. We may train someone to use a microscope and then have very little interaction with them beyond that initial training phase, or we may work with someone on a project on a weekly basis, interpreting data, processing samples, giving them a great deal of support.
I started here nearly 20 years ago now, which seems slightly bizarre – and the facility has changed enormously during that time. When I joined, it was basically one room with a couple of microscopes owned by the research groups. I was brought in to try and make the facilities more accessible, and we came to realize that the instruments we had back then weren’t really appropriate for a lot of the work the other groups wanted to do. Over the years, we’ve been expanding the range of technologies, growing the facility, and slowly adding people; it used to be just me, but now there are four of us covering different areas. My expertise is in general light microscopy, confocal microscopy, and live cell imaging; Hanneke Okkenhaug covers our high content imaging services and provides expertise with image analysis; Isabel San Martin Molina is developing our spatial imaging capabilities; and Kirsty MacLellan-Gibson is our electron microscopy specialist.
And we’re not just here to make pretty pictures! As pathologists will be all too aware, once you’ve got an image, you need to extract information from it.
Much of the technology is similar to 20 years ago, but it has become more sensitive – you get better resolution and you gain more capability. As the number of tricks grows, so does the number of potential applications – particularly when it comes to live cells. In the past, you’d have to blast samples with a huge amount of light to get a decent signal from them, which isn’t very good for live cells; today, we can use very gentle illumination.
Automation is another area that has changed dramatically, with systems now able to operate independently. Most recently, we’ve been working on adopting machine learning and artificial intelligence to help identify the relevant features within a given image – again, something that will be familiar to those transitioning into computational pathology.
Interested in how disease interacts with our world. Writing stories covering subjects like politics, society, and climate change.