I came across this and decided to reblog it as it aligns with many of my views on GMOs in our food supply.  But let ME be clear: I’m not a crazy hippy, though some of my best friends are.  I’m not a vegetarian, and I don’t think we need to adopt any particularly extreme diet (i.e. cutting out dairy, carbs, or animal protein) to save our health or the planet.  I do, however, think that we’re not short on evidence suggesting we should modify our agricultural methods to incorporate sustainable practices.

I don’t think that blanket vilification of GMOs is the way to go.  I do not advocate the model that corporations like Monsanto are using, and I think that globally we should not allow the patenting of the sequences of unmodified genes (irrespective of the species they were sequenced from).  But as a molecular biologist, I think there will be modifications that can be exploited safely, with great potential benefit to humanity and the planet. Frankly, genetic modification, done safely, needn’t be any more dangerous than selective breeding, and it warrants objective investigation without being tarnished by a smear campaign aimed at one or two biotechnology corporations whose practices we find reprehensible.

 A Liberal’s Defense of GMOs

Let me get a few things out of the way.

    I’m a crazy hippie.  I go to Burning Man every year.  I teach yoga.  I live in a co-op.  For the past two years I’ve been delivering organic vegetables for a local delivery service.  I’ve been eating vegetarian for years, and vegan for the past four months.

    I’m also fascinated by genetics.  I read every book that comes my way on evolutionary theory, population genetics, and mapping the genome.  I took several classes on the subject at the University of Pennsylvania.  All told, I have a pretty solid understanding of how genes work.

     And ultimately, I’m just not that scared of GMOs.

    Now don’t get me wrong.  I understand where my liberal friends are coming from.  I share the same desire for a safe and healthy food supply.  There’s a LOT that disturbs me about the state of food production and distribution in America.

    I think Monsanto is evil, that patenting seeds and suing farmers is unethical, and that some GMO crops (like Roundup Ready Soybeans) lend themselves to irresponsible pesticide use and cross-contamination.

    But I’m also not going to let my anti-corporate sentiments get in the way of a diverse and promising field of research.

    When genetic engineering is used to decrease pesticide use, to add nutrients to crops in malnourished countries, and otherwise improve the quality of our food products, then it’s a valuable tool that can contribute to a safe and healthy food supply.

    I want to address three points that are often brought up by anti-GMO advocates that are either simply untrue, or a lot more nuanced than we’ve been led to believe.

1. GMOs create more “unnatural” mutations than traditional breeding methods.

    Genetic manipulation is nothing new.  Humans have been breeding plants and animals for thousands of years.  Many of our staple crops (wheat, corn, soy), would not exist without human intervention.  The same goes for domesticated farm species.

    Whether we’re using genetic modification or selective breeding, we’re playing God either way.  But some people seem to think that selective breeding is “safer” — that it allows less opportunity for damaging mutations than genetic engineering does.  This couldn’t be more wrong.

    The entire process of evolution depends upon mutation.  Copying errors, chemicals, and UV radiation all change the DNA code in individual organisms.  Most of these mutations aren’t beneficial.  Some knock out necessary proteins.  Others add useless information.  And yet a percentage of these “errors” are helpful enough that they’re passed along to future generations and become the new normal.

    If there’s any danger with genetic engineering, it’s that we can be too precise in our manipulation.  We can ensure that each new generation of seeds contains the exact same DNA sequence, double-checked for errors, with mutations eliminated.  The “unnatural” process actually produces fewer mutations, not more.

2. GMOs contain animal DNA that has been “spliced” into plants.

    One of the most enduring myths about genetic engineering concerns a GM tomato which, as legend would have it, contained flounder genes spliced into tomato DNA.  While it’s true that Calgene experimented with a freeze-resistant tomato, they used a “synthesized … antifreeze gene based on the winter flounder gene” — not a cut-and-pasted copy of the gene itself.

    Those freeze-resistant tomatoes never made it to market, but a different version called the Flavr Savr did.  Tomatoes contain a protein called polygalacturonase (PG), which breaks down the pectin in the cell walls, causing the tomato to soften as it ripens.  To create a tomato that would ripen more slowly, Calgene took the gene that encodes for the PG protein and reversed it.  This backwards strand of DNA, known as an “antisense” gene, binds to the forward-running strand and cancels it out.  Without PG, the pectin (and therefore the tomato) breaks down more slowly.  The simplicity of the process is remarkable.  No toxic chemicals, no mysterious bits of DNA.  Just a simple tweak of the tomato’s own genetic code.
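If you’re curious what “reversing” a gene means in practice, the antisense idea can be sketched in a few lines of code.  The sequence below is invented purely for illustration; it is not the actual PG gene.

```python
# Toy illustration of the "antisense" idea behind the Flavr Savr tomato.
# The snippet of sequence here is made up -- it is NOT the real
# polygalacturonase (PG) gene.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement: the strand that base-pairs with seq."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

sense = "ATGGCTTCAGGA"              # hypothetical snippet of the PG coding strand
antisense = reverse_complement(sense)

print(antisense)                                # TCCTGAAGCCAT
print(reverse_complement(antisense) == sense)   # True -- they're perfect partners
```

Because the antisense transcript is the base-pairing partner of the sense mRNA, the two anneal into a duplex that can’t be translated, and PG production drops.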

    But hold on a minute.  What if they had used a gene from a fish in creating this tomato?  Would the tomato taste fishy?  Would you have to watch out for fish bones in your pasta sauce?  Not unless you’ve added anchovies.

    Genes are basically bits of computer code that are interchangeable from species to species.  When you isolate a tiny bit of gene, it doesn’t retain the essence of whichever species it came from.  You might have a bit of DNA that says simply, “Grow appendage X on the abdomen,” but doesn’t specify what kind of appendage.  If you put that code into a fly, it activates the part of DNA that grows a wing.  Put that same code into a mouse and it grows a foreleg.  It doesn’t make the mouse any more like a fly.

3. GMOs are radioactive, cause cancer, and are bad for the environment.

    This is a trickier question to answer, and I’ll be the first to admit that we need more research into the health effects of GM products.  But I’m going to bet that the answer turns out to be something like this: some GMOs are safe, and others are not.  Lumping all GMOs into the same category is like lumping all fertilizers or all pesticides into the same category.  Genetic changes are only as dangerous as the proteins they encode, just as in any plant.  Consider how many “natural” plants have genes that produce poisons and toxins.

    In the case of the Flavr Savr tomato, I wouldn’t be too worried.  It simply blocks a protein that the tomato itself produces.  In the case of herbicide-resistant soybeans, I’d want to know more.  What kind of herbicide is being sprayed on the plants?  Are traces of the herbicide still found in the food when it reaches our plate?

    While I voted for the labeling act that was on the California ballot last year, a simple “contains GMOs” label would be of little use to me.  I want to know what specifically about the organism was modified so I can reach my own conclusions.

    Personally, I think the GMO scare is a distraction from far more important issues going on in the food industry:

    — A factory-farming system that’s abusive to the animals we raise and results in unnatural, highly-processed meats

    — An obesity epidemic resulting from subsidized corn crops and unchecked fast food marketing

    — A glut of “natural and artificial” flavorings, sweeteners, and colors

    — Lack of access to quality produce in urban “food deserts”

    If we really want to do something about public health issues, then these are the problems we should be focusing on.  I’m not going to object to something that could have a positive effect on the world’s food supply because there’s a chance that something I eat might give me cancer ten or twenty years down the line.

    That risk already exists.  I’m just as likely to get cancer from the un-modified, but highly-processed foods that are already in the market.

    In the meantime, I’m going to support those GM efforts that might actually do some good for the world.  There’s the Golden Rice Project, which fortifies rice in developing countries so as to combat micronutrient deficiencies.  There have been attempts to genetically modify trees both to fight pollution and to decrease fossil fuel dependency.

    And then there’s the banana vaccine for Hepatitis B, which, due to regulatory restrictions, may be reworked into a non-edible vaccine in the tobacco plant.

    I don’t know about you guys, but these sound like pretty liberal objectives to me.

    GM crops, combined with sustainable and organic agriculture, might do more to advance our cause than any other scientific advancement of the modern era.

    By all means, let’s March Against Monsanto.  But then let’s put genetic engineering into the hands of forward-thinking, progressive scientists so we can start a real agricultural revolution.

This article is reprinted from Saul Of-Hearts’ blog.


I came across this post on The Node with some fantastic imaging from Zelzer and colleagues at the Weizmann Institute of the cells that connect tendon to bone.  The paper sheds light on the developmental origins of the cells that form the eminences, the units that connect tendon to bone. As it turns out, these do not originate from cartilage cells (chondrocytes), as previously thought, but instead from a separate lineage specified slightly later in development.  These progenitor cells can be identified by the genes they express.  In the image below, the arrows point to these cells, shown in green [Sox9-positive (green), Col2a1-negative (red)].

Blitz et al. 2013 Development 10.1242/dev.093906

It’s job search time. I’ve been looking into both academic and non-academic (read: academic-adjacent) posts.  So far I’ve even had a few interviews for posts that would have taken me away from academic research, well, the front lines anyway. What can I say? I’ve been in some manner of school, either as a student or an employee, since I was five. A thirty-five-year addiction is tough to break. Even while I’ve been courting the grey side (remember…adjacent), I must confess to keeping a steady eye on academic posts.

The paths through academia are many and varied.  As the adage goes, there are many ways to skin that (academic) cat. A common factor in whatever path you take through academia is the large amount of work it will take to secure and maintain the position (usually for less money than you think you deserve). Work-wise, for every bit that makes it into print, there are days, months, even years of optimisation experiments, failed attempts and avenues of investigation that never see the light of day (read: exist as non-published research that hasn’t been peer reviewed and isn’t in the public domain for scrutiny).  Watching the first couple of years of my Ph.D. lab work get reduced to a supplementary figure and two or three lines of text in the expanded methods section of a publication wasn’t the easiest pill to swallow. Cambridge showed me that the celebratory haze around a Nature or Cell paper fades quickly, and those impactful publications need to keep happening throughout one’s career. The money issue, well, that’s a whole other story.

For many, a tremendous element of luck is involved. Perhaps the first optimisation just works and gives you what you need to make reproducible observations. Some people are just naturals and quickly develop the Jedi-like skills to get things to work. Others achieve results through a lot of repetition.  This isn’t a luxury everyone is afforded, though, as post-doc grants don’t last forever and, as I was repeatedly reminded, don’t often provide for more than salary.

Working in an embryo microinjection lab in Cambridge let me witness this first-hand. For me it was a steep learning curve, and all things considered, my work was made easier by being able to inject when the cells were nice and big, comparatively.  Some of my colleagues astounded me by injecting several stages later, when the cells were MUCH smaller.  Some colleagues opted to focus on other aspects of their research, cosying up to the Jedis when it came to the microinjection work.  Now I sometimes work with single stem cells. IT’S FIDDLY WORK!  The point is that, just like in every other profession, there are naturals in science too, who work their asses off, publish well, and become instant academics.  Sometimes these naturals are kept on as long-term research associates and lab managers because their gifts are too great to let go.  Others move on to start their own groups here, there and everywhere, and the academic big wheel keeps on turning.

Also making that wheel turn are just as many (if not more) very good scientists who work just as hard (if not harder), but luck, however they wish to personify it, fails to smile on them. They produce, but maybe they’re secondary or tertiary authors on lower-impact publications, so it takes a few more of them.  You can play the quantity v. quality game in academia too, but limited funding is forever forcing institutions to demand more high-profile publications within their researchers’ bibliographies.

Then there’s teaching and scholarship. My training is a mishmash of the Canadian, American and British academic frameworks. There are a lot of similarities, but just as with the cultures in general, there are also some reasonably fundamental differences.  The tenure-track concept common in North American institutions doesn’t function the same way in British unis. And with universities increasing the emphasis on their research programmes, if you’re not publishing well and bringing in grant funding, you don’t have tenure to fall back on and could find yourself out of work after a seemingly good career.

You’re usually on one of three paths.  You can be a teaching-only lecturer: your role is primarily teaching, possibly with some other commitments to the university through committee involvement. You can also be on a teaching-scholarship path, where you also primarily teach but take on administrative and strategic responsibilities that effectively help run and direct the department’s teaching programmes.  The final path is a teaching-research path, where you fulfil teaching and departmental obligations but must also maintain a viable research programme (read: a good track record of publication and funding): the triple-threat of academic science, if you will.

The post I’m currently chewing on appears to be rooted in the teaching-scholarship path, but I spent a day speaking with various people within the department, trying to get their take on the position and perhaps gain an insight I couldn’t get on my own. I was repeatedly advised to keep an eye out for the possibility of moving back into the research pathway.  This is an especially important consideration in a university that has a mandate from its principal to raise the profile of its research. And let’s just say I get the feeling that the only real notion of tenure in the British system comes from wanting to keep the ratio of triple-threats to others on the higher side.

So why not just go after an academic-research position? While I’ve got a couple of projects that are about to bear published fruit, they’re not there yet, so I wouldn’t be the most competitive candidate for early-career fellowships were my application to go in now.  Then there’s the fact that the ideas I do have in the works for grant applications still need some preliminary investigation, another requirement to be competitive for research funding in this day and age.  Good ideas and a track record aren’t enough; you also need to demonstrate that your line of investigation is both feasible and, to some extent, going to succeed. One person I worked for in the past put it as needing to do virtually all of the work you were proposing and submit it as preliminary data, just so you could get the funding necessary to dot the i’s, cross the t’s and send it out the door while preparing for the next one. Cat skins, cat skins, everywhere.

However, there is yet another factor to consider with this post I’m pondering.  The teaching is at Nanchang University in China. The lecturing would be in English, but the teaching takes place over four two-week(ish) stints, jetting back to London in between.  A bit more of a loaded decision.  It’d be an opportunity to immerse myself in a culture I’ve only experienced through the various immigrant communities I’ve come into contact with in Canada, the US and the UK. Maybe I’ll pick up a smattering of the language.  All pluses so far.

The post involves a strategic role: developing the burgeoning joint programme between QMUL and the pre-med programme at Nanchang University. Plus.

One thing I’ve started to think about is how I may have to change my teaching tactics for a predominantly Chinese student group. Hmmm. I know this programme has been modelled on and modified from a similar joint programme with another Chinese uni, and there have been lessons learned and improvements made.  I’ve asked a few Chinese colleagues here at QM and elsewhere if they could generalise the student experience in China versus the one here or in another English-speaking country. One answer I didn’t expect was, “Chinese students tend to be quieter [and] need to be inspired.”

I write this with great caution, not wanting to earn the label of gweilo. My secondary school in Canada became an ESL haven that saw its Cantonese-speaking student population explode in the years leading up to 1997.  Being one of the few white ghosts amongst a group of Chinese kids wasn’t uncommon.  From a friend, the term was usually meant in jest, but from a stranger it tended towards its more pejorative meaning. At any rate, generalising that Chinese students are quiet and withdrawn is just the sort of thing that could get a Caucasian person in hot water, but that’s what he said. The hot water bit comes in admitting that these are descriptors that would apply to some of the Chinese postgraduate students (studying in English-speaking countries) whom I’ve met in my career. Not wanting to generalise, I always assumed that the lack of gregariousness came from difficulties communicating in English.  OK, one opinion, one person, but still…interesting.

But can I give up the rush of discovery that comes with research?  That is the question. Another thought instilled in me by the various faculty members I’ve discussed the position with: distancing myself from research too much by taking on such a teaching-heavy role brings other consequences with it. Past publications, even impactful ones, have a window of (generally) five years in which they are considered constructive to your publication record.  After that, you’re not considered active if you haven’t produced more.  Makes sense, as competition for positions and funding is fierce.  So leaving the lab behind risks not being able to make it back, even without having left academia, as by that point there will be plenty of others who have been productive during my (potential) downtime.

And finally, there’s this.  I’ll work with researchers, and probably even have a say in the recruitment of future researchers, but I may not be one myself. This last bit is a concept I’ve been wrestling with for some time. And from what I’m led to believe, this process, this decision, is not uncommon amongst academics early on in the skinning process.

So it’s decision time.  Here kitty kitty.

Brains are made up of many different types of cells that arise from common stem cells called neuroblast progenitors. These progenitors produce the number and diversity of cell types necessary to form a central nervous system. As neuroblasts divide to make more cells, they can specify different types of cells with each round of division, depending upon the stage of development.

Lineage development from larva to adult. Cartoon diagram highlighting a single neural lineage in the larval brain (left) and in the adult brain (right). Note the decreased contribution of primary neurons in the adult brain compared to the larval brain. GMC: ganglion mother cell. Spindler and Hartenstein, Dev Genes Evol. 2010 220: 1–10. doi: 10.1007/s00427-010-0323-7

In the fruit fly, most of the cells in the adult central nervous system have been mapped back to their embryonic neuroblast progenitor, of which there are about 100 or so. As such, the fruit fly brain is a popular model for understanding neural circuitry and development. One of the better-studied neuroblast lineages, referred to as NB7-1, gives rise to seven progenitor cells, the first five of which produce lineages of motorneurons that will wire the nervous system to the muscles. The remaining two progenitors give rise to interneurons, the cells that connect neurons within the nervous system.


Though not innervated by NB7-1 motorneurons, these are  developing flight muscles of the fruit fly (visualised in GREEN), which are remodeled from larval muscle templates by the fusion of adult muscle precursors (visualised in BLUE). The motor neurons (visualised in RED) of the larval muscles are re-specified to innervate the flight muscles. These muscles and motor neurons are a unique example of how adult structures are refashioned from embryonic counterparts during metamorphosis in animals. Imaged by: S. Roy (Institute of Molecular and Cell Biology, Singapore; Cover image, EMBO J, 08.08.2007)

Each of the five motorneuron-specifying neuroblasts gives rise to a different lineage of cells. The lineages specified by the first two progenitors, called U1 and U2, are governed by a gene called hunchback (hb), which is normally only active in the first two progenitors. The hb gene codes for a protein that regulates the activity of other genes, called a transcription factor. Transcription factors work by recognising DNA sequences in a gene’s promoter, the bit immediately preceding the coding sequence of a gene. Once bound to the promoter, they summon the molecular machinery (or are sometimes already pre-assembled with it) necessary to decode that gene into a template for making a protein. In some cases, as with hb, transcription factors (the proteins) bind to their own gene promoters. This is usually done to ensure that a lot of that particular transcription factor is produced, as it may be necessary to activate a great number of downstream genes. In fact, some transcription factors are responsible for turning on whole cascades of gene activity, in some cases altering the identity of a lineage of cells.
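As a rough sketch of that “recognising DNA sequences” step, here’s a toy motif scan. Both the promoter sequence and the TATA-box-like motif below are invented for illustration; real transcription factors tolerate fuzzier matches, usually modelled with position-weight matrices rather than exact strings.

```python
# Toy sketch of transcription-factor binding-site recognition.
# Sequences are hypothetical; real binding is fuzzier than exact matching.

def find_binding_sites(promoter: str, motif: str) -> list[int]:
    """Return every position where the motif occurs exactly in the promoter."""
    return [i for i in range(len(promoter) - len(motif) + 1)
            if promoter[i:i + len(motif)] == motif]

promoter = "GGCTATAAAGGCCTATAAAGG"   # made-up promoter sequence
motif = "TATAAA"                     # TATA-box-like recognition sequence

print(find_binding_sites(promoter, motif))  # [3, 13]
```

A transcription factor that finds its motif at these positions would then recruit (or arrive pre-assembled with) the machinery that reads the downstream gene.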

In fact, if hb is experimentally turned on in any of the first five progenitors, it will turn on the same U1/U2 motorneuron identity in these cells. Curiously, though, hb’s effect is limited to those cells, as the remaining two (interneuron-specifying) progenitors and their resulting neurons do not respond to the hb signal. When a sub-group of cells is capable of responding to an inductive signal like this, it is said to be competent.

Chris Doe’s group at the Howard Hughes Medical Institute utilises this predictable progression of neuronal specification to understand how competence is regulated in cells. As the identity of each cell has been mapped from the embryo through larval stages and on to the adult, NB7-1 gives us a system to understand how cellular identity is fundamentally achieved at the genetic level. So the next time someone flips off fly research within earshot, enlighten them.

So if hb‘s job is to turn on the genes involved in the U1/U2 motorneuron program, why can the first five cells respond, but not the other two? Remember that DNA is very highly packaged and organised within the cell. There are a number of ways the cell can organise a specific region of DNA on a chromosome to ensure it is ready for activity when necessary. One of these mechanisms involves positioning different regions of chromosomes in different parts of the nucleus. This filing system is not entirely different from the one most first-year university students use to determine whether clothes are still wearable (I’ll speak for the men here):  the pile of clothes on the floor in the middle of the room (though subject to a smell test) is fair game. The clothes at the edges of the room, still on their shelves and hung up in the closet, like that sweater your lovely Aunt [so-and-so] knitted you for Christmas last year, are out of the way and generally not (read: never, in the case of the sweater) used. In a similar way, by tethering regions of a chromosome to the edge of the nucleus, to a structure called the nuclear lamina, the cell renders them effectively inaccessible, no matter what transcription factor proteins are available to decode those regions.

Chris Doe’s group recently published findings showing that in the first two progenitors, the hb gene is physically positioned in the interior of the nucleus, a region that promotes its activity. In the 3rd, 4th and 5th progenitors, the gene is silent but remains poised in the interior (i.e. it failed the smell test; it remains on the dorm floor) and responsive to hb induction. After the 5th division, hb gets repositioned to the edge of the nucleus (right next to the sweater), where it is silenced. This positioning is inherited by all future cells in that neuroblast’s lineage, ensuring that the hb protein is no longer produced in these cells. Experimentally interfering with this repositioning enables more cells to respond to the hb signal, resulting in the production of a greater number of U1/U2 motorneurons. This adds to the growing body of examples of genes that are regulated in a developmentally relevant and heritable manner by tethering to the nuclear lamina. It also provides a mechanistic framework for understanding developmental competence at the molecular level.

My sister-in-law just sent me this. This objet d’[food]art is the work of my nephew Marcus. He’s 13.

My nephews have given me a few proud uncle moments already. The first of these involved them singing the lyrics to “Police on My Back”, without prompting, on a drive back from the Toronto Science Centre when they were six and eight. I can’t take credit for that or anything; it was their parents’ influence. In fact, I can’t really take credit for this either, but I figured I’d share. It remains a proud (biologist) uncle moment, nonetheless.

Rice Crispy Cell, by Marcus Paule-Martins

Rice Crispy Cell, by Marcus Paule-Martins

We should have this whole biological ball of wax figured out in about 100 years or so (that bit is generally always implied in these sorts of assertions). At least that was a brief point that Professor Adrian Bird made at the start of his lecture at the Royal Society before accepting the RSGSK Prize last Tuesday night. It’s an intriguing prospect, but one that largely fell out of the scope of his talk. That said, if biologists do find themselves out of work a century from now, the blame can be pointed squarely at the likes of Professor Bird. His research has contributed a significant amount to our understanding of one of the mechanisms involved in silencing the activity of genes across the genome.  In the process, he worked out the cause of a severe autism-spectrum disorder affecting one in 10,000 female births and developed a mouse model to help understand it, and potentially reverse it.

Twelve years ago, the first draft sequence of the human genome was published, and we are only just scratching the surface of understanding all the information encoded within it. The genome’s publication and the tools born of the genomic era changed the way we approached molecular biology; we turned to more targeted approaches. However, it quickly became apparent that there was a lot more information there and a lot fewer genes than we anticipated. The 21,000 (or so) genes that code for proteins proved to be scarcely more numerous than those of animals we used to stare down our supposedly evolved noses at (read: flatworms).  These genes, while central as blueprints for the various parts that make up the cell, are only 1% of the story.

The other 99% is composed of swathes of genetic information strewn throughout the genome that carry instructions on how and when to use the 1%.  It’s a shame economics couldn’t take a page from this arrangement, right? Consortium projects like ENCODE have significantly furthered our understanding of how some of the rest of this genomic DNA works, but large gaps remain in our understanding of how all of this information is organised and utilised inside cells.

If you could take all of the DNA inside a single human cell, strip it of its protein packaging and arrange the bits from the 23 pairs of chromosomes end-to-end, you would have something in the neighbourhood of two metres in length. All of that fits into a cell nucleus with a diameter on the order of tens of microns (a micron is one-millionth of a metre). Needless to say, one incredible folding act needs to take place to get it all in there. DNA wraps around spindles of proteins called histones, forming the repeating unit of DNA architecture that packages those two-ish metres of double helix into chromatin. This nucleosomal structure coils back on itself, much the way a phone cord will if you start to twist it, packing all of that chromatin into the nucleus rather tightly. The big question, then, is how the cell accesses the right gene at the right time, because the process of turning on genes (aka transcription) requires that they are physically read, and therefore accessible amongst all of that coiling.
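The two-metre figure checks out on the back of an envelope, using rough textbook numbers: about 6.4 billion base pairs across both copies of the 23 chromosomes, each base pair adding about 0.34 nm of length.

```python
# Back-of-the-envelope check of the "two metres in a 10-micron nucleus" figure.
# Numbers are rough textbook values, not measurements.

BASE_PAIRS = 6.4e9        # ~3.2 billion bp per haploid set, x2 for both copies
RISE_PER_BP_NM = 0.34     # axial rise of B-form DNA per base pair, in nm

length_m = BASE_PAIRS * RISE_PER_BP_NM * 1e-9   # convert nm -> m
print(f"Total DNA length: {length_m:.1f} m")    # Total DNA length: 2.2 m

nucleus_diameter_m = 10e-6                      # ~10 microns
compaction = length_m / nucleus_diameter_m
print(f"Linear compaction needed: ~{compaction:,.0f}-fold")  # ~217,600-fold
```

So the chromatin fibre has to be folded down by a factor of a couple of hundred thousand in linear terms, which is the incredible folding act the nucleosome and its higher-order coiling perform.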

Part of this is explained by a field called epigenetics, which really took off in the post-genome era. Its origins go back at least as far as the 1940s and ’50s, when scientists like Barbara McClintock were discovering that the story of how genes work didn’t stop at the DNA sequence itself.  She went on to win a Nobel Prize in 1983 for her work on mobile DNA elements that regulate gene expression, even though at the time her work was regarded by some as an eccentricity of maize genetics. But that’s a whole other post!

Colour variation in maize is regulated by mobile DNA elements in their genome called transposons.

The part of the DNA sequence that immediately precedes that of a gene is called the promoter.  Think of it as a staging ground for the molecular machines that read the genes. In short, the promoter promotes gene activity, so some or all of the following information can be found there:

  • The When – During what developmental stage should gene-X be active; embryo? child? adult?
  • The Where – In what ‘type’ of cell should gene-X be active; muscle? bone? brain?
  • and The How Much – Is a lot or only a little of gene-product-X needed?

This is nowhere near the whole story at the sequence level, but for now let’s assume it is, as I’m about to add a layer of complexity to it. The prefix epi- in epigenetics comes from Greek, meaning over or on top of, as it represents a layer of genetic regulation that sits on top of that imparted by the DNA sequence itself. Put another way, epigenetics is like a very elaborate filing system for genetic information in a cell.

Think of it this way: if all cells contain essentially the same set of genetic blueprints, then there have to be ways to mark the genes pertinent to, say, a neuron’s function as active, while leaving the others, which could be either irrelevant or potentially detrimental to neuronal function, marked as silent. Epigenetics can be thought of as the collective set of molecular signposts that index an epigenome, ensuring that only the right genes are available to that cell for use when and where they are called upon.

There are principally two ways that these marks work. One set involves marking the histones, the proteins that package DNA into chromatin. These are chemically modified in a number of ways, giving rise to a rich code of information that marks genes: where their promoters start, where their coding sequences start and end, and what degree of readiness for transcription genes should adopt in that particular cell type. How this code is read by the cell, and how the cell organises this information in a functional way, are still very actively researched topics.

The other type involves the direct chemical modification of DNA itself, called DNA methylation, which is generally associated with gene silencing. It was this modification, and the mechanisms the cell uses to detect it, that Professor Bird's group works on. Certain DNA sequences can be chemically modified by the addition of methyl groups, and these tend to accumulate near the start of gene promoters. As is often the case in biology, when molecules get chemically modified, there's usually another molecule that is able to read, or recognise, the modification. This, in essence, is how biological information flows in the cell: one molecule is modified, another binds to it, and that binding effects an outcome. In this case, methylating the DNA in promoters makes them a target for a protein called MeCP2, which leads to a cascade of events that results in the silencing of that gene. Mutations in MeCP2, as it turns out, are the major cause of Rett Syndrome. During the lecture, Professor Bird showed encouraging results that Rett Syndrome symptoms could be reversed in animal models within four weeks of gene therapy treatment (and yes, that's the same mouse).
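That modify-bind-outcome flow can be caricatured in a few lines of code. Again, a toy sketch only (the functions and the dictionary are invented, and real silencing involves many more players than this):

```python
# Toy sketch of biological information flow:
# one molecule is modified, another recognises the modification,
# and that binding effects an outcome (here, gene silencing).

def methylate(promoter):
    """Chemically mark a promoter (the modification step)."""
    promoter["methylated"] = True

def mecp2_binds(promoter):
    """MeCP2 recognises methylated DNA (the reading step)."""
    return promoter.get("methylated", False)

def gene_state(promoter):
    """Binding triggers a cascade that silences the gene (the outcome)."""
    return "silent" if mecp2_binds(promoter) else "active"

gene = {"name": "gene_x", "methylated": False}
print(gene_state(gene))  # active
methylate(gene)
print(gene_state(gene))  # silent
```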

I've been a bad blogger…or, depending on how you look at it, maybe I've just been a common blogger, given that apparently 95% of blogs are essentially abandoned as internet flotsam. What can I say? Something something not-enough-time-in-a-day something something. [Shuffles feet.] Tsk. [Entitled self-pity moment over.] That's the extent of my 'I'm going to try harder' spiel, I promise.

Last week my partner got us tickets to a talk by Sean Carroll at the Royal Institution of Great Britain. This was a treat on many levels, as I'd never been to the RI before. For those who don't know, the RI is an "Institution for diffusing the knowledge, and facilitating the general introduction, of useful mechanical inventions and improvements; and for teaching, by courses of philosophical lectures and experiments, the application of science to the common purposes of life", founded by, among others, Joseph Banks (then president of the Royal Society) and Henry Cavendish in 1799.

The Royal Institution of Great Britain, Painting by Thomas Hosmer Shepherd (1793-1864)

The RIGB c. 1838, just “a medium sized townhouse in Mayfair”, apparently.

Amongst the most well-known work to have taken place under the RI's roof was that of Michael Faraday, which led to the discovery of electromagnetic induction. And in the very same lecture theatre where Faraday made his discoveries, Sean Carroll delighted a (nearly) full house with the story of the Large Hadron Collider (LHC) and the search for the Higgs Boson. Oh, and did I mention? He's got a new book out too.

The point of all of this wasn't so much to talk about elementary (sub-atomic) particles or a really large, really expensive bit of scientific kit. Truthfully, what impressed me the most about the whole talk was the talk itself. Given the venue and a quick Google search on Sean Carroll, I half-expected to be blinded with bosons (or other matters of theoretical physics, which is what he actually researches). However, Carroll's talk, though factually dense, was still light, funny, and chock-full of pop-culture references—not to mention a talk-within-a-talk (eat your heart out, Hamlet!). Carroll, concerned that he was mentioning an awful lot of male names while discussing the major players who have contributed to our current understanding of the Higgs Boson, gave a micro-talk about how the numbers of women in academic science are finally starting to come up, despite the persistence of male gender bias in recruitment. (Unfortunately I wasn't able to find all of the data he showed on the subject, specifically those supporting the 'finally starting to come up' point he made…any help here is appreciated.)

I was impressed with Sean Carroll, not only as a scientist, but as a communicator. You could argue that the style of his talk had more to do with his audience (i.e. not just physicists). It may be that if Carroll had been presenting his latest and greatest to a room full of The Physicoglitterati, he might not have invoked ICP's "Miracles" as an example of how questions about physical phenomena have pervaded pop culture. Maybe. The truth is, science needn't (and shouldn't) be spoken of in so esoteric a manner as to alienate an audience member, irrespective of their background. As scientists, our ability to communicate our progress, whether to colleagues or to the public, is one of the most important skills we can develop.

That said, in this day and age, even the most dedicated scientist gets exposed to pop culture. Take Stephen Hawking appearing with Orbital at the closing ceremony of the Paralympic Games in London 2012, for example. Including snippets of life in your work is part of what makes us human, I think.


Next public lecture:  tomorrow, Adrian Bird at the Royal Society, speaking about Genetics, epigenetics and disease.

I caught this post on The Node, the blog/community site run by Development (The Company of Biologists). John Gurdon and Helen Blau gave a talk, Growing a Human Body Part, in London last week at the Royal Institution.

Growing a Human Body Part

I’m sorry I missed it.  It sounds like an interesting talk.  Apparently, the question was raised:

Suppose a six-month-old baby dies in an accident. Skin cells of the baby are saved and frozen. Assume that the parents can’t have any more children of their own, and that there are no technological barriers to human cloning. Would you be in support of the parents using the skin cells of their dead child to generate a “twin”?

An interesting dilemma, and a question that's currently being chewed on by ethicists. The issue of not raising your own children is a big deal for some. There is also the fact that a clone, or a "twin", no matter how you dress it up, is not a copy of the person who died. Any expectations to that end, or lingering 'what ifs' the parents may carry around with them, could have repercussions for the twin. Interestingly, though, public opinion is considerably more in favour of this sort of scenario than science is.

Growing up in a university biology programme, things had a certain order: we biologists kept to our corner of the sandbox, playing only occasionally with the chemists and even less with the physicists. Evenings were another matter, but that falls within the remit of other sites on these here internets. This pedagogically imposed segregation may have been a necessary evil to ensure that burgeoning scientists remained focused while learning the fundamentals of their respective disciplines. And while I'm sure we could argue the pros and cons of this approach in a teaching-then-and-now sense, I can attest to the fact that it has created cohort after cohort of scientists who still adhere to these artificial boundaries, often to the detriment of progress.

So people eventually start to think next-gen, realising that working together helps, and not just for technological progression. INTERDISCIPLINARY becomes a buzzword and suddenly university departments start springing up all over the place; the bastard children of tawdry scientific consorts between pairs of specialties: Bio-Engineers, Molecular-Anthropologists, Mechano-Compu-Socio-Biologists—oh my! And yes, I know those were all bio-heavy. But come on, this is real science I'm talking about here! [Oh how we still play with each other!] Most major granting bodies have special categories for partnerships between biology and other facets of science. There I go again. It's an artificial distinction.

Think of it this way: it's all about scale. Physics generally governs the realms that study the interactions of the individual components of matter and energy at the subatomic level. Astrophysicists try to understand how these same principles apply across the scales that describe the cosmos. As subatomic particles associate to generate atoms of the various elements, we have chemistry. Atoms interact with each other and with energy, and organise into complex structures. After a sufficient amount of certain types of complexity, you enter the realm of living things: biology. At the moment, I find myself at the nexus of mechanics (physics and engineering) and biology, or mechanobiology. More specifically, mechanobiology deals with how living systems respond to mechanical forces.

I remember sitting in a talk at Wayne State University, given by Neal Pellis back when he still worked for NASA, about biological experiments being carried out in zero gravity by NASA astronauts, and thinking: if embryos don't have gravity, how do they orient themselves; head vs. feet, front vs. back? The merits of the question itself are less important, but it was probably one of the first times I can remember trying to reconcile biology in the larger context of physics. [SCIENCE BOUNDARY KLAXON!] Positional information in embryonic development is crucial; it is how embryos can start to build themselves from a single cell. They form a basic pattern, often based on spatial cues from their environment (in our case, a uterus), and build from that basic pattern. Through migration and interactions with other layers of cells and tissues, cells begin to specify, and axes form; front becomes different from back. But there it is: migration. Cells move. Cells don't just react to forces; cells generate forces. [SCIENCE BOUNDARY KLAXON!] Dammit, wish I'd listened more during physics!

My brain began to dole out interdisciplinary olive branches in earnest when I was in Cambridge, as our group often waxed biophysical, trying to understand how physical forces impact developing embryos. A friend of mine worked on understanding the rhythm of fluid flows inside a fertilised egg. It turns out that these rhythms are established from the moment of fertilisation: sperm entry into the egg triggers calcium fluctuations that cause contractions in a network of proteins, forcing the fluid in the cell, the cytoplasm, to flow in sync with the calcium spikes. Moreover, these flows can serve as good predictors of embryo health for mammalian IVF embryos.

But it wasn't until I came to QM that I started to address some of these questions myself. I am, at heart, a developmental biologist, so one of the questions I was curious about was how stem cells respond to mechanical forces such as gravity or the pull from a neighbouring cell. Another was at what point those mechanical signals start to play a role in the journey from stem cell to, for example, bone, the chief load-bearing tissue in the body. This is a particularly important question when you consider it in the context of regenerative medicine, where the goal is ultimately to start with a patient's cell, reprogram it into an embryonic-like state, and use that as a therapeutic base to treat disease. If patients donate their own cells, there are no issues with rejection. Currently, though we have the technology to reprogram adult cells into a more embryonic state, theoretically making it possible to create every cell type in the body, we lack the fine control necessary to coax stem cells into certain fates; if we have millions of cells in a petri dish, for example, only a relatively small fraction do what we tell them to do, which raises all sorts of flags about their safety. This is where understanding how cells respond to all sorts of signals, whether mechanical or chemical, becomes important.

Ultimately, what defines one cell type from another comes down to its function, usually dictated by the sorts of genes that are turned on, or expressed, in that cell. Genes live in the nucleus of the cell, so the general path of information starts with an external (extracellular) signal that is transmitted (transduced) into the cell, resulting in some sort of change in what the cell is doing genetically, in the nucleus. For decades, a lot of attention has been directed at understanding how this process works biochemically, with molecules binding receptors on the external side of the cell membrane and triggering signalling cascades that result in a change in the pattern of genes a cell expresses. As it turns out, mechanical forces may be processed in a similar manner: originating on the outside of the cell and being transduced through the cytoplasm to the nucleus, altering gene expression. This was the subject matter of a recent review that I had the pleasure of co-authoring.
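The receive-relay-respond chain described above lends itself to a pipeline caricature. The following is a toy sketch only; the function names and messages are invented, and the point is simply that chemical and mechanical signals may travel analogous routes from membrane to nucleus:

```python
# Toy sketch of signal transduction as a pipeline:
# an extracellular signal is received at the membrane, relayed
# through the cytoplasm, and changes gene expression in the nucleus.
# Steps and names are invented for illustration only.

def receive(signal):
    # A receptor at the membrane detects the signal, whether it is
    # chemical ("ligand") or mechanical ("stretch").
    return f"receptor activated by {signal}"

def transduce(event):
    # A relay (a biochemical cascade, or a cytoskeleton-to-nucleus
    # linkage in the mechanical case) carries the event inward.
    return f"nucleus notified: {event}"

def respond(message):
    # The nucleus alters the pattern of genes the cell expresses.
    return f"gene expression changed ({message})"

for signal in ["ligand", "stretch"]:
    print(respond(transduce(receive(signal))))
```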

Annual Review of Biomedical Engineering 14: Mechanical Regulation of Nuclear Structure and Function.

And as a bit of shameless self-promotion, what follows are excerpts from the historical section, plus one of my figures of the nucleus. Anyone who'd like a full reprint, please contact me.

Conceptually, the idea that development and differentiation are shaped by extrinsic (mechanical) forces is ancient. Aristotle thought that embryonic development was in part directed by the locomotive component of the soul. This so-called Aristotelian soul differed from the more traditional or spiritual concept of a soul in that it is generated through the (physical) mixing of male and female semen (9), producing a teleological driving force that gradually gestates that individual organism to its final form (10). Teleological reasoning such as this persisted for millennia. However, over time, the notion of the Aristotelian soul as a driving force was either replaced by a divine, perfecting force (11) or obviated altogether by the notion that organisms were preformed and grew with no need for ontological development or differentiation (10). In the mid-1800s, Darwin published his theory describing evolution as a slow and gradual process of descent with modification, resulting in stochastic variability capable of conferring selectable (morphological) advantages to promote survival or increased reproductive fitness of an individual. He included many observations of embryos, as he saw them as “picture[s], more or less obscured, of the common parent-form of each great class of animals” (12, p. 450). Along with this, and more central to this work, he was also among the first to promote a movement away from teleological thinking, suggesting that evolution (and embryology) need not be a process seeking to perfect a morphology, per se (11). This point was later elaborated on by Wilhelm Roux and Julius Wolff (13), in Germany, toward the end of the century.

Wolff was an orthopedic surgeon who conducted research into how the structure of bones changed with alteration of function owing to either growth or pathology. His ideas followed from those of Roux; namely, he maintained that life has two periods, an embryonic one characterized by trophic organ growth and differentiation, and an adult one in which growth and remodeling occurs only when stimulated by healing or (cellular) turnover (13). These stimuli, which Roux collectively referred to as developmental mechanics (11), had an overall effect on tissues irrespective of life period, resulting in their remodeling. Wolff postulated that there was a causal relationship between the physical forces acting on bones and the observed changes to both gross morphology and internal architecture. In particular, he predicted that remodeling of bone trabeculae followed the mathematical trajectories of the forces acting on them. He also went a step further, suggesting that these same mechanical signals could provide one plausible mechanism for Darwin’s theory of natural selection (13). Since the publication of Wolff’s work, the accuracy of some of his mathematical predictions has been called into question (reviewed in 14). Another common criticism is that his theory cannot be considered a law because other bones and tissues do not exhibit the universality sufficient to explain the phenomena he described in cancellous bone. Despite these detractions, his observations were made well in advance of the discovery of radiography and modern tissue biology techniques. Wolff’s work is thus deservedly recognized for its pioneering contributions to the burgeoning fields of orthopedics and tissue mechanobiology.

Figure 1, Mechanical Regulation of Nuclear Structure and Function, Annual Review of Biomedical Engineering 14

Figure 1: The archetypal SUN-KASH domain association in the LINC complex. (a) The LINC complex is typified by an association between SUN-domain proteins (yellow) on the inner nuclear membrane and KASH-domain proteins (green, blue, and orange) on the outer nuclear membrane. KASH-domain proteins form a functional link with the networks of cytoplasmic intermediate filaments, microtubules, and actin microfilaments, which compose the cytoskeleton. SUN-domain proteins bind to the nuclear lamina, a network of intermediate filaments composed of varying isoforms of A- and B-type lamins, LAPs, and LRs. The lamina also serves as a tethering point for the genome, with associations reported among various lamins, LAPs, and LRs. (b) KASH domains associate with SUN domains in the perinuclear space, and this association maintains the architecture of the nuclear envelope. SUN proteins can associate promiscuously with KASH proteins and can also form homo- and heterodimers with other SUN proteins. (c) Two competing models explain the 3D organization of CTs in the interphase nucleus. The CT-IC model posits that a largely DNA-free compartment of contiguous spaces between adjacent CTs exists. The second model, known as the intermingling model, maintains that whereas CTs occupy nonrandom spaces in the interphase nucleus, there is a large amount of intermingling of the chromatin between adjacent CTs. Abbreviations: CT, chromosome territory; IC, interchromosomal compartment; KASH, Klarsicht/ANC-1/Syne-1 homology; LAP, lamin-associated protein; LINC, linker of the nucleoskeleton and cytoskeleton; LR, lamin receptor; NPC, nuclear pore complex; SUN, Sad1 and UNC84.

