Friday, Oct 23, 2009

The ethics of biobanking

The University of Leuven hosted two lectures on biobanking today, one by Hainaut from the International Agency for Research on Cancer and the other by Juhl from the biobanking company Indivumed.

Biobanking is a tricky ethical area, with little consensus and vague law. Who owns the material taken from a patient? The patient? The hospital? The surgeon? If someone wants to use the material, what is the default position? Should the patient have to provide consent, or is consent assumed unless the patient opts out? Does the patient even have the right to opt out at a later time point? Hainaut made the case that there is a moral duty on every person to allow access to their biological samples for the good of humanity. His example was that an excised breast cancer not only belongs to that woman, but also to all other women who may develop breast cancer in the future.

This is an attractive argument but has flaws. If the information generated goes into the public sphere, such that new treatments can be developed and accessed, it may be reasonable to use the moral argument, in the same way that organ donation as the default option can be argued on moral grounds. However, to me this argument is flawed if the information generated does not go into the public sphere. If the information is not published (a secretive researcher or company keeping back information for potential future uses) or if it is published with restrictions on use (i.e., patented), that information is not open to all of humanity. Isn't it unethical for a biobank to appeal to the moral duty to all of humanity unless legal restrictions are placed on the biobank to ensure that the proceeds of the bank are available to all of humanity? Doesn't informed consent require donors to be told the status of information generated from their samples?

Unfortunately, Hainaut was not able to answer this question when asked, as Juhl (CEO of a biobanking company that only publishes a fraction of the data it generates) jumped in with a rant about for-profit vs not-for-profit. His contention was that every person acts through the personal profit motive, so that whether the biobank made a profit or not didn't matter. His position is that only private companies have the money to put forward to do the research, and they deserve a profit for the research they do. Perhaps, but irrelevant to the ethical question. If the research outcomes are utilitarian then the utilitarian argument should be put to prospective donors - such as DeCode offering all future drugs free of charge to Icelandic people in exchange for access to the medical records and genome of the Icelandic people. Material can be collected for a utilitarian motive using utilitarian appeals, or for a moral motive using moral appeals. What is unethical is to use a moral appeal to collect material destined for a utilitarian purpose.

Hopefully we will see future legislation reflect the ethical considerations of biobanking in a more thoughtful manner than was presented today. Donations made by the public for the public good should be legally bound to this use. It is illegal for a charity to accept a monetary donation, keep 90% of the money for personal use and spend 10% on charitable works. Likewise it should be illegal for a biobank that accepts material presented as a public donation to only release 10% of the data produced by the donation, and keep 90% to itself.

Monday, Oct 19, 2009

Infectious cancer

It has long been known that several causes of cancer are infectious. Typically a virus contains a number of oncogenes to enhance its own proliferation, and in an infection gone wrong (for both virus and host) a viral oncogene is incorporated into the host DNA, creating an uncontrollable tumour cell. One of the best examples of this is human papillomavirus (HPV), a virus which infects most sexually active adults and is responsible for nearly every case of cervical cancer worldwide (which is why all girls should be vaccinated before they become sexually active).

However these cases are not "infectious cancers", they are infectious diseases which are capable of causing cancer. True infectious cancers, where a cancer cell from one individual takes up residency in a second individual and grows into a new cancer, were unknown until recently. With the publication of a new study in PNAS we now have three examples of truly infectious cancers.

1. In the most recent study, researchers in Japan documented the tragic case of a 28 year old Japanese woman who gave birth to a healthy baby but within two months had been diagnosed with acute lymphoblastic leukemia and died. At 11 months of age the child also became ill and was diagnosed with acute lymphoblastic leukemia. Genetic analysis of the tumour cells in the baby demonstrated that the tumour cells were not from the child herself, but rather maternal leukemia cells that had crossed the placenta during pregnancy or childbirth and had taken up residency in their new host. With this information, retrospective analysis indicates that this is probably not a one-off event: at least 17 other cases of mother-to-child transmission of cancer have likely occurred.

2. In addition to mother-to-child transmission of cancer, cancer can spread from one identical twin to another. Identical (mono-zygotic) twins have identical immune systems, preventing rejection of "transplanted" cells, unlike non-identical (di-zygotic) twins. Thus a tumour which develops before birth in one identical twin can be transferred in utero to the other identical twin, where it can grow without being rejected. In one improbable but highly informative case, a set of triplets were born where two babies were identical and the third was non-identical. A tumour had arisen in one of the identical twins in utero and had passed to both other foetuses, but had been rejected by the non-identical foetus and accepted by the identical foetus. Of course, with the advent of medical transplantation, transmission of infectious cancers is now no longer limited to the uterus. Transplantation of an organ containing a cancer into a new host can allow the original cancer to grow and spread, as transplantation patients are immunosuppressed to prevent rejection. There is also a single case of a cancer being transmitted from a surgeon who cut his hand during surgery to a patient who was not immunosuppressed.

3. In a medical mystery well known to Australians, the population of Tasmanian Devils has been crashing as a fatal facial tumour has been spreading across the population. The way the fatal tumours spread steadily across Tasmania while sparing Devils on smaller islands first suggested a new infectious disease that causes cancer, similar to HPV in humans. However a surprising study demonstrated that the cancer was directly spreading from one Devil to the next after having spontaneously developed in a single individual. These scrappy little monsters attack each other on first sight, biting each other's faces. The cancer resides in the salivary glands and gets transmitted by facial bites to the new Devil. Unfortunately for Tasmanian Devils, a genetic bottleneck left all Devils so genetically similar that they are, for immunological purposes, all identical twins. This means that the cancer cells transmitted from one Devil to another through biting are able to grow and kill Devil after Devil. The cancer from a single individual has already killed 50% of all Devils, and it is possible that we will have to wait until the cancer burns out by killing all potential hosts before reintroducing the Devil from the protected island populations. As unlikely as this seems, another similar spread occurs in dogs, where a cancer that arose in a single wolf is being spread through sexual transmission from dog to dog around the world. This example also illustrates the point made about cancers being "immortal" - the original cancer event may have occurred up to 2500 years ago, with the tumour moving from host to host for thousands of years without dying out.

Saturday, Oct 3, 2009

When you eat matters

A very interesting study has just been published in the journal Obesity. The work, by Arble and colleagues in the Turek laboratory, fed mice high-fat food either during the day or at night. The surprising result was that mice fed during the day put on 20% more weight than mice fed at night. In both cases the mice had unlimited access to food yet both groups of mice ate the same amount, so there was no difference in net calories. Instead, what this result suggests is that the body deals with calories differently at different points of the diurnal cycle. During the active phase (night for mice) calories are shifted into burn mode, while during the resting phase (daytime for mice) calories are stored with greater efficiency.

If this result can be translated into humans it would suggest that large meals should be concentrated in the active phase of the day, breakfasts and lunches, and that evening or night meals should be restricted. An interesting proposal is that the American evening-biased eating rhythm compared to the European lunch-biased eating rhythm is partly responsible for the obesity problem in America. Of course it could only ever be a fraction of the problem, as many other correlates with obesity are well recognised. For example, a study by Pickett and colleagues has demonstrated that countries with higher income inequality have higher calorific intake and obesity, and another study by Bassett and colleagues points out that Belgians burn 62 extra Calories per day by walking and cycling, compared to a poor 20 Calories per day by Americans.

The other important aspect of this study is that it contributes to the growing body of evidence dispelling the simplistic "obesity = too many calories and not enough exercise" formula. As published by the Segal laboratory, the majority of the difference in body mass index (BMI) is due to genetics (64%). Being overweight does not mean that an individual is making worse eating or exercising decisions than a healthy-range individual - the majority of the difference in weight just comes down to the fact that different genetics leads to different metabolisms.
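The 64% figure comes from twin studies. As a sketch of how such heritability estimates are commonly derived (Falconer's formula, using made-up illustrative correlations rather than the actual numbers from the Segal study):

```python
# Falconer's formula: heritability estimated from twin correlations.
# r_mz and r_dz below are illustrative values, not from the cited study.
r_mz = 0.74  # BMI correlation between identical (monozygotic) twins
r_dz = 0.42  # BMI correlation between fraternal (dizygotic) twins

# Identical twins share ~100% of their genes, fraternal twins ~50%, so
# doubling the gap in correlation estimates the genetic share of variance.
heritability = 2 * (r_mz - r_dz)          # ~0.64, i.e. 64% of BMI variance
shared_environment = r_mz - heritability  # remainder of MZ twin similarity

print(heritability, shared_environment)
```

The formula is a first approximation (it assumes, for instance, that identical and fraternal twins share environments to the same degree), but it shows why twin cohorts are the standard tool for separating genetics from lifestyle.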

Tuesday, Sep 22, 2009

Nature attacks peer review

In the latest issue of Nature, the journal has published a rather unfair attack on peer-review. Peer review is the process that most journals use to assess the merit of individual papers - submissions are judged by editorial staff, then sent to scientists working in the field for peer review, then the reports by these scientific peers are judged by the editorial staff to determine whether they warrant publication. While it is the standard today, there has been a lot of resistance to peer review in the past, as the editorial staff of journals exercised their power of selection. Notably Nature, founded in 1869, only moved towards peer review 100 years later, under the direction of John Maddox. Other journals, such as PNAS, are only now scrapping peer review bypasses.

There are certainly problems with the journal submission process, but typically these involve too little peer review, rather than too much. A journal such as Nature typically rejects the majority of papers without review, and for those papers reviewed there are only two to three reviewers per paper. Scientists put a lot of effort into reviewing, but as it is an unpaid and unrequited favour, it is not the highest level priority. Even after review, the editorial staff have enormous power to accept or decline the advice of peer review; Nature once famously published a paper falsely purporting to show effects of homeopathy. This editorial decision tends to be a combination of ranking the news splash effect (Nature and Science compete for citations in the big newspapers), the "boys club" effect (no longer all male, but certainly the big names have an easier pathway to acceptance) and editorial "gut feeling".

To justify the editorial over-ride, defects in peer review are commonly cited. In this latest editorial piece, Nature presents the results of an unpublished study presented at a conference, reporting that the results show a bias of peer review towards positive results. This may be so, but does the cited study actually show that? What the study did was submit two papers, one with positive results and one with negative results, to two journals, and analyse the peer review results. The results showed that peer reviews at one journal (Journal of Bone and Joint Surgery) had a minor reduction in ranking the negative results paper, while the second journal (Clinical Orthopedics and Related Research) showed no significant difference. Hardly a damning indictment of peer-review.

What are the methodological flaws that could account for the minor differences observed at one out of two journals?

* Different reviewers. Even picking 100 reviewers for each paper does not cancel out this effect unless reviewers were carefully stratified to ensure random distribution.

* The quality of the two papers may have been different. The author of the study tried to make them as identical as possible, but different results need to be presented differently. As the study is unpublished we only have the author's opinion that the two studies were of equal quality.

* Positive and negative results can have very different "impacts". Most journals explicitly request a review which takes into account both scientific validity and scientific impact. Negative results generally have lower impact and hence would get lower review scores, as explicitly requested by the journals. To remove this effect the papers should have been submitted to a journal such as PLOS One, which requests a review only on scientific quality.

* Positive and negative results require different statistical standards. A positive result uses simple statistics to show that the two groups were different. A negative result requires more complex statistics and can only state that any difference between the two groups is smaller than a certain detectable effect size. A negative result can never exclude that a positive result exists with a smaller effect than would be picked up by the study design.
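The asymmetry in that last point can be made concrete. The sketch below (illustrative simulated data, not from the cited study) contrasts a significance test, where the confidence interval for the difference must exclude zero, with an equivalence test, where the interval must fit inside a pre-declared margin:

```python
import math
import random
import statistics

def two_sample_ci(a, b, z=1.96):
    """Approximate 95% CI for the difference in means (normal approximation)."""
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / len(a) +
                   statistics.variance(b) / len(b))
    return diff - z * se, diff + z * se

random.seed(1)
treated = [random.gauss(10.0, 2.0) for _ in range(200)]
control = [random.gauss(10.0, 2.0) for _ in range(200)]

lo, hi = two_sample_ci(treated, control)

# A "positive" claim only needs the interval to exclude zero.
significant_difference = lo > 0 or hi < 0

# A "negative" (equivalence) claim needs the whole interval to fit inside
# a pre-declared margin: here, that the groups differ by less than 0.5.
margin = 0.5
demonstrated_equivalence = -margin < lo and hi < margin

# Either way, the data can never rule out effects smaller than the margin:
# a negative result is "no difference larger than X", never "no difference".
print(lo, hi, significant_difference, demonstrated_equivalence)
```

Because the negative claim depends on an arbitrary margin and wider data requirements, reviewers scoring the two papers are not applying the same yardstick even when they try to.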

Certainly the most obvious sign of "positive bias" evidenced by this article is the decision by Nature to write an editorial and broadcast a podcast on a minor unpublished study that denigrates peer reviewers and hence elevates editorial staff. Would they have written a similar editorial on an unpublished presentation showing no sign of bias by peer reviewers? The minor impact observed in one out of two journals tested (with all the caveats above) did not warrant Nature to fill its editorial with phrases such as "dirty", "biased", "more negative and critical" and "biased, subjective people". The worst bias of all is the accusation that peer reviewers from the second study only showed no statistical bias because "these reviewers guessed they were part of an experiment". Surely Nature should have been able to spot that subjective reporting, dismissing negative results and elevating positive results are the very definition of positive result bias!

Monday, Sep 21, 2009

The evolution of sex chromosomes

An interesting study in this week's edition of Nature by Organ and colleagues looks at the evolution of sex chromosomes. While humans use the XY system for determining sex (XX for females, XY for males), this is by no means the only system for determining sex. Most reptiles, for example, determine sex by the temperature at which the young develops. Crocodiles, for instance, develop as males if the eggs are between 31.7°C and 34.5°C, and as females if the eggs are above or below this temperature range.

A chromosome-based method for determining sex has arisen not just once, but several times. Mammals use the XY system, but birds use the ZW system (where ZZ is male and ZW is female). These systems create problems, such as the dosage compensation question (how to stop excess / insufficient production of genes on the X or Z chromosomes in the gender with two copies / one copy), however they have a major advantage. This advantage is most evident in mammals - mammals are endothermic, meaning that we keep a constant body temperature. We also bear live young. Obviously, this combination of characteristics would be fatal to a species with temperature-dependent sex determination - all offspring would be of one sex.

In this paper the Pagel laboratory has used an evolutionary analysis to consider the relationship between bearing live offspring and having a chromosome-dependent sex determination system. There are multiple examples of animals with chromosome-dependent sex systems that lay eggs (all birds) and even examples of animals with temperature-dependent sex systems that bear live offspring (some lizards). However in one group of animals the relationship was very strong - amniotes that have fully returned to the sea (sea snakes, sirenians and cetaceans) are all live-bearing and have chromosome-dependent sex systems. An evolutionary analysis predicts that other extinct lineages of sea reptiles, mosasaurs, sauropterygians and ichthyosaurs, also developed chromosome-dependent sex systems before evolving live birth and spreading out over the ocean.

Like mammals with endothermic body temperatures, the constant temperatures of the ocean would have spelt doom for any species that evolved live oceanic birth before evolving a chromosome-based sex system. This is probably the reason why otherwise entirely aquatic species that use temperature-based sex determination systems (such as crocodiles and sea turtles) remain bound to laboriously climb out of the water to lay their eggs.

Tuesday, Sep 15, 2009

Recreating the thymus

I am writing today from the European Congress for Immunology in Berlin. A talk by Thomas Boehm was the highlight of the first day for me.

The Boehm laboratory has been looking at the genetic evolution of thymus development. The thymus is the nursery for T cells, the coordinator of the adaptive immune response. The Boehm laboratory analysed the genetic phylogeny of sample species spanning the 500 million years of thymus evolution and found several key genes that have been conserved through this process. The master coordinator of thymus development, Foxn1, had already been known, but how this master coordinator worked was a mystery, so the Boehm laboratory used the evolutionary analysis to try to recapitulate thymic development in zebrafish and mice.

In zebrafish, Weyn and colleagues were able to use live imaging to analyse the genes that the thymus needs to express in order to recruit progenitor cells. This was done by genetically expressing fluorescent markers, making the primordial thymus glow red and the progenitor cells glow green. They found that just two conserved genes, Ccl25a and Cxcl12a, were synergistically acting to draw in all the precursor cells.

In mice, Bajoghli and colleagues tried to use the knowledge gleaned from evolutionary analysis to completely bypass Foxn1. The rationale is that if we know exactly what Foxn1 does to drive thymic development then we should be able to recapitulate thymic development in the absence of Foxn1 by simply expressing the downstream genes. So the Boehm team took the four key genes that were conserved over 500 million years of thymic development, Ccl25, Cxcl12, KitL and Dll4, and expressed them in isolation or in combination in thymic cells that were genetically deficient in Foxn1. Normally, these deficient thymic cells cannot attract T cell precursors. However, Bajoghli and colleagues found that just as in zebrafish, two genes in mice were able to essentially restore the capacity to recruit precursors, Ccl25 and Cxcl12. A third gene, KitL, allowed these cells to proliferate and increase in number. What these three genes could not do, however, was turn the precursors into T cells. That job required the fourth gene, Dll4, which had no role in recruitment or proliferation but which was essential for the differentiation of recruited precursors into T cells. Through evolutionary genetics the gene network of an entire organ is being unravelled.

Some of this research is currently unpublished; other aspects just came out in the journal Cell.

Monday, Sep 14, 2009

Faith, post-modernism, science and the approximation of truth

Faith, post-modernism and science all have a different approach to truth.

With faith, the underlying premise (whether articulated or not) is that an Absolute Truth exists, and what is more that the believer has an insight into this Truth. Already knowing Truth, evidence contrary to this Truth must be false and can therefore be ignored. End of debate.

Post-modernism is either the opposite of faith, or just a subset of faith. Under post-modernist thought, there is no objective Truth or Reality, merely individual truths or realities that each person constructs for themselves. Every belief or truth then becomes equally valid: it is just as true to describe the sun as a galactic turnip as it is to talk about hydrogen fusion. Ironically enough, post-modernism does have unquestioning faith in one Truth, the Absolute Truth that there are no absolute truths. The irony is generally ignored.

Science has a third, and fundamentally different, way of conceptualising truth. Interestingly, science uses aspects of both the faith and post-modernistic concepts of truth. Science agrees with faith on the claim that there is an objective truth, or rather an objective reality, that exists independent of any observer. However science also agrees with post-modernism on the claim that an individual cannot grasp objective truth, only subjective truth. The unique contribution of science to the concept of truth is the approach of approximation.

Science does not claim to know Truth the way faith does, nor does it give up on the entire venture as a human abstraction the way post-modernism does. Instead science acknowledges that objective truth exists and attempts to reach the closest possible approximation of truth. Science starts with a model of reality. Scientists then attempt to disprove this model in every conceivable way. Inevitably, every model shows a flaw, an experiment which does not act in quite the predicted manner. The scientific model of objective truth / reality is then forced to change to explain the discordant data. Sometimes an entire model is discarded and a new model is picked up, but far more commonly the original model can continue to stand with a few modified improvements. Scientists then attack this modified model of the truth with renewed vigour. Cycle upon cycle, incremental improvements are made to the model, making it harder and harder to find flaws. Science will never be able to reach absolute truth, but it is extraordinarily adept at producing an increasingly accurate approximation of truth. The technology we take for granted today is just one display of how accurate scientific approximations of truth are – the scientific model of the atom does not claim perfection, but our daily use of electron flow (electricity) indicates that the scientific approximation is more functionally useful than any other statement of atomic Truth.

Wednesday, Sep 9, 2009

The Placebo Effect

What is the "placebo effect"? The words are bandied around constantly but tend to be poorly understood. Put simply, the "placebo effect" is the medical response of your body to the idea that you are taking drugs, in the absence of actual drugs. How can this occur? There is nothing mystical about this, the effect of mood on brain chemistry is well documented, and the physiological effects of brain chemistry on our body are surprisingly strong. What is more unusual is a question posed by a recent article in Wired - why does the placebo effect appear to be getting stronger in drug trials?

Is this true? Is the placebo effect actually getting stronger? Actually we have no idea. Drug companies never test the strength of the placebo effect. To actually test the placebo effect you need to have three groups: no treatment, placebo treatment and drug treatment. The "no treatment" group measures the spontaneous remission rate (i.e., the background of how many people would get better over the treated period of time without treatment). The "placebo treatment" group can then measure any additional effects of the patients thinking they are taking drugs, while the "drug treatment" group measures the biomedical effect of the drug. Since drug companies almost never include a "no treatment" group, the increasing effect in the "placebo treatment" group could either be due to increasing spontaneous remission rates or due to an increasing effect of placebos. Changes in spontaneous remission rate are just as feasible as changes in the placebo effect, as the health of the population is generally increasing over time, and a generally healthy person has a higher spontaneous remission rate.
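The arithmetic of the three-arm design is simple. With hypothetical remission rates (the numbers below are made up for illustration, not drawn from any trial):

```python
# Hypothetical remission rates from a three-arm trial (illustrative numbers).
no_treatment = 0.30   # spontaneous remission over the trial period
placebo_arm  = 0.45   # remission when patients believe they are treated
drug_arm     = 0.60   # remission on the actual drug

# Only the three-arm design separates these two components:
placebo_effect = placebo_arm - no_treatment   # effect of expectation alone
drug_effect    = drug_arm - placebo_arm       # biomedical effect beyond placebo

# A standard two-arm trial observes only placebo_arm and drug_arm, so a
# rise in placebo_arm over the years could reflect either a stronger
# placebo effect or a higher spontaneous remission rate - it cannot
# distinguish the two.
print(placebo_effect, drug_effect)
```

Without the no-treatment arm, `placebo_effect` and `no_treatment` are confounded inside a single number, which is exactly the gap in the data the Wired article glosses over.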

If we assume, however, that it is the placebo effect that is increasing over time, do we have a reasonable explanation for this? The answer is probably a lot simpler than drug companies are making it out to be. Changes in the scale of the placebo effect are regionally localised and concentrated in conditions such as depression, epilepsy and pain. The simplest explanation (and hence, according to Occam's razor, the one we turn to first) is that the patient composition of these groups has been changing over time, especially in certain regions. In particular, we have observed large improvements in medical diagnosis, such that more subtle cases are being detected. We have also experienced a "medicalisation" of non-medical conditions, strong moods or emotions being labelled as medical conditions and lumped together with cases caused by biomedical disruptions (ironically driven largely by drug companies seeking to expand their markets). It would be predicted that less severe cases of medical conditions, and emotional/behavioural conditions misdiagnosed as medical conditions, would be more amenable to the effects of placebos on brain chemistry. A simple test for this hypothesis exists - take an existing drug and recruit a patient cohort using identical criteria as the original drug trial. If the "altered patient cohort" hypothesis is correct, a new drug trial using past inclusion criteria should show the same level of placebo effect as the original trial.

Of course the real issue for the drug companies is that the drugs being developed and tested are less and less efficacious. The placebo effect is only an issue when drugs have borderline effects. If a drug company invented a new quinine or penicillin there would be no concerns about skating around the edges of statistical significance.

Thursday, Sep 3, 2009

A Self-correcting System

The ability of science as a method to understand reality is demonstrated by the countless successes science has had in developing technology. Antibiotics, vaccination, flight, agriculture, all of these advances clearly work. Why is this? People came up with many ideas to prevent smallpox in the past, but they consistently failed. The development of a smallpox vaccine which actually worked does not demonstrate that scientists have any unique intelligence, but rather it is testimony to the power of a self-correcting system.

Hypotheses are worthless if they are not tested and then discarded if they fail testing. The process of science is not just coming up with an idea of how to cure smallpox; many people clung to their ideas of what would cure smallpox even as they died. Rather, science is testing this idea by looking at the evidence. Uniquely, science discards ideas that just don't work. The simple process of keeping ideas that work and discarding ideas that don't work has built an amazing edifice of knowledge.

The real beauty of the scientific method is that it does not depend on any single person being right or wrong, being ethical or unethical. There will always be scientists who lie or cheat, falsify data or hide experiments that disprove their pet theory. But the hypotheses that these people put forward will always be discarded, because they will fail tests by other scientists.

Best of all, scientists have a vested interest in knocking down incorrect theories. Often you will hear from anti-science campaigners that scientists are hiding data that the theory of [evolution] / [global warming] / [insert hated theory here] is incorrect. They believe in a vast conspiracy of scientists each trying to hold up a false theory for some unexplained nefarious purpose, assuming that scientists don't want to prove a theory incorrect. They fundamentally do not understand the system of science.  Personal glory does not come to the scientists who prove yet again that the theory of gravity works, personal glory comes to the scientist who finds an exception, who proves a theory incomplete, who can unravel the fatal flaw in a centuries old dogma! Einstein, Newton, Copernicus, Darwin, these are all scientists who destroyed the prevailing theories of their age. Every scientist today would love to join their glorious ranks.

A scientist who could prove today that the theory of relativity, evolution or global warming was wrong would publish in the highest journals, win the Nobel Prize, earn household recognition and become rich. There are only two ways a theory such as evolution could still stand today:

1) Every scientist working in the field is deliberately concealing data that disproves evolution, despite knowing that breaking the nefarious conspiracy would earn them recognition as a leader of science, a place in the history books and a lot of personal glory;
or
2) There are no experiments that reveal a fatal flaw.

That is the beauty of science: individuals have huge power to make advances but very little ability to make delays, since theories are judged by experimental results. To reject science you have to reject human nature and believe in an alternative reality where everyone acts uniformly against their personal interests. Trust in science is not trust in individual scientists, it is trust in a system that for thousands of years has produced results, a system that is self-correcting, a system that acts as an 'invisible hand' to select only the models of reality that actually work, regardless of whether the individuals involved were motivated by a selfless search for truth or a greedy struggle for personal glory. The scientific method is an emergent phenomenon which self-corrects the activities of individual scientists to develop only the most robust theories that have so far resisted every attempt to knock them down.
