Strong objectivity, research funding, and Conflict of Interest (COI) policies

06 September 2013

Sandra Harding's concept of strong objectivity is one of those ideas that has simply stuck with me for ages. Without any pretense of philosophical accuracy (I'm a hacker, not a philosopher), my layman's interpretation is simply that in order for you to evaluate what I tell you, you have to know where I am coming from.

In other words, if I have a puppy called Chuck who is by my side literally 24/7, and all of a sudden I become a strong animal rights activist, you can probably infer that most of my statements do not come from esoteric sources. Look at the way I relate to Chuck and you will understand where my stance comes from. If you read my statements in light of my fondness for Chuck the bulldog, you will have a better, more strongly objective understanding of my animal activism. This probably has implications for, say, whether Heidegger's thought can be understood without taking his Nazism into account, but that is beyond my scope and breadth of understanding.

Now, the same strong objectivity framework can be applied to research funding and conflict of interest. As I mentioned in a previous post, biomedical research is currently at a crossroads: it is needed more than ever to drive technological advancement, but at the same time the idea that the government can keep expanding funding without limit is untenable, even if the US were not spending huge amounts attacking other people and countries all over the world. In other words, academic research necessarily has to go after industry funding to survive. But then academic integrity is put at risk, and conflict of interest kicks in by necessity.

Conflict of interest policies are in their infancy, and in my opinion they are currently utterly ineffective. Not only do insiders know that data sets are tortured to say what the funding organization wants them to say, but much earlier in the process study designs are manipulated to focus myopically on the benefits of a given product.

Here I am only focusing on the integrity of specific studies, not even going into whether academicians still fulfill the role of being critical about societal matters. For example, when was the last time you heard of universities being truly vocal about war and international inequality? Tons of papers and strategic meetings asking for more funding, but rarely any advocacy leading to true change.

What is the solution, then? While solution is too strong a word, it seems to me that the current incentive structure, or societal gamification framework, is simply twisted. Academicians are placed on a rat wheel where most of them no longer compete for the best idea that will truly change society in an innovative way, but instead compete for who will attract the most funding. Why? Because that is what makes the rat-wheel counter tick up, earning them promotion and academic power, a.k.a. the ability to say that I am brighter than you are.

So, which mechanisms could give us a better rat wheel? Below are some random thoughts, many of them probably completely off, offered just to start a conversation:

  1. Strong objectivity will only happen if I can map where you are coming from. Systems that continuously collect information about a person, as in the quantified self movement, compile that information from multiple sources, and then dynamically process it into an understandable presentation would be ideal. So, for example, my profile might show that Chuck the bulldog and I are indeed hanging out together most of the time (see the rough sketch after this list).
  2. Alternative arrangements between universities and companies will have to be made; the ivory tower has to be integrated with society far beyond what it is now, although it is more likely that other organizations will be built from the ground up to fulfill that societal need in a more sustainable way.
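
To make item 1 a bit more concrete, here is a minimal, purely illustrative Python sketch of what such aggregation could look like. The source names, tags, and the build_profile function are all hypothetical, not an existing system or API.

```python
from collections import Counter

# Hypothetical records pulled from different quantified-self sources
# (all source names, tags, and fields below are made up for illustration).
records = [
    {"source": "photo_log", "tags": ["Chuck", "park"]},
    {"source": "calendar", "tags": ["Chuck", "vet visit"]},
    {"source": "location", "tags": ["dog park"]},
    {"source": "photo_log", "tags": ["Chuck", "home office"]},
]

def build_profile(records, top_n=3):
    """Compile tags from multiple sources and surface the most frequent ones."""
    counts = Counter(tag for r in records for tag in r["tags"])
    return [f"{tag}: appears in {n} records" for tag, n in counts.most_common(top_n)]

if __name__ == "__main__":
    for line in build_profile(records):
        print(line)
    # e.g. "Chuck: appears in 3 records" -- a crude signal of where I'm coming from
```

The point is only the shape of the idea: pull fragments from many places, merge them, and surface the recurring themes, so a reader can see where someone is coming from before weighing their claims.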

by Ricardo Pietrobon