A few days ago I was updating my LinkedIn profile with my skills and experience when I decided to do something different: rather than displaying a detailed list of my skills and a description of my current job, I fleshed out what I LIKE and what I don’t LIKE about my job (science, neuroscience).
It was surprisingly easy to write down this bipartite list. I know exactly what I like about science. The positive side can be summarized in just three words: freedom, discovery, control. Freedom to imagine new experiments, freedom to discuss my ideas. The passion for discovery, and for designing experiments that control and measure variables.
When I approached the “negative” list, I surprised myself by coming up in a snap with a list of things I don’t like (dislike? hate?).
Most of the things on the negative side can be summarized by one main concept: the publication system.
My work is judged by its publication in a scientific journal. The impact of my work on the scientific community largely depends on whether I publish in a journal with a high impact factor or in a journal with limited diffusion…
I am not going into the details of the publication system here (a number of people and groups have analyzed the topic in depth and with competence; I suggest following Bjoern Brembs and the #altmetrics group for an in-depth analysis of the publication system and how it can – and will – change for the better!)
What I want to point out here is that only a small fraction of the hard work we put into our job gets credit in a scientific publication. Experiments may not make it into the final version of a scientific paper for many reasons: human error, difficult techniques that reduce the success rate, a theoretical framework that takes a slightly different route during the course of the experiments, unexpected results that change the focus of the project, new emerging techniques that make previous experiments obsolete…
So I decided to design an infographic to summarize the experiments I’ve been running for my main project in the Frankland lab. I went through my lab notes and coded each experiment as a success (straight line) or a failure (bent line).
(click HERE for high-resolution image)
Once I plotted all the experiments, I noticed an interesting pattern emerging from this visualization.
I have one main result, supported by the experiment labeled #1, light blue (sorry… so far, all my projects are coded). This main experiment is the pillar of the whole project. Results from exp#1 have been replicated many times with a very low failure rate (the thickness of each line represents the sample size used for that experiment; note that exp#1 is thicker than the other lines).
Other experiments were derived from exp#1 (i.e. exp#1a) and support the same conclusion as exp#1.
Exp#1 also gives rise to a new direction (inset, bottom-right). This new direction has itself been accompanied by successes and failures, but again the main result (thick straight line) stands tall among the failures! The high success-to-failure ratio of exp#1 and its generative potential (a new direction stemming from #1) make experiment #1 the central tenet of my whole project. Side experiments #3, #2 and #7 are control experiments that further support #1.
Apart from the successful experiments, many experiments had to be discarded. For example, #3 (red) was a technically challenging experiment with a high rate of failure, while #5 (yellow) was conceptually/biologically wrong – but luckily it generated another project (not reported here).
I’ve drawn an arbitrary threshold for article submission (not publication!). Once we reach a critical mass, the experiments make it into the final version of the paper. You can see that we generate a lot of data and most of it does not reach the threshold (only ~36% of my experiments will make it into the paper – should we post the remaining ~64% on figshare?)
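The bookkeeping behind that ~36% figure is just a tally over the coded lab notes. Here is a minimal, hypothetical sketch of it — the experiment list below is invented for illustration and is not my actual lab data:

```python
# Hypothetical tally of coded experiments, mimicking the infographic:
# each entry records an outcome and whether it made the submitted paper.
# (Invented example data, not the real lab-notebook records.)
experiments = [
    {"id": "1",  "outcome": "success", "in_paper": True},
    {"id": "1a", "outcome": "success", "in_paper": True},
    {"id": "2",  "outcome": "success", "in_paper": True},
    {"id": "3",  "outcome": "failure", "in_paper": False},
    {"id": "4",  "outcome": "failure", "in_paper": False},
    {"id": "5",  "outcome": "failure", "in_paper": False},
    {"id": "6",  "outcome": "failure", "in_paper": False},
    {"id": "7",  "outcome": "success", "in_paper": True},
    {"id": "8",  "outcome": "failure", "in_paper": False},
    {"id": "9",  "outcome": "failure", "in_paper": False},
    {"id": "10", "outcome": "failure", "in_paper": False},
]

def fraction_in_paper(experiments):
    """Fraction of all coded experiments that reach the submitted paper."""
    in_paper = sum(1 for e in experiments if e["in_paper"])
    return in_paper / len(experiments)

# With 4 of 11 invented experiments making the cut, ~36% reach the paper
# and the remaining ~64% would be candidates for figshare.
print(f"{fraction_in_paper(experiments):.0%} of experiments make the paper")
```

Swapping in the real notebook records would reproduce the percentages shown in the infographic.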
This is disappointing! I’ve been working hard, but I’ll be acknowledged for only ~36% of the work I’ve done in the lab! …and whether the publication system (and the impact factor) is the best way of giving credit to a researcher and their work… that is another story… (again, follow #altmetrics)!!!