Channel: SciPle.org

Scientific Publication & Hard Work


A few days ago I was updating my LinkedIn profile with my skills and experience when I decided to do something different: rather than displaying a detailed list of my skills and a description of my current job, I fleshed out what I LIKE and what I DON’T LIKE about my job (science, neuroscience).

It was surprisingly easy to write down this bipartite list. I know exactly what I like about science. To summarize everything on the positive side I could use just three words: freedom, discovery, control. Freedom to imagine new experiments, freedom to discuss my ideas. The passion for discovery, and for designing experiments to control and measure variables.

When I approached the “negative” list I surprised myself by coming up, in a snap, with a list of things I don’t like (dislike? hate?).

The majority of the things on the negative side can be summarized by one main concept: publication system.

My work is judged by its publication in a scientific journal. The impact of my work on the scientific community largely depends on whether I publish in a journal with a high impact factor or in a journal with limited diffusion…

I am not going into the details of the publication system (a number of people and groups have analyzed the topic in depth and with competence; I suggest following Bjoern Brembs and the #altmetrics group for an in-depth analysis of the publication system and how it can – and will – change for the better!)

What I want to point out here is that only a small fraction of the hard work we put into our job gets credit in a scientific publication. Experiments may not make it into the final version of a scientific paper for many reasons (human error, difficult techniques that reduce the success rate, a theoretical framework that takes a slightly different route during the course of the experiments, unexpected results that change the focus of the project, new emerging techniques that make previous experiments obsolete…).

So I decided to design an infographic to summarize the experiments I’ve been running for my main project in the Frankland lab. I went through my lab notes and coded each experiment as a success (straight line) or failure (bent line).

(click HERE for high-resolution image)

Once I plotted all the experiments I noticed an interesting pattern emerging from this visualization.

I have one main result supported by the experiment labeled #1, light-blue (sorry… so far, all my projects are coded). This main experiment is the pillar of the whole project. Results from exp #1 have been replicated many times with a very low failure rate (the thickness of the line represents the sample size used for each experiment; note that exp #1 is thicker than the other lines).

Other experiments were derived from exp #1 (e.g. exp #1a) and support the same conclusion as exp #1.

Exp #1 also gives rise to a new direction (inset, bottom-right). This new direction has itself been accompanied by successes and failures, but again the main result (thick straight line) stands tall among the failures! The high success-to-failure ratio of exp #1 and its generative potential (a new direction stemming from #1) make experiment #1 the cornerstone of my whole project. Side experiments #3, #2 and #7 represent control experiments further supporting #1.

Apart from the successful experiments, many experiments had to be discarded. For example, #3 (red) was a technically challenging experiment with a high rate of failure, while #5 (yellow) was conceptually/biologically wrong, but luckily it generated another project (not reported here).

I’ve drawn an arbitrary threshold for article submission (not publication!). Once we reach a critical mass, the experiments make it into the final version of the paper. You can see that we generate a lot of data and most of it does not reach the threshold (only ~36% of my experiments will make it into the paper – should we post the remaining ~64% on figshare?)

This is disappointing! I’ve been working hard but I’ll be acknowledged for only ~36% of the work I’ve done in the lab! …and whether the publication system (and the impact factor) is the best way of giving credit to a researcher and their work… that is another story… (again, follow #altmetrics)!!!


Scientific RSS feeds: how to rank them?


How to build a web app to retrieve and rank scientific RSS feeds: part 2

I have posted an updated version of the code (see the code page) to retrieve and rank feeds from scientific journals. The frequencies of keywords extracted from a set of interesting (for me) papers are used as a weighting scheme to rank new feeds.

The advantage of working with scientific RSS feeds lies in the fact that titles and abstracts are highly informative and structured. Ideally we could read just the last sentence of the abstract to get a sense of the background and conclusion of the paper. Embedded in every abstract is the condensed structure of the paper (introduction, methods, conclusion). Moreover, there is a limited set of sentences and words that scientists use in the abstract to ‘announce’ their conclusion: taken together, in conclusion, these data show, these data suggest…

All these features make text analysis (condensation) surprisingly easy to do! We can write a few lines of code to extract a set of features and build a document-term matrix (a rectangular matrix containing the documents (rows) and the most frequent words (columns), with the relative frequency of each word in each document). Then we generate an n-dimensional space where every document has its own location (Latent Semantic Analysis) and we group the documents together by similarity. This is one of the simplest ways of analyzing text information (this Google search is enough to get you started with R and its excellent text-mining libraries – tm, lsa).
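To make this concrete, here is a minimal sketch in Python (I work in R with the tm and lsa packages; the function names below are hypothetical and this pure-Python version only illustrates the document-term-matrix and similarity steps, without the SVD step of full LSA):

```python
from collections import Counter
import math

def build_dtm(docs, top_n=50):
    """Build a document-term matrix: rows are documents, columns are the
    most frequent words, cells are relative word frequencies."""
    tokens = [d.lower().split() for d in docs]
    counts = Counter(w for doc in tokens for w in doc)
    vocab = [w for w, _ in counts.most_common(top_n)]
    dtm = [[doc.count(w) / len(doc) for w in vocab] for doc in tokens]
    return vocab, dtm

def cosine(a, b):
    """Cosine similarity between two document vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Two abstracts about the same topic end up close to each other in this space, so grouping by similarity (or feeding the matrix to an SVD for proper LSA) follows naturally.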

This whole process works pretty well on scientific text! I was amazed by the ability of this clustering technique to group together similar papers. Sometimes it also promotes the discovery of new connections between distant papers.

Ideally we could take a set of interesting papers and build a space out of that set; every new feed is then mapped onto this space. Our intelligent feed reader should visualize only the papers that fit in this semantic space.

One possible approach to discovering interesting RSS feeds could incorporate a “spam-filtering” technique: interesting papers are good (ham), uninteresting papers are bad (spam). …is it really so? I don’t see uninteresting papers as spam, especially if we focus our search on highly specialized journals (e.g. neurobiology of learning and memory), where 99% of the papers may be of interest to me. A different scenario applies to journals like Nature, Science or Cell: most of the papers are not related to my field.

Right now the code (see it here) ranks papers on the basis of sensitive keywords extracted from a limited set of papers I classified as interesting. Every keyword has its own weight (the frequency obtained from the set of interesting papers). The rank is given by the sum of the weights (frequencies) of the keywords detected in each single feed. This is pretty naive, but it is a first attempt. It works very well when the paper is really interesting (several keywords are detected in the abstract). The major disadvantage is that we may miss important links: even if just one keyword is detected in a feed, a critical link between this keyword and a new [undetected] keyword may be missed.
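The weighting scheme itself is tiny; here is a sketch of the idea (hypothetical names, not the actual code linked above):

```python
def rank_feed(abstract, keyword_weights):
    """Score a feed item by summing the weights (frequencies derived from a
    set of 'interesting' papers) of the keywords found in its abstract."""
    words = abstract.lower().split()
    return sum(keyword_weights.get(w, 0.0) for w in words)
```

Sorting all the incoming feeds by this score then surfaces the abstracts that share the most vocabulary with the papers I already liked.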

The next step is to extract the feeds with the highest rank from each journal: setting a global threshold has the disadvantage of excluding journals with minimal abstracts (like Science), which tend to have very low ranks.
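A per-journal cutoff could look like this (a sketch; the "journal" and "score" keys are assumptions about how the scored feed items are stored):

```python
def top_per_journal(scored_feeds, k=3):
    """Keep the k highest-ranked items from each journal, so journals with
    terse abstracts (and thus systematically low scores) are not excluded
    by a single global threshold."""
    by_journal = {}
    for feed in scored_feeds:
        by_journal.setdefault(feed["journal"], []).append(feed)
    return {journal: sorted(items, key=lambda f: f["score"], reverse=True)[:k]
            for journal, items in by_journal.items()}
```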

I have several ideas I want to implement in the near future. It is a stimulating playground!

Soon I will also post a clean version of the code I use for extracting keywords and their frequency.

posted by LR

RSS feeds, Drowning in Literature & SfN12


Good news on the development front of the scientific feed retriever: @neuromusic transferred the Python code to his GitHub account! There is now a repository named sciple-rss.py that you can play with. Now that people with a “serious” background in computational/software development can put their hands on it, I feel renewed hope for this project! I am a weekend self-taught coder and I feel I can’t push this project any further (…or at least not on a human time-scale…). @neuromusic has played with the Mendeley API and I am sure that he is up to something really interesting!

I can’t stress enough the point that we (scientists) need an intelligent tool for retrieving scientific feeds. All we need is a tool that lets useful information emerge from an exponentially growing mass of publications.

Exponential growth of dendritic spines: number of papers on pubmed

PubMed search has implemented a widget showing the number of papers published per year. I haven’t tested it yet, but I am pretty sure that whatever keyword we look for, the trend is going to be exponential.

At least this is what I found while I was working on my memory-consolidation text-mining project. Every keyword follows an exponential pattern. De-trending the curve brings up something interesting: even though the absolute number of papers is growing, you can detect some fluctuations in the enthusiasm for specific sub-topics (e.g. reconsolidation).

but the trend never stops!
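De-trending such a curve can be as simple as fitting a log-linear model to the yearly counts and inspecting the residuals; a minimal sketch (a hypothetical helper, not the script I actually used):

```python
import math

def detrend_exponential(years, counts):
    """Least-squares fit of log(count) = a + b*year, returning the residuals.
    An exponential trend becomes a straight line in log space, so the
    residuals expose fluctuations around the overall growth."""
    logs = [math.log(c) for c in counts]  # counts must be positive
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
         / sum((x - mean_x) ** 2 for x in years))
    a = mean_y - b * mean_x
    return [y - (a + b * x) for x, y in zip(years, logs)]
```

A keyword that has truly plateaued (like CEREBELLUM, below) shows systematically negative residuals in recent years instead of noise around zero.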

 

…………………………………………………………………………………………………………………………………………………………………………………………………………………

So here is a task for you: Can you spot a topic whose trend is SUBSTANTIALLY different from an exponential trend (see  the picture above)?

…………………………………………………………………………………………………………………………………………………………………………………………………………………

Number of papers published on the topic cerebellum. Enough of cerebellum?

So far, I have found that the keyword CEREBELLUM seems to have reached a plateau, but most topics show this scary exponential trend. We are literally drowning in this vast ocean of scientific publications!

There is another place where we are ‘physically’ submerged by scientific information: the annual meeting of the Society for Neuroscience.

Thousands and thousands of presentations spread across 4-5 days. There is no way we can attend/read all of them, and even if we use the Itinerary Planner we constantly have this paranoid feeling that we are missing something important.

How many times has it happened to you: you bump into a friend/colleague who tells you about that poster you missed at the end of the poster hall?

Twitter may be of help here (see our previous project as Neurobloggers), but this is still not enough to catch up with the science going on during this 40K-attendee conference! We miss so much during the meeting. That is annoying! And that’s why we are taking care of it!!

Stay tuned for more information about our new exciting project for SfN12 !!!!!


The app team


The team working on the app for scientific conferences. From left to right: Adam Santoro, Blake Richards, Chen Yan, Leonardo Restivo (@scipleneuro) and Jason Snyder (@jsnsndr)

Hold your horses, neuroscientists!! …app has nothing to do with the Amyloid Precursor Protein.

We are talking about something different: app in the sense that “there is an app for that“! App(lications) are everywhere, they help us to find, to play, to discover, to book and -sometimes- to waste an unbelievable amount of time…

There is an app for everything. Is that true?

Not quite! I want an app that helps me navigate scientific conferences. A tool that helps me discover interesting scientific content among thousands of presentations, an app that can potentially help my networking during the conference.

We couldn’t find such an app so we gathered this dream team (see picture) to create one! And we will be launching it very soon at the Society for Neuroscience Meeting (SfN12).

stay tuned!

HackSFN


As you may know, the Society for Neuroscience meeting – SfN (2012) – is about to start.

The app team has been very busy working on the app (HackSFN), launching the app, reading the feedback from twitter (@hacksfn, #hacksfn) and implementing new features.

The app is simple, neat and brilliant: you can upvote #sfn12 presentations, view the abstracts and attach a comment/note to every presentation.

The app itself can be found at hacksfn.org. You will be redirected to a desktop or a mobile site depending on the device you are using (desktop, phone or tablet) .

We have put many hours of hard work into this app, and we are doing our best to deliver the best app possible to let you (and ourselves) enjoy this huge, huge neuro-conference. The whole app is based on the concept of collaborative filtering (multiple users vote on and discover useful content; each user can sort their own queries on the basis of the number of votes, views or comments).
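The sortable lists boil down to something very simple (a sketch; the field names are assumptions, not the app's actual data model):

```python
def sort_presentations(presentations, by="votes"):
    """Sort conference presentations by a user signal (votes, views or
    comments), highest first - the heart of the collaborative filtering."""
    return sorted(presentations, key=lambda p: p.get(by, 0), reverse=True)
```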

But a lot still has to be done (polishing interfaces, catching bugs and adding shiny new features).

I personally had a lot of fun working on the UI/UX of the mobile version.

 

How to display all the meaningful information on such a small screen?

The mobile screen poses a number of challenges. The screen size is the first one (how can we fit a minimum amount of useful information in there? And what is the minimal amount of information?).

While asking yourself these questions you realize how important it is to establish a visual pathway when designing for such small screens:

What is the main function of the app? What are the relative weights of the elements on the screen?

Are we hiding the function behind too many clicks?

What is the minimum number of clicks acceptable for that particular function?

Not easy at all. I’ve been working on the interface, changing it every day (actually, every night… I have a day job in the lab…) and I am not quite satisfied.

Every single new element we add (every new feature) is likely to mess up the visual pathway, change the priority and the way elements are arranged on the screen. We are rushing to complete the mobile app in time for Sfn12 .

 

A new exciting feature we recently ported from the desktop version to the mobile version is the “comment” feature.

Users can leave a comment/note on any presentation. This will become a networking area: every presentation will be a microcosm where users can interact, comment on the abstract, contact the author, leave a note or simply request a pdf from the poster presenter…

Comments came to the mobile version relatively late. The interface was pretty much set up… how could we provide easy access to this cool new feature?

One idea is to place a circular red tag at the top right of the abstract title (see picture above), showing the number of comments/notes associated with the abstract. One click/touch on the red tag provides direct access to the comments.

Yes… it really looks like an Apple email-message notification… but this is the whole point of usability and UX (in my opinion): you cannot build a new way of interacting with an app from scratch. Many gestures, symbols and click pathways have to be repeated over and over across many apps in order to feel familiar, in order to let the user know what their function is.

In this mobile version you can access the comments via the red circular tag, or from the abstract view, where a button opens the comments section.

People are already using the comments and some of the conversations are rather … unusual? (see picture below)

I didn’t see this coming…

The desktop version is up and running and doing great: a few glitches to fix, exciting new (beta) features to add, but we have already had an overwhelming flow of positive feedback.

Keep up with the tweets #hacksfn !!!!

 

PS

In the future I will post more thoughts and screenshots about the development of this mobile app… there are so many challenges in this development phase… it is really exciting to develop a tool and its UX/interface (how people can easily get the most out of this tool).

and more UX is definitely what we need for scientific software (read this paper)!

 

Hubbian.com: save and print MY-LIST


A brand new name for our web app: HUBBIAN.com (formerly HackS*N)

HUBBIAN is a mix between HUB and HEBBIAN

Hubbian does exactly the same job as HackS*N did, but does it better! We can now save a list of preferred abstracts and transfer it to the server using a username and password. And best of all… you can export your list to a printer/environment-friendly format!

Have a look at the video with brief instructions for creating and printing a temporary list of your preferred abstracts!

The CONFERENCE-OME: how to deal with 17,000 abstracts


I am a scientist (well I was trained as a scientist, thank God I am much more than a scientist!).

But today I AM a scientist attending a big conference (The society for neuroscience annual meeting – #sfn12).

My biggest concern is to efficiently navigate this conference: how can I discover the sessions, posters and exhibitors that I DO want to find?

Most of the tools out there are made for planning my sessions, but I need something that makes me feel connected to the conference. The problem with those tools is that they fall for the Cartesian fallacy: mind and brain, planning and attending the conference.

The sfn12 app was a big improvement this year: maps, alerts and so on… but there is still one thing missing (and that is what we @hubbian are currently developing): the conference-ome.

We -scientists- are one unitary organism when attending conferences (the conference-ome)!
The conference system works only because WE attend the conference, WE network, WE interact, WE help each other to find out what is hot and trendy now. WE create the buzz.

This year we had a great response from you (~2900 people have visited and used Hubbian as of today, the third day of the conference) and about 50% of them came back to the app (maybe thanks to our dynamic, sortable lists?)

And I am sure that with further development Hubbian will really help people experience conferences, because what we have in mind is the CONFERENCE-OME!!

Who read “Marr’s influential work”?


Selected keywords (color-coded) in Marr’s 1971 paper form clusters in different sections of the manuscript

“Let me do something that I’ve never done before: I want to dedicate this blog post to @jsnsndr. We shared the same lab for a very short but intense period. It was fun! I will miss him and his questions during lab meetings. I wish him all the best with his brand-new lab at UBC… this sentence is crying out for a link to his lab web-page… I can’t wait to see it!!!!”

It is Christmas time, it is time for something light and colorful!

from Wikipedia page:

David Courtnay Marr (January 19, 1945 – November 17, 1980) was a British neuroscientist and psychologist. Marr integrated results from psychology, artificial intelligence, and neurophysiology into new models of visual processing. His work was very influential in Computational Neuroscience and led to a resurgence of interest in the discipline.

I have created an interactive page showing the «influential» work of David Marr (“Simple memory: a theory for archicortex.” Phil. Trans. Royal Soc. London, 262:23-81. – 1971).

On this page you can click on selected key items to light them up (exactly as you would do with Christmas lights) and highlight their occurrences in the text (click the selected keyword again to revert it to its original status).


Why do this?
Well, during a recent meeting my PI Paul Frankland brought up something really interesting… everyone refers to the “influential work of Marr”, but how many have really read his work??

Making reference to Marr lets you shine with reflected light… you feel cool (as cool as a neuroscientist can be, obviously). Indeed, citing Marr brightens up the morale… you are saying something incontestable, something that even the evilest/angriest reviewer would agree upon.

…I am one of those who never read Marr (I have never invoked his name during my science talks… but some of his key concepts, of course, pervade my work and talks).


Key concepts emerge from the text after the text analysis. Line thickness represents the frequency of the indicated bigram.

So I decided to let my Python script read Marr’s ‘influential’ 1971 work on simple memory for me.
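The bigram counting behind the graph can be sketched in a few lines (hypothetical helper name; my actual script differs):

```python
from collections import Counter
import re

def bigram_frequencies(text, top_n=10):
    """Count adjacent word pairs (bigrams) in a text, most frequent first."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    return Counter(zip(words, words[1:])).most_common(top_n)
```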

The most important concepts and their relations have been plotted in this disjointed graph on the right.

Then I had a look at something that really intrigues me: the spatial distribution of these concepts in the text.

Key things emerge:

The word THEORY can be found all over the text, while the anatomical concepts clearly occur in the last third of the text.
Not surprisingly, the anatomical words (CA3, pyramidal, collateral) occur together in this part of the text.
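The spatial-distribution plot reduces to recording where in the text each keyword occurs; a sketch (again a hypothetical helper):

```python
def keyword_positions(text, keyword):
    """Relative positions (0 = start, ~1 = end) of a keyword in the text,
    useful for plotting where a concept clusters along the manuscript."""
    words = text.lower().split()
    return [i / len(words) for i, w in enumerate(words) if w == keyword.lower()]
```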


The keyword CA3 (hippocampal field) only emerges in the last part of the text, where the frequency of other anatomical terms increases relative to the first and second parts.

 

 

And there is more to discover in the patterns generated by just these few words… I’ll let you discover these relations and this text structure!

you can play with this tool here.


Disclaimer:
[1] This tool is not highly interactive; it is limited and can be improved a lot… but then again, it is Christmas time and I only spent a couple of hours on it… enough…
[2] The Christmas-light words look like small grave stones. This was not done on purpose, but it makes a lot of sense: words on a paper printed in the ’70s get buried by time and the relentless process of scientific publication…
Is there any need to resuscitate these words? If so, how can we improve the discoverability of these words (and related concepts)?
But this is another story!


Using graphics in your CV


Click to enlarge – using graphics to list my academic publications in the CV

It is about time for me to look for a job (…a serious one). I am applying for a PI position, group leader or “some kind of scientist who does science in a science lab”.

I am working on my CV and I just noticed how boring it is to read (and proofread) a list of publications, a list of skills or (even worse) a complete list of the conferences I have attended in the past (is “conferences attended” even a useful section?!?!)… anyway, I came up with a different way to summarize my work (and experience).

I will keep the usual (plain, boring) lists of papers, conferences and skills, but I will also provide a visual summary. Let’s call it something like “my skills at a glance” or “graphical publication record”. It may make the interviewer’s life easier.

For example, the diagram of my academic publications is color-coded, so you can easily find out how many papers I have published, and in what journals, during the stages of my career (undergrad, PhD student and post-doc/research associate).

The nice thing about this diagram (made with the free software XMind) is that you can add hyperlinks to the nodes. Ideally I could have inserted an excerpt of each paper’s title and a hyperlink to the original reference. Nice idea… too bad you need the PRO version of XMind to export the pdf. But the same result (or an even better one) could be achieved with D3.js (for an online CV, though).

I also plotted all the conferences I have attended during my career… I couldn’t believe my eyes! I have attended a lot (a lot) of conferences.

…I don’t know if these diagrams will increase my chances of getting the job, but I DO NEED to put some graphics in my life (and job)!

 


If you plan a career in science you may get to travel a lot

 

 

Posted by Leonardo Restivo

 

Alice Proverbio

Alice Proverbio
Associate Professor of Psychobiology and Physiological Psychology
Dept. of Psychology, University of Milan-Bicocca

Lab website

Twitter

Sci.Ple: What is your background?
I graduated in Experimental Psychology at the University of Rome “La Sapienza” back in 1987, where I worked with Spinelli and Mecacci. At that time my supervisors were the first to record Visual Evoked Potentials (VEPs) in healthy controls to study spatial frequency sensitivity. After my degree I won a PhD fellowship from the University of Padua, where I started a collaboration with the neurologist Bisiacchi and the neurophysiologist Carlo Marzi, from Berlucchi’s lab (Berlucchi was a pupil of Moruzzi), and my research interests moved from Visual Evoked Potentials to Event-Related brain Potentials (ERPs). In 1991 we published the first Italian study on the electro-cortical indexes of visual spatial attention. After my doctoral dissertation (on the role of the left and right hemispheres in selective and sustained attention) I moved to Davis, California, for my two years of post-doctoral training with Ron Mangun at the Center for Neuroscience directed by Michael Gazzaniga. There I learned many of the things I know about ERPs and met several outstanding neuroscientists, including Bob Knight, Steven Hillyard, David Woods and Bob Fendrich. I currently work in the field of cognitive electrophysiology and I lead the cognitive electrophysiology lab at the University of Milano-Bicocca, where I am associate professor of Psychobiology and Physiological Psychology.
 

Sci.Ple: Among your published papers, which one is your favorite???
This is a difficult question, because each paper has a different history, and I loved them for different reasons. However, the one I remember with the greatest emotion is one of my oldest and least famous papers, but my first important one, coauthored by my mentor and by Michael Gazzaniga, on split-brain patient J.W.
Proverbio AM, Zani A, Gazzaniga MS & Mangun GR. ERP and RT signs of a rightward bias for spatial orienting in a split-brain patient. NeuroReport, 1994; 5 (18): 2457-2461.
 

Sci.Ple: Why is it your favorite?
Because the data were very exciting and it was the first time I acted as P.I. of that particular investigation. We discovered not only the presence of a rightward bias for spatial orienting (a sort of neglect for LVF stimuli) but also evidence that the left hemisphere has a stronger attentional vector, as predicted by Marcel Kinsbourne’s theory. Most notably, we found indexes of a subcortical transfer of visual information. Plus, I wrote the article by myself. I was extremely proud when it was finally accepted.

Sci.Ple: What was the most challenging part of this paper???
The notion that there might be a subcortical transfer of visual information in the split brain, possibly mediated by the superior colliculus (which has ipsilateral connections), was not particularly favoured at that time, particularly by Michael Gazzaniga. However, the data seemed really compelling, and we published the less problematic part of the study, the one dealing with the attentional bias, leaving the problem of the subcortical transfer somewhat in the shadow.
 

Sci.Ple: What drives you in your day-to-day job?
I remember getting really frustrated and sad when I was a young researcher and things were not going well, or, for example, a paper got rejected. Now things are very different; sometimes this really surprises me, but I have become really good at overcoming difficulties. The energy comes from inside; I am very benevolent with myself, as I know how hard I try. I rarely feel guilty.
 

Sci.Ple: What is the most exciting part of your job?
When something really unexpected comes up, as a very nice surprise. This happened, for example, when we discovered that the fusiform face area (FFA) is right-sided only in men, while face coding is bilateral in the female brain.
 

Sci.Ple: The least exciting??
The least exciting are exams, thesis corrections, Faculty meetings and councils, administrative matters, and grant paperwork.
 

Sci.Ple: Name a scientist whose research inspires you.
Salvatore Aglioti: he is an extremely rigorous and very brilliant Italian cognitive neuroscientist.
 

Sci.Ple: What are the next frontiers in neuroscience?
Honestly, I do not know; I think we are still in a transition period. I believe that disciplines like cellular neuroscience, molecular physiology, genetics and epigenetics are contributing more than ever to neuroscience discoveries. The other thing is that, progressively, animal experimentation, especially on primates or other mammals, will have to be replaced or abandoned.
 

Sci.Ple: Why science?
I could not possibly have been anything other than a scientist in life. I have dreamed about being a scientist since I was a child. My father is a scientist, and my two older brothers are academics.
 

Sci.Ple: If not science?
If not science, I would have loved to be an orchestra conductor. I have loved music very much since I studied piano, organ and composition at the Conservatory Giovanni Pierluigi da Palestrina. I have always had a special admiration [and still have] for orchestra conductors, since they have to master all the different instrumental parts at one time, and have the responsibility of conducting and creating a new and exciting performance. Definitely that.
 

Sci.Ple: Why?
Why are humans so cruel? I cannot stand human suffering, especially children’s pain. Whenever I hear a child’s cry my brain immediately enters an alarm state.

Temporary disruption


I am currently updating the website (new URL, new directories), therefore the website is not at its best… :(

It should be back to normal (or even better!!!) in a couple of days!

thanks!

Leo

PS
In the meanwhile, please enjoy some mossy fibers (red) innervating the CA3 field (DAPI: blue).


Friday Science-Links Roundup


A weekly summary of links and news I discovered during my web-pilgrimages. Leonardo Restivo

 

Publish your data with Nature

Nature has started a website for sharing data (and getting credit for it!!). It will open for submissions starting autumn 2013 and will launch in spring 2014.

 

Cell analysis software

CellProfiler is the BEST cell-imaging software I have tried so far! It is modular, powerful and (of course) free/open-source. I am using it every day. It has a steep – but short – learning curve. Excellent!

 

Image analysis basics

A good introductory paper on cell image analysis and CellProfiler.

 

Automated microscope-based High Content Screening

The guidelines from the NIH.

 

Scientific Collaboration made Easy!

Scigit is a service for sharing documents using a “git engine”. Now we can work on the same paper without sending back and forth by email those horrible Microsoft Word documents titled “Doe_J_et_al_version_1.1.0.0.3_John_edit”…

 

Making a transparent brain using ScaleA2

The protocol is rather detailed and it works! (This one is for you if you can tolerate the pretty huge tissue expansion resulting from the clearing process.)

 

Stats / Data Visualization

The new version of R (the statistical computing language) is out (3.0.0)!!

healthvis is a “newborn” library for easily integrating R and D3.js.

 

Slides on the Web

Reveal.js is a great tool for making fancy presentations on the web.

 

Writing in a distraction free environment

If you like to write and to focus on what you are writing, then FocusWriter is the software for you. Easy, clean, distraction-free writing.

 

—————————- bonus track —————————-

If you are still reading this page… then here is a bonus track for you:

[1] I really don’t have the time now to write more detailed posts… so I decided to post some of the news and links that I came across during my web-pilgrimages.

[2] I made the drawing associated with this post a while ago. It should depict my lonely pilgrimage along the infinite web routes…

I am planning to post new links every week… so you’ll see a lot of these lonely snow-walkers in the near future (hopefully once a week!)


CLARITY: from the brain to the DATA-brain

Text mining on the introduction and conclusion sections of the CLARITY paper

Karl Deisseroth published yesterday one of the most revolutionary technical papers in the history of modern neuroscience: «Structural and molecular interrogation of intact biological systems», Nature 2013.

Chung and colleagues describe a revolutionary technique for clarifying brain tissue (here is the protocol). Previous attempts have been made to render tissue optically clear, but all of them achieved only partial clearing, and at the expense of tissue integrity.

The technique described by Chung and colleagues, named CLARITY, presents several key advantages that will catalyze a paradigm shift in neuroscience (read also about the Brain Activity Map initiative).

Here are some key features and future challenges posed by CLARITY:

 

Key features of CLARITY:

[1] Intact brain: the brain and its structures can be accessed without sectioning. No tissue deformation, and more accurate 3D reconstruction of processes and cellular compartments.

[2] Reduced protein loss (only 8% of the proteins are lost during the process)

[3] Macro-molecule permeable. The whole brain can be stained with regular immuno-protocols. Chung et al. were able to stain (and visualize!) synaptic puncta (PSD-95 at dendritic spines) in the whole brain.

[4] Multiple-round molecular phenotyping. This is really exciting. The whole brain can be stripped (similarly to what you would do to a membrane after Western blotting): use and reuse the brain to stain for different markers. And this leads to the next point.

[5] Reduce, reuse. The very same brain can be used and reused, possibly reducing the number of subjects needed for the experiment.

[6] Fixed brain tissue. CLARITY works on fixed human brain tissue. Accurate reconstruction of neurites and projections will finally be available for human tissue. CLARITY was used on stored tissue that had been sitting in formalin for years!

 

Key challenges posed by CLARITY

This technique is extremely exciting and I truly believe it bears the potential for a paradigm shift in neuroscience. Of course, this great potential comes with a number of technical and theoretical challenges:

1. Treating the whole brain as a source of information. From the brain to the Data-Brain.

As Chung and colleagues state in the paper (page 1, end of introduction), the whole point of CLARITY was to «physically support the tissue and SECURE BIOLOGICAL INFORMATION». The whole brain is now a bank storing a wealth of information. We can now access an unprecedented level of multi-layered, multi-scale data (from the subcellular to the systems level) about neurons and their activity.

But are we ready? We need to develop computational tools to deal with the whole brain at this expanded scale (3D registration of brains at the cellular scale, 3D image segmentation, automatic neurite tracing… see an example here). We will also need a new generation of microscopes to acquire and store all this information.
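Some of these building blocks are already within reach at small scale. Here is a minimal sketch of 3D segmentation and cell counting with `scipy.ndimage` (Python; the synthetic volume and the threshold are my own illustrative assumptions, nothing from the paper):

```python
import numpy as np
from scipy import ndimage

# Tiny synthetic 3D volume with three bright blobs standing in for cell bodies
# (illustrative only; real cleared-brain volumes are orders of magnitude larger).
volume = np.zeros((30, 30, 30))
for z, y, x in [(5, 5, 5), (15, 20, 10), (25, 8, 22)]:
    volume[z - 2:z + 2, y - 2:y + 2, x - 2:x + 2] = 1.0

# Threshold, then label connected components: each label is one candidate cell.
binary = volume > 0.5
labels, n_cells = ndimage.label(binary)
print(n_cells)  # 3

# Centroids give the 3D coordinates needed downstream for registration/tracing.
centroids = ndimage.center_of_mass(binary, labels, range(1, n_cells + 1))
```

Real pipelines (CellProfiler and friends) add denoising and watershed splitting of touching cells on top of exactly this kind of label-and-measure core.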

And once we have the whole brain mapped in detail? Do we have the theoretical frameworks to deal with this wealth of information? Do we really need to map the WHOLE brain at the cellular scale to understand behavior and brain pathologies?

Are we approaching the BIG DATA era in science, storing immense databases of cellular and subcellular data for the whole brain? Will an army of data miners going through the data-brain understand how the brain produces behavior? I find the idea of transforming a living, wet, biological brain into a data-brain extremely exciting. It is a new framework, a new approach to (or point of view on) the brain and its biology. I am just saying that we now need to develop the tools (computational, statistical…) to deal with the richness we are about to encounter. We need to change the way we conceive experiments. And this is where the paradigm shift happens.

 

2. Clearing the brain and preserving your own brain.

A mixture of passion, excitement and dedication to our work (and in some cases stupidity…) often makes us sacrifice our own safety in favour of scientific discovery (read the gastritis story). I have spent many years in several labs, and I have sometimes seen things that should not happen (handling chemicals without gloves, or while totally ignoring the risks they pose to your own health). Bad, very bad. Dangerous, very dangerous. So, back to CLARITY: be cautious when perfusing with the hydrogel (a nice name for a «lethal» mix of acrylamide, bis-acrylamide, PFA and SDS). Chung et al. clearly state in the methods section that the hydrogel is neurotoxic and carcinogenic. To conclude: kids, DO try this at home (lab), but your health (and your labmates’, too) comes first! I don’t think that rereading your amazing Science/Nature/Neuron/Cell papers will ever be of comfort while lying in a hospital bed with lung cancer. (Sorry, it is an awful image, but carelessness about lab-safety procedures really pisses me off.)

 

3. What next at the Allen Brain Institute?

The Allen Brain Institute was set up to provide a comprehensive map of the brain (see its recent connectivity project). The institute has an impressive automated workforce (and pipeline) for processing sliced tissue. Will they adopt CLARITY in the future? (Luckily for them, brain slicing was the only step they couldn’t automate…)

 

4. When will CLARITY become a standard for brain studies?

As I said before, this is revolutionary. It will change the way we deal with the brain (the data-brain). So my question is: when will it become a common procedure in neuroscience labs? 2-5 years? 10-20 years? And this leads to the next question.

 

5. Can I borrow 2L of your primary antibody?

OK, this is an exaggeration. There is no need to use liters of antibody; increasing the incubation time should do (I think). But nonetheless, how many labs will have the infrastructure to acquire, store and process the data-brain? It is not yet clear to me whether the CLARITY procedure can be applied to parts of the brain (say, the hippocampus, striatum or amygdala). That would make the approach more tractable (in terms of both money and complexity).

 

6. Reduce, Reuse, Reduce, Reuse…

Maybe we will no longer need large sample sizes. We can use and reuse the same brain, and this may mean a reduction in animal usage for research purposes.

Maybe we will embrace a full paradigm shift and move from null-hypothesis testing to a purely Bayesian approach, finally testing our hypotheses directly and achieving higher statistical power with smaller sample sizes.
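As a toy illustration of the contrast (Python; the data are simulated, and the flat-prior normal model is a deliberate oversimplification of a real Bayesian workflow):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(10.0, 2.0, size=8)   # deliberately small samples
treated = rng.normal(12.0, 2.0, size=8)

# Null-hypothesis route: a p-value against "no difference at all".
t_stat, p = stats.ttest_ind(treated, control)

# Minimal Bayesian route: with a flat prior and a normal likelihood, the
# posterior of the mean difference is approximately normal, so the posterior
# probability that the treated mean exceeds the control mean is:
diff = treated.mean() - control.mean()
se = np.sqrt(treated.var(ddof=1) / len(treated)
             + control.var(ddof=1) / len(control))
p_treated_higher = stats.norm.cdf(diff / se)
print(f"p = {p:.3f}, P(treated > control | data) = {p_treated_higher:.3f}")
```

The Bayesian quantity answers the question we actually care about (how probable is an effect, given the data) rather than the probability of the data under a null we do not believe.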

 

Healthy brain research – is it?

Brain research is living one of its most productive moments in history. When we plot the number of papers published each year (PubMed search: BRAIN or NEURON) as percent change relative to the previous year, we clearly see … Continue reading
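The year-over-year measure described in this teaser is one line of arithmetic; a sketch with invented counts (Python; the numbers are placeholders, not actual PubMed results):

```python
# Hypothetical yearly publication counts (placeholders, not real PubMed data).
counts = {2008: 60000, 2009: 63000, 2010: 66150, 2011: 68000}

years = sorted(counts)
# Percent change of each year relative to the previous one.
pct_change = {
    y: 100.0 * (counts[y] - counts[prev]) / counts[prev]
    for prev, y in zip(years, years[1:])
}
print(pct_change)  # e.g. 2009 grew 5.0% over 2008
```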

Hubbian at SfN13?

Exciting news!!! Hubbian is currently working on an abstract-use license agreement with the Society for Neuroscience! If everything turns out well, we will be able to use Hubbian for #SfN13! A lot has changed since Hubbian was born (the … Continue reading

Automatic cell counting

Finding neuronal activity traces. Cognitive functions are the result of neuronal activity occurring at different levels of scale (from subcellular compartments to brain regions) and on different time scales (from microseconds to years). Therefore, this neuronal activity leaves a multidimensional trace spanning different levels of scale and time. The neuroscientist’s job is to develop tools/techniques/procedures […]