Sonification of biomolecular dynamics

My colleague Alex Jones and I have been looking into extending the ways scientists display their results with sonification, or audio display: using sound to convey information about a data set. Our group is primarily concerned with chemical dynamics, so our datasets are usually molecular dynamics (MD) trajectories. In my research I'm interested in systems whose dynamics are approximately metastable (the classic paper on protein metastability is here). When the dynamics are metastable, the system is well approximated by a hidden Markov model (see here for a paper making the link between biomolecular dynamics and HMMs).
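To give a feel for what "metastable" means here, the following is a toy sketch (not our actual analysis pipeline): a two-state Markov chain with high self-transition probabilities dwells in each state for long stretches, and an observable (such as a dihedral angle, with made-up means and noise level here) only reveals the hidden state noisily — exactly the situation an HMM is fitted to.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state metastable system: self-transition probabilities close
# to 1 mean the chain dwells in each state for long stretches.
T = np.array([[0.99, 0.01],
              [0.02, 0.98]])

n_steps = 5000
states = np.empty(n_steps, dtype=int)
states[0] = 0
for t in range(1, n_steps):
    states[t] = rng.choice(2, p=T[states[t - 1]])

# Each hidden state emits a noisy observable (think: a dihedral angle);
# an HMM is fitted to this observable, not to the hidden states.
means = np.array([-1.0, 1.0])
obs = means[states] + 0.3 * rng.standard_normal(n_steps)

# Mean dwell time in state i is roughly 1 / (1 - T[i, i]).
dwell = 1.0 / (1.0 - np.diag(T))  # roughly [100, 50] steps
```

The long dwell times relative to the emission noise are what make the hidden states recoverable at all.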

We recently took one of the simpler metastable systems, alanine dipeptide (AD), and made an audio-visual display of its dynamics. The goal of the sonification was to let the scientist watch the visual information (the structure and the dynamics) while hearing which metastable state the system is in, how stable that state is, and when it moves between metastable states. These qualities are very difficult to display visually (although it can be done), so the question is: does this sonification bring out these features effectively?

The results are here so you can judge for yourself.

ad_free_energy_sonification from Alex Jones on Vimeo.

All the code can be found on our Open Science Framework page here. There's some pretty funny hydrogen motion (the white atoms), but this is due to the 3 ps Butterworth filter applied to the trajectory. We did this to smooth the motion and make it easier on the eye, as the model frame rate (1 ps simulated time : 0.05 s physical time) is quite low.
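For readers curious about the smoothing step, here is a minimal sketch of how a low-pass Butterworth filter can be applied frame-wise to trajectory coordinates with SciPy. The array shape, filter order, and 1 ps frame spacing are assumptions for illustration; only the 3 ps cutoff comes from the text above.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Hypothetical coordinate array: (n_frames, n_atoms, 3), 1 ps per frame.
rng = np.random.default_rng(1)
coords = np.cumsum(rng.standard_normal((500, 22, 3)), axis=0) * 0.01

dt = 1.0              # ps per frame (assumed)
cutoff_period = 3.0   # ps -- suppress motion faster than this
nyquist = 0.5 / dt
wn = (1.0 / cutoff_period) / nyquist  # cutoff normalised to Nyquist

b, a = butter(N=4, Wn=wn, btype="low")
# filtfilt runs the filter forwards and backwards, so the smoothed
# trajectory has no phase lag relative to the original.
smooth = filtfilt(b, a, coords, axis=0)
```

Zero-phase filtering matters here: a one-pass filter would delay the smoothed motion relative to the audio, which is synchronised to the unsmoothed model parameters.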

The synopsis of our approach: we took a publicly available AD dataset, projected a random sample of 500 of these trajectories onto a set of 500 discrete states (the two 500s are coincidental), and then fitted a Markov state model and a hidden Markov model using the 500 discrete chains as data. We then used parameters of the model to synthesise sound in Max/MSP.
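The Markov state model step reduces to counting transitions between discrete states and row-normalising. Here is a self-contained sketch of that estimator; the discrete chains below are random stand-ins for the real clustered trajectories, and the function name and pseudocount are my own choices.

```python
import numpy as np

def estimate_msm(dtrajs, n_states, lag=1):
    """Maximum-likelihood Markov state model from discrete trajectories.

    dtrajs : list of 1-D integer arrays (state index per frame)
    lag    : lag time in frames at which transitions are counted
    """
    counts = np.zeros((n_states, n_states))
    for dtraj in dtrajs:
        for i, j in zip(dtraj[:-lag], dtraj[lag:]):
            counts[i, j] += 1
    # A tiny pseudocount keeps rows with no observed transitions
    # well defined when we normalise.
    counts += 1e-12
    return counts / counts.sum(axis=1, keepdims=True)

# Hypothetical discrete chains standing in for the 500 clustered
# AD trajectories (the real ones come from clustering coordinates).
rng = np.random.default_rng(2)
dtrajs = [rng.integers(0, 4, size=100) for _ in range(10)]
T = estimate_msm(dtrajs, n_states=4)
```

In practice we used an established MSM/HMM package rather than rolling our own, but the transition matrix it produces has exactly this structure: each row is a probability distribution over next states.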

The HMM we created had four metastable states, and these were mapped to four different note clusters. The more stable the state, the deeper the fundamental of the note cluster. When AD enters a transition region between metastable states there is a noisy effect. The kick drum/pulse sound is tied to the molecule's overall stability. There are also some synthesized tones which correspond to the fast (non-metastable) dynamics of AD.
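One way to realise the "more stable → deeper fundamental" mapping is to rank states by their stationary probability and spread pitches logarithmically across a register. This is an illustrative sketch, not the mapping in our Max/MSP patch; the transition matrix, frequency range, and function names are all made up for the example.

```python
import numpy as np

def stationary_distribution(T):
    # Left eigenvector of T for eigenvalue 1, normalised to sum to 1.
    evals, evecs = np.linalg.eig(T.T)
    pi = np.real(evecs[:, np.argmax(np.real(evals))])
    return pi / pi.sum()

def fundamentals(T, f_low=55.0, f_high=440.0):
    """Map state stability to pitch: the more probable (more stable)
    the metastable state, the deeper its fundamental."""
    pi = stationary_distribution(T)
    ranks = np.argsort(np.argsort(-pi))  # rank 0 = most stable
    n = len(pi)
    # Spread fundamentals logarithmically from f_low to f_high.
    return f_low * (f_high / f_low) ** (ranks / max(n - 1, 1))

# Toy 4-state transition matrix (rows sum to 1).
T = np.array([[0.97, 0.01, 0.01, 0.01],
              [0.02, 0.95, 0.02, 0.01],
              [0.03, 0.02, 0.90, 0.05],
              [0.05, 0.05, 0.05, 0.85]])
freqs = fundamentals(T)
```

A logarithmic spread keeps the perceived pitch steps between states roughly equal, since hearing is logarithmic in frequency.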

The audio-visual display was created by taking an example trajectory and sending the model parameters for each frame of the trajectory as messages to the synthesizer. The resulting sound was exported and overlaid on an animation, which is what you see above.
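The per-frame messaging can be sketched with nothing but the standard library: serialise the model parameters for one frame and send them over UDP to the synthesizer. The field names, JSON encoding, and port number are illustrative assumptions; the real pipeline used whatever message format our Max/MSP patch expected.

```python
import json
import socket

def frame_message(frame_idx, hmm_state, stability, in_transition):
    """Serialise the model parameters for one trajectory frame.

    Field names are illustrative; the receiving patch defines the
    actual message format.
    """
    return json.dumps({
        "frame": frame_idx,
        "state": int(hmm_state),
        "stability": float(stability),
        "transition": bool(in_transition),
    })

# Fire one message per frame over UDP (port number is an assumption).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for i, (s, p, tr) in enumerate([(0, 0.9, False), (1, 0.4, True)]):
    sock.sendto(frame_message(i, s, p, tr).encode(), ("127.0.0.1", 7400))
```

Driving the synthesizer frame by frame like this is what keeps the sound locked to the animation's timeline.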

 
