Monday, December 14, 2009

A Bayesian model of Attentional Load

Last Monday, 7 December 2009, I attended another talk in the Cognium building of the University of Bremen. The talk was presented by Prof. Dr. Peter Dayan, from the Gatsby Computational Neuroscience Unit, Alexandra House, London. The talk title was "A Bayesian Model of Attentional Load".

I did not understand much of the talk. It was about statistics, mostly Bayesian (of course). Basically, we have attention, and EEG signals are then classified using a Bayesian method. I am sorry, I really couldn't get the idea of the talk.
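As far as I understood, "Bayesian classification" just means applying Bayes' rule to decide which state an observation belongs to. Here is a minimal sketch I wrote myself to get the idea; the feature, the two attention states, and all numbers are invented for illustration, not from the talk.

```python
import math

# Toy Gaussian class-conditional model: one EEG feature (e.g. mean alpha power)
# under two hypothetical attention states. All numbers are made up.
priors = {"high_load": 0.5, "low_load": 0.5}
params = {"high_load": (2.0, 0.5),   # (mean, std) of the feature under high load
          "low_load":  (4.0, 0.8)}

def gauss(x, mu, sigma):
    """Gaussian probability density at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def posterior(x):
    # Bayes' rule: P(state | x) is proportional to P(x | state) * P(state)
    unnorm = {s: gauss(x, *params[s]) * priors[s] for s in priors}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

print(posterior(2.2))  # a feature value near the "high_load" mean
```

A feature value of 2.2 lies close to the high-load mean, so the posterior strongly favours "high_load".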

I am keeping the presenter's name and the talk title so that I have a contact in case I want to continue with a Ph.D. next year (2010).

Saturday, December 5, 2009

A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation

"A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation" is the title of a paper by MIT and Harvard researchers. The paper can be downloaded from PLoS Computational Biology. It is about brain modelling: they want to model how our brain processes visual information. The hardware used is the GPU (graphics processing unit).

They published a video, which can be seen here. In the video you can hear their comments about the IBM "cat brain": they say implicitly that the IBM system does have the computing power of a cat brain, but it has not (yet) succeeded in modelling how a cat brain works. The same news from SmartPlanet can be read here.


Finding a better way for computers to "see" from Cox Lab @ Rowland Institute on Vimeo.


Another interesting piece of recent news is about the Intel processor with 48 cores. Actually, it is 24 dual cores connected in a mesh network. Besides GPUs, this 48-core processor could also be used for brain modelling.

Are we close to the Singularity?

Brain, Movement, and Space-Time Perception

Last Monday, 30 November 2009, I went to a talk in the Cognium again. Unlike at the previous talk, I was early this time, so I could pick a good seat.

There were two presenters: Prof. Dr. David Burr and Dr. Maria C. Morrone.
Both are from the Istituto di Neuroscienze del CNR, Pisa, Italy.

Maria C. Morrone presented "Time & Space in the brain for different frames of reference".
David Burr presented "Cross-modal sensory fusion & calibration: evidence from development, On Bishops & Babies".


The first presentation, by Dr. Morrone, was about time perception, the perception of the space around us, and the posture and movement of our body and body parts, e.g. the hand or head. I am not a neurobiologist, so I could not comprehend a lot of the vocabulary: retinal snapshot, retinotopic map, allocentric map, ipsilateral vs. contralateral, spatiotopic vs. retinotopic, craniotopic vs. dermotopic, and so on. There were many graphs showing correlations and areas of the brain. The presentation was also too fast, and in English with an Italian accent. It was awful for me.

In the end, I understood only the conclusion about the link between action and time (perception):
time perception in the human brain depends on posture. If we change our coordinates (e.g. our posture), our "internal clock" changes. Time perception is highly plastic.

I remember a quote about time perception:
"Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT's relativity." (Albert Einstein)


The second presentation was more interesting. Prof. Burr talked about the way we use our senses to build a perception of space. Is our perception robust? How do we develop robust perception over the years?

We use haptic and visual information to explore our world. We look at something to guess its size, and we touch it to get an idea of its size. Based on these two sensors, our brain processes information about the size of a thing. We gain more means of exploring space by adding auditory information: sounds. Besides size, we also get an idea of space from our sense of orientation.

Prof. Burr showed an interesting research about the robustness of our perception.
What happens if we see something blurry but we can still touch it?
What happens if we have conflict of direction between eyes and ears?

Fusion of our senses can give better precision. Precision means that the position estimates from our visual sense and our haptic sense (and other senses) lie close to each other. We use all of our senses to get a precise perception of space (and time).
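The standard way to model this kind of fusion, as I understand it, is reliability-weighted averaging (maximum-likelihood cue combination): each sense is weighted by the inverse of its variance, and the fused estimate is always more precise than either sense alone. A small sketch with invented numbers:

```python
# Reliability-weighted cue combination, a sketch with invented numbers
# (not data from the talk).
def fuse(mu_v, var_v, mu_h, var_h):
    """Fuse a visual and a haptic size estimate, weighting each by its reliability."""
    w_v = (1 / var_v) / (1 / var_v + 1 / var_h)   # reliability = inverse variance
    w_h = 1 - w_v
    mu = w_v * mu_v + w_h * mu_h
    var = (var_v * var_h) / (var_v + var_h)       # always <= min(var_v, var_h)
    return mu, var

# Sharp vision (variance 0.1) vs. clumsy touch (variance 0.4): vision dominates.
mu, var = fuse(5.0, 0.1, 5.6, 0.4)
print(mu, var)  # 5.12 0.08 -- closer to the visual estimate, more precise than either
```

Note that the fused variance (0.08) is smaller than both input variances, which is exactly why using two senses beats using one.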

Calibration can give better accuracy. Accuracy means that our senses point close to the target position. Which sense calibrates the other senses?
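If I understood the idea correctly, calibration means one sense supplies the reference and the other sense's systematic bias is removed against it. A toy sketch (my own illustration, invented numbers): vision is taken as the reference, and touch consistently overestimates.

```python
# Calibration sketch: the more robust sense (here vision) supplies the
# reference, and the haptic sense's systematic bias is estimated and removed.
visual = [5.0, 6.1, 4.9, 5.5]   # reference readings (invented numbers)
haptic = [5.6, 6.8, 5.4, 6.2]   # same objects; touch consistently overestimates

# Average signed difference between the two senses = systematic haptic bias.
bias = sum(h - v for h, v in zip(haptic, visual)) / len(visual)
calibrated = [h - bias for h in haptic]

print(bias)        # the systematic haptic error (0.625 here)
print(calibrated)  # haptic estimates now centred on the visual reference
```

The scatter of the haptic readings (their precision) is unchanged; only the constant offset (their accuracy) is corrected, which is the distinction Prof. Burr drew between fusion and calibration.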

A subject should differentiate the sizes of many boxes. Two boxes are separated by a piece of wood: the subject sees only one side of the wood with one box, while the other box is behind it. The subject sees one box but touches both the front box and the back box. The combinations of boxes change, and the subject is supposed to tell which one is big and which is not.

Then a blurry obstacle is added so that visual perception is disturbed. Is the subject still good at the task?

After size, the subject should differentiate orientation. The boxes can be twisted. If we twist the front box, the back box twists too, because both are connected. A conflict arises when the two boxes have different angles, so they are not parallel: the front and back boxes have an orientation conflict. What is the effect with and without the blurry obstacle?

Another experiment is to watch a circular dot on a television screen. There are also two loudspeakers: left and right. The dot can move on the screen, and sometimes it is blurry. The speakers can blip. Sometimes they blip consistently, which means that if the dot is on the left, the left speaker makes a sound. But there can also be a visual-auditory conflict.

The subjects are of different ages.

What are the results?

The 5-year-old subjects have problems with sensory conflicts and with obstacles. Then, as humans grow, perception gets more robust: ten-year-old and grown-up subjects have robust perception. They keep a consistent (robust) sense of size and orientation even when a blurry obstacle or an audiovisual conflict is introduced.

When we calibrate position to build a time-space perception, we rely on the most robust sense: the most robust sense calibrates the other senses.

Another question is: what about blind people?

It turns out that blind subjects can sense size well, but they are bad with orientation. Prof. Burr concluded that the lack of the calibrating sense (vision) at an early age impacts touch.

This presentation was hard to understand. Both presenters spoke really fast, one with an Italian accent and the other with an Australian accent. Pictures show these things better than words, and unfortunately I can only write in this blog.


Next Monday there is another presentation: "A Bayesian Model of Attentional Load".
Can't wait to see what it will be like.