Monday, December 14, 2009

A Bayesian model of Attentional Load

Last Monday, 7th December 2009, I attended another talk in the Cognium building of the University of Bremen. The talk was given by Prof. Dr. Peter Dayan from the Gatsby Computational Neuroscience Unit, Alexandra House, London. The title was "A Bayesian Model of Attentional Load".

I did not understand much of the talk. It was about statistics, mostly Bayesian (of course). Basically, we have attention, EEG signals are recorded, and the signals are then classified with a Bayesian method. I am sorry, I really couldn't get the idea of the talk.
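From the little I did catch, the core idea of Bayesian classification can be sketched like this. This is a toy example of my own with invented Gaussian features and made-up class names; it is certainly not the speaker's actual model:

```python
import numpy as np

# Toy Bayesian classifier for a single EEG feature (e.g. some band power).
# Two hypothetical attentional states; all numbers here are invented.
params = {"low load": (1.0, 0.5), "high load": (2.0, 0.5)}  # (mean, std)
prior = {"low load": 0.5, "high load": 0.5}

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def posterior(x):
    # Bayes' rule: p(state | x) is proportional to p(x | state) * p(state)
    unnorm = {s: gaussian_pdf(x, *params[s]) * prior[s] for s in params}
    z = sum(unnorm.values())
    return {s: p / z for s, p in unnorm.items()}

print(posterior(1.8))  # "high load" dominates for a feature value near 2.0
```

The real model in the talk is surely far more sophisticated, but this is the heart of any Bayesian method: likelihood times prior, normalized.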

I am keeping the presenter's name and the talk title so that I have a contact in case I want to continue with a Ph.D. next year (2010).

Saturday, December 5, 2009

A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation

"A High-Throughput Screening Approach to Discovering Good Forms of Biologically Inspired Visual Representation" is the title of a paper from MIT and Harvard researchers. The paper can be downloaded from PLoS Computational Biology. It is about brain modelling: they want to model how our brain processes visual information. The hardware used is the GPU (graphics processing unit).

They published a video, which can be seen here. In the video you can hear their comments about the IBM cat brain. They imply that the IBM machine does have the raw power of a cat brain, but has not (yet) succeeded in modelling how the cat brain actually works. The same news from Smart Planet can be read here.


Finding a better way for computers to "see" from Cox Lab @ Rowland Institute on Vimeo.


Another piece of interesting recent news is about an Intel processor with 48 cores. Actually, it is 24 dual-core processors connected in a mesh network. Besides GPUs, this 48-core processor could also be used for brain modelling.

Are we close to the Singularity?

Brain, Movement, and Space-Time Perception

Last Monday, 30th November 2009, I went to a talk in the Cognium again. Unlike the previous time, I arrived early and could pick a good seat.

There were two presenters: Prof. Dr. David Burr and Dr. Maria C. Morrone.
Both are from the Istituto di Neuroscienze del CNR in Pisa, Italy.

Maria C. Morrone presented "Time & Space in the brain for different frames of reference".
David Burr presented "Cross-modal sensory fusion & calibration: evidence from development, On Bishops & Babies".


The first presentation, by Dr. Morrone, was about time perception, the perception of the space around us, and the posture and movement of our body and body parts, e.g. the hand or the head. I am not a neurobiologist, so I could not follow much of the vocabulary: retinal snapshot, retinotopic map, allocentric map, ipsilateral vs. contralateral, spatiotopic vs. retinotopic, craniotopic vs. dermotopic, and so on. There were many graphs showing correlations and brain areas. The presentation was also too fast, and in English with an Italian accent. It was awful for me.

In the end, I understood only the conclusion about the link between action and time perception:
time perception in the human brain depends on posture. If we change our coordinates (e.g. our posture), our "internal clock" changes. Time perception is highly plastic.

I remember a quote about time perception:
"Put your hand on a hot stove for a minute, and it seems like an hour. Sit with a pretty girl for an hour, and it seems like a minute. THAT's relativity." (Albert Einstein)


The second presentation was more interesting. Prof. Burr talked about the way we use our senses to build a perception of space. Is our perception robust? How do we develop robust perception over the years?

We use haptic and visual information to explore our world. We look at something to guess its size, and we touch it to get an idea of its size. From these two sensors, our brain processes information about the size of a thing. We gain more ways to explore space by adding auditory information: sounds. And not only size; we also get an idea of space from our sense of orientation.

Prof. Burr showed some interesting research about the robustness of our perception.
What happens if we see something blurry but can still touch it?
What happens if there is a conflict of direction between our eyes and ears?

Fusion of our senses improves precision. Precision means that the positions estimated by our visual sense and our haptic sense (and our other senses) are close to each other. We use all of our senses to get a precise perception of space (and time).

Calibration improves accuracy. Accuracy means that our senses point close to the target position. But which sense calibrates the others?
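If I understood the idea correctly, the standard statistical account of such fusion weights each sense by its reliability, i.e. the inverse of its variance. Here is a small sketch of my own, with invented numbers, just to illustrate the principle:

```python
import numpy as np

# Toy sketch of reliability-weighted cue fusion: each sense delivers a
# noisy estimate of the same quantity (e.g. object size in cm), and the
# optimal combination weights each cue by its inverse variance.
# All numbers below are invented for illustration.
def fuse(estimates, variances):
    v = np.asarray(variances, dtype=float)
    w = (1.0 / v) / np.sum(1.0 / v)          # reliability weights, sum to 1
    fused = float(np.dot(w, estimates))      # weighted mean of the cues
    fused_var = 1.0 / np.sum(1.0 / v)        # variance of the fused estimate
    return fused, fused_var

# Vision says 10.0 cm (sharp image, variance 0.25); touch says 11.0 cm (variance 1.0)
size, var = fuse([10.0, 11.0], [0.25, 1.0])
print(size, var)  # the fused estimate leans toward the more reliable cue
```

With this weighting, the fused variance is always smaller than either single-cue variance, which is exactly why fusing the senses improves precision; and blurring the image (raising the visual variance) shifts the weight toward touch.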

A subject has to judge the sizes of boxes. Two boxes are separated by a wooden panel: the subject sees only one side of the panel with one box; the other box is behind it. The subject sees one box but touches both the front box and the back box. The combinations of boxes change, and the subject has to say which one is bigger.

Then a blurring obstacle is added so that visual perception is disturbed. Is the subject still good at the task?

After size, the subject has to judge orientation. The boxes can be twisted. If the front box is twisted, the back box twists too, because both are connected. A conflict arises when the two boxes are set at different angles so that they are not parallel: the front and back boxes then have an orientation conflict. What is the effect with and without the blurring obstacle?

Another experiment uses a circular dot on a television screen, with two loudspeakers, left and right. The dot can move on the screen, and it is sometimes blurry. The speakers can blip, sometimes consistently (if the dot is on the left, the left speaker makes the sound), but sometimes there is a visual-auditory conflict.

The subjects were of different ages.

What is the result?

The five-year-old subjects have problems with sensory conflicts and with the obstacles. Then, as humans grow, perception becomes more robust: ten-year-old and adult subjects have robust perception. They keep a consistent (robust) sense of size and orientation even when blurring obstacles or audiovisual conflicts are introduced.

When we calibrate positions to build a space-time perception, we use the most robust sense: the most robust sense calibrates the others.

Another question is: what about blind people?

It turns out that blind subjects do well at sensing size but are bad at orientation. Prof. Burr concluded that the lack of the calibrating sense (vision) at an early age impacts touch.

It was hard to understand this presentation. Both presenters spoke really fast, one with an Italian accent and the other with an Australian accent. Pictures show these things better than words, and unfortunately I can only write in this blog.


Next Monday, there is another presentation: "A Bayesian Model of Attentional Load".
Can't wait to see what it will be.

Wednesday, November 25, 2009

About IBM Cat Brain

My previous post was about the IBM Brain Simulator. It turns out that there is still a hot discussion about whether it can simulate brain activity or not, as described in an article in IEEE Spectrum.

The IBM Brain Simulator can model neurons, synapses, and the connectivity between them. It is true that the number of neurons and so on equals that of a cat's brain or the human visual cortex. But one researcher from the EPFL Blue Brain project, Henry Markram, disagreed with the method that the IBM Brain Simulator from the Almaden lab used to model the synapses. He questioned why ion channels in the synapses were not modelled in Almaden's cat brain simulator.

In my opinion, it is already a breakthrough to model neuronal connectivity. Modelling the synaptic functionality of every connection is a really hard job; it would take more processors and consume more energy than just 1.4 MW.

My interest is to combine a brain simulator and a brain-computer interface to create a quasi-telepathic link between human and computer. I don't really care which brain model they use: IBM Almaden's or EPFL Blue Brain's.

Saturday, November 21, 2009

IBM Biggest Brain Simulator in 2009

Three days ago, I read an article from IEEE Spectrum about the IBM Brain Simulator. The following day, I read two articles about the brain simulator from Smart Planet: the first focuses on the technology, and the other on what the future may bring. Popular Mechanics also has an article comparing the brain modelling from IBM's Almaden research center with work from Stanford University and the Neurosciences Institute in San Diego. IEEE Spectrum says that Europe also has a similar project, called Blue Brain, at EPFL in Lausanne, Switzerland.

The IBM Brain Simulator was featured at the Supercomputing 2009 event in Portland, Oregon. That event is about high-performance computing, so it is not about brain-computer interfaces (BCI) or brain-controlled robots. But there is a possibility of combining BCI with this kind of brain simulator. Maybe I can participate in that kind of research.

There is a concern about the future relation between computers and humans. The second article from Smart Planet discusses this. A friend of mine, Mova Al'Afghani, put slides by Karl Fisch on his blog with predictions that in 2013 a supercomputer will exceed the capability of the human brain, and that in 2049 a $1000 computer will exceed the capability of the entire human species. In 1999, Ray Kurzweil wrote a book, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, which predicts that by 2020 a $1000 computer will exceed human intelligence. So you know who makes the predictions. Ray Kurzweil also wrote another book: The Singularity Is Near: When Humans Transcend Biology. IBM says that by 2019 it can mimic the human brain, which has 20 billion neurons and 200 trillion synapses.

Singularity (in this context) means that computers have reached human intelligence and capabilities. The TV series Terminator: SCC mentions this singularity. Some people are afraid of it and have created an anti-Skynet group (Skynet is the fictional "machine" in Terminator). The optimists have built Singularity University to prepare humanity for accelerating technological change.

These are the features of IBM's biggest brain simulator:
  • runs on Dawn, a BlueGene/P supercomputer
  • uses the C2 cortical simulator
  • funded by DARPA (U.S. Defense Advanced Research Projects Agency)
  • cost 40 million US dollars
  • contains 147,456 processors
  • uses 147 TB of RAM
  • consumes 1.4 MW
  • occupies 10 rows of racks (and miles of cables)
  • uses 6,675 tons of air-conditioning equipment spouting 2.7 million cubic feet of chilled air
  • uses a universal neural circuit called a microcolumn to mimic a single neuron
  • exceeds a cat's brain capability
  • can simulate only the capability of the human visual cortex
  • takes 500 seconds to simulate 5 seconds of real mammalian brain activity (on average)
One more interesting thing: with the same technology, a supercomputer exceeding the human brain would need between 100 MW and 1 GW. It would take a nuclear power plant to simulate just a single human brain. The real human brain takes only 20 watts.
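The figures above allow a quick back-of-the-envelope calculation (my own arithmetic, just putting the reported numbers together):

```python
# Back-of-the-envelope arithmetic from the figures reported above.
slowdown = 500 / 5                          # seconds of computing per second of brain time
brain_watts = 20.0                          # a real human brain: about 20 W
sim_watts_low, sim_watts_high = 100e6, 1e9  # projected need: 100 MW to 1 GW

print(slowdown)                             # 100x slower than real time
print(sim_watts_low / brain_watts)          # at least 5 million times the brain's power
print(sim_watts_high / brain_watts)         # up to 50 million times
```

So even in the optimistic case, the simulator would burn millions of times more energy than the organ it imitates.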

Monday, November 16, 2009

The Gamma-Trait

Today (16th November 2009),

I came to a colloquium in the COGNIUM building of the University of Bremen. The talk was presented by Prof. Dr. Christoph Herrmann from the Institut für Psychologie of the University of Magdeburg. Actually, he has since moved to the Institut für Psychologie of the University of Oldenburg.

The title of the talk was "Der Gamma-Trait: Inter-individuelle Variation der EEG Gamma-Band-Aktivität spiegelt Unterschiede kognitiver Funktionen wider". The title is in German, but the talk was in English. So let me translate the title: "The Gamma Trait: inter-individual variation of EEG gamma band activity reflects differences in cognitive functions."

The gamma band is the EEG frequency range above 30 Hz. Most of the experiments shown concerned event-related potentials (ERPs) in the EEG. The events are created by stimuli, namely pictures showing patterns, and the response is measured with EEG.
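To give a feeling for what "gamma band activity" means computationally, here is a minimal sketch of my own (not from the talk): it measures how much power a synthetic signal has above 30 Hz.

```python
import numpy as np

# Minimal sketch (mine, not from the talk): estimate gamma-band power
# (> 30 Hz) in a synthetic EEG-like signal via the FFT.
fs = 256                       # sampling rate in Hz (typical for EEG)
t = np.arange(0, 2, 1 / fs)    # 2 seconds of signal, 512 samples
# Synthetic signal: a 10 Hz "alpha" component plus a 40 Hz "gamma" component
x = 1.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

spectrum = np.abs(np.fft.rfft(x)) ** 2          # power spectrum
freqs = np.fft.rfftfreq(len(x), 1 / fs)         # frequency of each bin

gamma_power = spectrum[freqs > 30].sum()
total_power = spectrum.sum()
print(gamma_power / total_power)                # fraction of power above 30 Hz
```

On real EEG one would of course use proper spectral estimation on recorded data, but the principle, band power above 30 Hz, is the same.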

I didn't take notes during the talk, and I also came late.
But I remember a few things from it.


There is a relationship between genes and cognition.
Certain genes play a role in dopamine production and other neuronal activity.
Dopamine is related to gamma band activity,
and gamma band activity is generated by certain stimuli.
So the response of the human brain, i.e. cognition, depends on genes.

It could mean that the way we humans think and act differently is due to our genes.
We were meant to be different from each other. So religious fundamentalism and racism, which hate those who are different, really go against our nature.


Prior knowledge is important.
There were two experiments showing pictures and recording EEG: the first experiment without prior knowledge, and the second (two weeks later) with the prior knowledge from the first one. The second experiment always showed a higher gamma trait.


Passing an electric current through your head increases your gamma trait.
The experiment was done with both DC and AC voltage.
But if you think you can get smarter by running an electric current through your head, you are wrong: the increase of the gamma trait lasts only a few minutes.

It is even more foolish to think an electroshock through your brain can make you a genius. :-)


A subject who has the task of differentiating patterns shows an interesting gamma trait.
If a pattern similar to the target pattern is shown, there is also gamma activity, although not as high as for the target pattern.
For example, take patterns A, B, C, and D, where A has similarities with B and C but is totally different from D. The subject has to pay attention to pattern A. Gamma activity then shows the highest response for A, a small response for B and C, and no response for D.


The talk also showed different gamma activity between healthy people and people with ADHD, but that is too complicated to cover in this blog.


Next Monday, I will go to another talk: "A Bayesian Model of Attentional Load".
Maybe that talk will be useful for my Master's thesis.

Brain-Computer Interface definition, Allison et al, 2008

"Brain-computer interface (BCI) systems are devices that allow people to communicate without moving. Instead, direct measures of brain activity are translated into messages or commands."

From the paper:
B. Allison, I. Volosyak, T. Lüth, D. Valbuena, I. Sugiarto, M.A. Spiegel, A. Teymourian, I.S. Condro, A. Brindusescu, K. Stenzel, H. Cecotti and A. Gräser. 2008. "BCI Demographics I: How many (and what kinds of) people can use an SSVEP BCI?". Proc. 4th International Brain-computer Interface Workshop and Training Course. Graz, Austria, September 18th-21st. pp 333-338.

They are all from Institute of Automation (IAT), University of Bremen, Bremen, Germany.

You can also visit B. Allison's blog or I.S. Condro's blog (yes, that's me).
Some of us have Facebook:

Saturday, November 14, 2009

Brain-Computer Interface definition, Wolpaw et al, 2002

"A BCI is a communication system in which messages or commands that an individual sends to the external world do not pass through the brain's normal output pathways of peripheral nerves and muscles."

From the paper:
Jonathan R. Wolpaw, Niels Birbaumer, Dennis J. McFarland, Gert Pfurtscheller, Theresa M. Vaughan. 2002. "Brain-computer interfaces for communication and control". Clinical Neurophysiology. Ireland: Elsevier. Vol. 113, pp 767-791.

This paper has been cited by more than 1000 papers.

Jonathan R. Wolpaw is a Neuroscientist from the Laboratory of Nervous System Disorders, Wadsworth Center, New York State Department of Health, Albany, New York and from the State University of New York, USA.

Niels Birbaumer is a Neurobiologist from the Institute of Medical Psychology and Behavioral Neurobiology, University of Tuebingen, Tuebingen, Germany and from the Department of Psychophysiology, University of Padova, Padova, Italy.

Dennis J. McFarland and Theresa M. Vaughan are also from the Laboratory of Nervous System Disorders, Wadsworth Center, New York State Department of Health, New York, USA.

Gert Pfurtscheller is from the Department of Medical Informatics, Institute of Biomedical Engineering, Technical University of Graz, Graz, Austria. TU Graz now has a BCI laboratory.

  • BCI = Brain Computer Interface
  • Padova = Padua

Thursday, November 5, 2009

Brain-Machine Interface in the 19th century

Brain-Computer Interface (BCI) is not really a new technology, as we can read in the news from IEEE Spectrum. There was already a headset in the 19th century, made by pseudo-scientists called "phrenologists". The purpose of phrenology was to find a correlation between a person's character and the morphology of the skull.

In one article in IEEE Spectrum, picture number 2 shows the headset. The caption reads:

HEAD CASE: Today’s electromedical researchers are busy mapping the brain, but 19th-century electrical engineers were already on the case. This electrical phrenology apparatus consists of two parts, a headpiece and a wooden box containing a sledge induction coil and three batteries. The headpiece forms a crown 23 centimeters (9 inches) in diameter with 13 brass electrodes evenly spaced across it.

From the picture, we can see early research on brain-machine interfaces. Well, it is not really a machine, since its function is unknown. More information about the history of phrenology can be found on their website.

Saturday, October 31, 2009

Mind Reading Technology: the Mailing List

Can a BCI be used to read your mind?
The Germans would say "Jain" (Ja und Nein: yes and no).

Yes, we can record your brainwaves with an EEG scanner and then recognize some patterns in the signals. We can also see wonderful 2D, 3D, and "4D" patterns with MRI.

As a "neuroscientist wannabe", I have been to a seminar showing that some phonemes can be "extracted" from EEG signals. So you don't have to move your tongue and speak: the mere act of thinking about speech can be read by a machine. Scientists (neuroscientists, neurolinguists, and engineers) have been doing this research. Maybe in the next 30 years, machines will be able to interpret how the brain processes language.

The human mind is very complex, and a machine cannot really "read" your mind. Computers can read patterns in brain signals, but only a little of the information in the human "mind" can be extracted by a machine.

I have found a mailing list on Yahoo Groups which discusses mind reading technology. OK, it is not really a discussion; it is a monologue. Only one man posts, and the others just read the information inside.

The site address:

The list has links to some websites about mind "reading" technology.

Monday, October 5, 2009

Tech Crunch Interviewed Bremen BCI in CeBit 2008

John Biggs from TechCrunch interviewed us, the BrainRobot research group from IAT, Uni Bremen. Brendan Allison, the leader of our group, explained many things about the Bremen Brain-Computer Interface (BCI) at the CeBIT 2008 event.

Here is a video showing applications of the Bremen SSVEP BCI for spelling and for moving a robot.

You can see me for a couple of seconds: yes, the one with the long hair. I was busy preparing a subject to enter the Matrix; a subject was about to participate in a spelling experiment.

The Next Uri Geller in CeBit 2008

At CeBIT 2008 in Hannover, the IAT from Uni Bremen was placed in a corner. It was quite a convenient spot: people walked around past that corner. That is the reason we could get 106 subjects in one week.

This video shows how I tested the spelling program before doing experiments with random subjects. You will see the stimulator with 5 flickering boxes: Left, Right, Up, Down, and Select. In the middle there are letters which we can choose to build a word. The hardware used was a notebook, an EEG amplifier called g.USBamp from Guger Technologies, EEG electrodes, and an EEG cap. The news can be seen on Science Blogs.
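Roughly speaking, an SSVEP BCI decides which box you are looking at by finding which flicker frequency dominates your EEG. Here is a much simplified sketch of my own, with invented flicker frequencies; it is not the actual Bremen implementation:

```python
import numpy as np

# Simplified SSVEP detection sketch (mine, not the actual Bremen code):
# each box flickers at its own frequency; the box the subject looks at
# produces the strongest response at that frequency in the EEG.
fs = 128                                  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)               # 4 seconds of signal
targets = {"Left": 13.0, "Right": 14.0, "Up": 15.0, "Down": 16.0, "Select": 17.0}

# Synthetic EEG: the subject looks at "Up" (15 Hz), plus some noise
rng = np.random.default_rng(42)
eeg = np.sin(2 * np.pi * 15.0 * t) + 0.3 * rng.standard_normal(len(t))

spectrum = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)

def power_at(f):
    # Spectral magnitude at the bin closest to frequency f
    return spectrum[np.argmin(np.abs(freqs - f))]

chosen = max(targets, key=lambda name: power_at(targets[name]))
print(chosen)  # expected: Up
```

A real system must also deal with harmonics, artifacts, and the case where the subject looks at nothing at all, and that is where most of the engineering effort goes.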

Link: The next REAL Uri Geller

FYI, Uri Geller is a mentalist who likes to bend spoons. He is famous in Germany,
and The Next Uri Geller is a TV show there.

BrainRobot in CeBit 2008

In March 2008, the BrainRobot research group of the IAT of the University of Bremen went to CeBIT in Hannover. We conducted a study of an SSVEP-based BCI application there: experiments on whether people can spell words using their brain waves (EEG). We got 106 subjects.

Below is a video about what happened at CeBIT.



The result of the experiments can be seen in Allison, et al, 2008, "BCI Demographics: How many (and what kinds of) people can use an SSVEP BCI?", Proc 4th International Brain-computer Interface Workshop and Training Course, Graz, Austria, pp 333-338.

Saturday, October 3, 2009

The first neuron

Hello, world!

This is my blog about the brain, and this is my first post, written to introduce the blog.

I am Ignatius Sapto Condro Atmawan Bisawarna. People call me "Condro". I have participated in brain research in Bremen since 2008 (or maybe the end of 2007). I am a Master's student at the University of Bremen. There we have an institute called IAT and a research group called BrainRobot.

At the end of 2007, I helped a friend put an EEG cap and gel on a head. My friend is Indar Sugiarto. He was doing his project on a stimulator for SSVEP using the monitor of a desktop PC and a laptop. I think that was the beginning of my participation in Bremen brain research.


EEG stands for electroencephalogram or electroencephalography.
SSVEP stands for steady-state visually evoked potential.


In March 2008, BrainRobot and I went to CeBIT Hannover. In one week, we got more than 100 subjects participating in SSVEP-based BCI research. We had a spelling application, and the subjects had to spell some words using their EEG.


BCI is Brain-Computer Interface.


In November 2008, I started my Master's project, "Final Preparation of the CeBit Data". In 2009, I finished the project and started settling on a thesis topic in this area. Now I have made up my mind, and the topic is "Improvement of Response Times of SSVEP-based Brain-Computer Interfaces". I will do some computing related to time series analysis.

While doing my thesis, I made this blog, hoping that AdSense can give me some money.