
Brains, Brains, Brains…

“If the brain were so simple we could understand it, we would be so simple we couldn’t.” (Lyall Watson)

Summer time! For me it means working on bio-inspired algorithms, and the one I’ve been spending time on is the Artificial Neural Network (ANN). This had me asking my sister (who is working on her PhD in neuroscience) about how synapses, pathways, and so on work. This post will be on how ANNs were inspired and some of the materials I found interesting on the subject.

Let’s start with the obsession with neural networks and why it matters. Machines do complicated mathematical calculations in a matter of seconds, yet they have difficulty performing some easy tasks such as recognizing faces, understanding and speaking natural languages, or passing the Turing test. OK, let’s compare machines to our brain: a single transistor in your home computer is quite fast, limited only by the speed of light and the physical distance a signal has to propagate. A signal (ions) in a neuron, on the other hand, propagates at a fraction of that speed (Flake, 1999). This begs the question: which is better? A good comparison can be found here. One main fact is that our brain makes use of massive parallelism; it’s this massive interaction between axons and dendrites that contributes to how our brain works. Many argue that the comparison to computers is not very useful, as the two work quite differently from each other. Can we make a digital reconstruction of the human brain? I follow the Blue Brain Project for this.

Hence, as you can guess, the ANN algorithm is a simple imitation of how our neurons work: it learns patterns through feed-forward passes and back-propagation of errors. It was originally proposed as the McCulloch-Pitts neuron in the 1940s and revived in the 1980s with the invention of the Hopfield-Tank feedback neural network. The 1960s had an optimistic start for neural networks with the work of Frank Rosenblatt’s perceptron (a pattern classification device). However, by 1969 there was a decline in this research, and the publication of Perceptrons by Marvin Minsky and Seymour Papert caused it to almost die off: they gave mathematical proofs that a single perceptron is insufficient no matter which learning algorithm trains it (XOR is the classic function it cannot compute). It took a while, and many independent works, until the value of neural networks came to light again. One main contribution is the two-volume book titled Parallel Distributed Processing by James L. McClelland, David E. Rumelhart, and their collaborators. In this work, they replaced the unit step function with a smooth sigmoid and added a backward propagation of the error signal through the weights of hidden neurons, called back-propagation (Flake, 1999). (I sketch both the perceptron and back-propagation in code further down.)

Reading through chapter 20 of Parallel Distributed Processing, written by F. Crick and C. Asanuma, I read about the physiology and anatomy of the cerebral cortex. It shows different neural profiles.

[Figure: profiles of different neural cell types in the cerebral cortex (McClelland, 1989)]

It talks about the different layers of the cortex (superficial, upper, middle, and deep), as well as axons, synapses, and neurotransmitters. The more I read, the more I come to appreciate the complexity of our brain, marvel at the simplicity of Artificial Neural Network algorithms by comparison, and can’t help but feel amazed by what the Blue Brain Project is aiming to do.
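
As promised, here is a minimal sketch of a Rosenblatt-style perceptron: a weighted sum pushed through a unit step activation, trained with the classic perceptron learning rule. The function names and toy data are my own illustration, not code from any of the books below. It learns the linearly separable AND function without trouble, but no amount of training lets it learn XOR; that is the kind of limitation Minsky and Papert proved.

```python
def step(z):
    # Unit step activation: fire (1) if the weighted sum is non-negative.
    return 1 if z >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0, 0.0]  # w[0] acts as the bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] + w[1] * x1 + w[2] * x2)
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err
            w[1] += lr * err * x1
            w[2] += lr * err * x2
    return w

def predict(w, samples):
    return [step(w[0] + w[1] * x1 + w[2] * x2) for (x1, x2), _ in samples]

# AND is linearly separable, so training converges:
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(predict(train_perceptron(and_data), and_data))  # [0, 0, 0, 1]

# XOR is not linearly separable: no single line can split its classes,
# so no choice of weights works, whatever the learning algorithm.
xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
print(predict(train_perceptron(xor_data), xor_data))  # never [0, 1, 1, 0]
```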
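And here is the fix that the Parallel Distributed Processing work popularized, sketched the same way: swap the step for a smooth sigmoid so the error has a usable gradient, add a layer of hidden neurons, and propagate the error signal backward through their weights. This is a sketch of the idea, not code from the books; the network size, learning rate, and epoch count are arbitrary toy choices of mine.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
# 2 inputs -> 2 hidden units -> 1 output; index 0 of each weight list is a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

xor_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
lr = 0.5

for _ in range(10000):
    for (x1, x2), target in xor_data:
        # Forward pass: hidden activations, then the output.
        h = [sigmoid(w[0] + w[1] * x1 + w[2] * x2) for w in w_h]
        out = sigmoid(w_o[0] + w_o[1] * h[0] + w_o[2] * h[1])

        # Backward pass. The sigmoid's derivative is out * (1 - out),
        # so the error signal stays smooth and differentiable.
        d_out = (out - target) * out * (1 - out)
        d_h = [d_out * w_o[i + 1] * h[i] * (1 - h[i]) for i in range(2)]

        # Gradient-descent weight updates.
        w_o[0] -= lr * d_out
        for i in range(2):
            w_o[i + 1] -= lr * d_out * h[i]
            w_h[i][0] -= lr * d_h[i]
            w_h[i][1] -= lr * d_h[i] * x1
            w_h[i][2] -= lr * d_h[i] * x2

for (x1, x2), _ in xor_data:
    h = [sigmoid(w[0] + w[1] * x1 + w[2] * x2) for w in w_h]
    print((x1, x2), round(sigmoid(w_o[0] + w_o[1] * h[0] + w_o[2] * h[1]), 2))
```

With most initializations the four outputs settle near 0, 1, 1, 0. The key design point is the smooth sigmoid: unlike the step function, it has a gradient everywhere, which is what lets the error flow backward and make the hidden weights trainable at all.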

Like a house-cat exploring its environment, let’s dive into narrow, unexplored places…

Books:

Flake, G. W. The Computational Beauty of Nature, 1999.

McClelland, J. L., & Rumelhart, D. E. Parallel Distributed Processing, Volume 2: Psychological and Biological Models, 1989.