Thursday, August 6, 2015

QUANTIFICATION OF UNCERTAINTY

Or…the philosophical Neuro-babble of scientism!


This whole thing about digital is getting “curiouser and curiouser” by the day. When you throw in the barrage of hyperbole, the whole mix is a labyrinthine network of chaos. Facts and opinions merge into a single syllabic “wow” from the pedantic crowd that consumes latte with every breath.

So this next coming of "sliced bread" is the digital what's-what that will take us into the 22nd Century. Hey, but the 21st Century has only just begun, so can we give this 0-to-100-in-nanoseconds dash to the future a bit of a rest?

Artificial Intelligence (AI) will be the in thing, they say. Who says, you might ask? Those in love with their ability to discern the future, that is! Those who hide in dark closets, coding and decoding the subject of life, though they may not have lived it yet, and never will from within the confines of their dark rooms. "Too abstract for you, this is?" as Yoda would ask.

I will take my cues from functional Magnetic Resonance Imaging and from the perch of the Single Photon Emission Computed Tomography reveal. Why? Read on… it might make sense. In an abstract sort of way, it does to me.


The human mind is a rich, diverse collection of billions of neurons (brain cells) that converse electrically, collude, recruit one another and lay down protein-based memories within. The more the force of thought resident on a particular path, the more delineated the path becomes. Imagine a path less traveled, like the jungle-infested forest facing Prince Phillip before he can kiss Sleeping Beauty. However, if the same path is well traveled, minus the evil witch, it becomes a paved highway over time. This paving is a function of experiential gains. The brain's plasticity, including pruning (or hacking away, if you prefer, as Prince Phillip does to reach his lost love), is a daily function. Oh yes, it doesn't matter how old or young you are; pruning makes the world go around from 0 to 100+ years. Thoughts become actions and, lo and behold, our world changes. But how do we register those thoughts and, from that, establish the epistemic nature of the action? And if we can, can we then objectify the precursor to any action or behavior? Ah, dear readers, there is the slope that provides us with equal measures of frills, thrills and spills.
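To make the "path gets paved with use" idea concrete, here is a toy sketch in Python. It is entirely my own analogy, not a model of real neurons: a connection's weight grows each time it is traveled and slowly decays (is pruned) when it is not.

```python
# Toy "use it and it gets paved" sketch of the path-strengthening idea above.
# The numbers are arbitrary; this is an analogy, not neuroscience.
travel_boost = 0.2     # how much a traversal strengthens the path
decay = 0.05           # how much an unused path fades per step

def update(weight: float, travelled: bool) -> float:
    """One day in the life of a connection: strengthen if used, prune if not."""
    return weight + travel_boost if travelled else max(0.0, weight - decay)

jungle_path = 0.1      # rarely used: the overgrown forest
highway = 0.1          # heavily used: the paved road experience builds

for day in range(30):
    jungle_path = update(jungle_path, travelled=(day % 10 == 0))  # used once in a while
    highway = update(highway, travelled=True)                     # used every day

print(f"After a month: overgrown path = {jungle_path:.2f}, well-travelled path = {highway:.2f}")
```

Run it and the rarely used path withers to nothing while the daily one keeps gaining weight, which is all the fairy-tale metaphor above is trying to say.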


Scientists are tripping over one another in the philosophical realm of neurobiology and neuro-functional anatomy, armed with a multitude of hardware to peer into the moment-by-moment activity of each firing neuron and simulate brain function. Indeed, they claim, we may be able to pocket your mind into an iPod one day, Moore's Law be damned! But there are a few roaches I see in that prospect. For instance, let us take the most admired technique, fMRI (functional Magnetic Resonance Imaging), as a means of deciphering the brain's activities. You must have seen the glossy images of color-splashed brains of individuals responding to some stimulation or behavior or action? I am sure you have. If you haven't, here are a few images to re-polish that paradigm…


What happens with an fMRI anyway? My simplistic viewpoint, and it is simplistic, is that when a stimulus is presented, say an image meant to provoke a reaction from the individual's brain, there is an increase in energy output from select recruited neurons, which identifies the site of activity, e.g. the temporal lobe, or the occipital lobe where vision and memory merge; if an action is desired, then the parietal lobe comes into play, but through it all the cognitive orders of action, through assimilation of the diverse stored data banks, come mostly from the prefrontal cortex (herein called the decision maker). Are you with me thus far? Okay, so the color infuses into the brain images and voila! According to these experts, we have identified the active components of the brain. Repeat that experiment many times, average out the response, create a Bayesian a priori bank of information, and then set a p-value of 0.05 (or a 95% Confidence Interval, CI) as the threshold: if the firing neurons cross that threshold, the computer registers the data with plethoric hues. The smaller the p-value, 0.04, 0.03, 0.02…0.0001, the hotter the coloring, just as weather maps go from light green for drizzle to a magenta-within-red for humongous storms based on radar reflectivity. Is that all good so far? Okay, so now let us look at how that activity is determined within the fMRI.
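Before that, here is a toy illustration, my own sketch and not an actual fMRI analysis pipeline, of the thresholding-and-coloring step just described: voxels whose p-value fails the chosen alpha simply disappear from the map, and the survivors are binned into ever hotter colors as p shrinks, exactly like the weather-map analogy.

```python
# Toy voxel "statistical map" coloring: sub-threshold voxels vanish,
# surviving voxels get hotter colors as the p-value shrinks.
import numpy as np

rng = np.random.default_rng(0)
p_values = rng.uniform(0.0, 1.0, size=(4, 4))   # stand-in p-values for a 4x4 patch of voxels
alpha = 0.05                                    # the arbitrary threshold the text complains about

# color bins, from "drizzle green" to "storm magenta", as in a weather map
bins   = [0.05, 0.01, 0.001, 0.0001, 0.0]
colors = ["green", "yellow", "red", "magenta"]

def color_of(p: float) -> str:
    """Return a display color for a voxel, or '.' if it misses the threshold."""
    if p >= alpha:
        return "."                              # sub-threshold activity is simply not shown
    for upper, lower, name in zip(bins[:-1], bins[1:], colors):
        if lower <= p < upper:
            return name
    return colors[-1]

for row in p_values:
    print([color_of(p) for p in row])
```

The point of the toy is the same one made below: anything beneath the arbitrary alpha, however real the experience behind it, never makes it into the picture.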

fMRI machines use something called BOLD, or Blood Oxygen Level Dependent contrast, a mechanism pioneered by Seiji Ogawa. The idea is that brain activity requires nutrients in the form of glucose, which in turn requires oxygen to create ATP (adenosine triphosphate); liberating a phosphate group from ATP releases the energy for the brain's cellular activity. And that is how fMRI was born. Two inherent conflicts arise when viewed from this simplistic viewpoint:

One, if energy is used immediately for the activity, then there should be an immediate local deficit of oxygenated hemoglobin, that is, a rise in deoxyhemoglobin (the "de" marks hemoglobin that has already surrendered its oxygen to the tissues), and

Two, the BOLD signal actually appears about 5 seconds after the evoked stimulation and response (why the delay, i.e., the initial flat line on the BOLD trace?).
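That roughly 5-second lag is usually modeled with a so-called canonical hemodynamic response function. A minimal sketch, using the commonly cited double-gamma shape (the shape parameters 6 and 16 are the textbook choices, not anything specific to a particular scanner), shows the peak landing about 5 seconds after the stimulus:

```python
# Canonical-style hemodynamic response function (HRF): peak minus late undershoot.
# Illustrates why the BOLD signal lags the stimulus by roughly 5 seconds.
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 30, 0.1)                      # seconds after the stimulus
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6   # double-gamma: peak minus undershoot
hrf /= hrf.max()                               # normalize so the peak equals 1

print(f"BOLD response peaks ~{t[hrf.argmax()]:.1f} s after the stimulus")
# prints roughly 5 s, which is the lag described above
```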

The fMRI machines construct the brain image from 3-D pixels called voxels (think "volumized pixels"), each about 5 cubic millimeters in size. The complete activity of the brain at any instant can be recorded using a 3-D grid of 60 x 60 x 30 voxels, roughly 108,000 in all. These machines register information every second of a 3-minute session, creating close to 20 million data points. Indeed, when we look at a picture, any picture placed in front of us, many thoughts creep into our minds, and through parallel processing of the presented information against that bound within our personal experiential data banks, the individual response is elicited.
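A quick back-of-the-envelope check of those numbers (my own arithmetic, not a scanner specification):

```python
# Voxel-count and data-point arithmetic for the figures quoted above.
voxels = 60 * 60 * 30               # = 108,000 voxels per brain volume
scan_seconds = 3 * 60               # a three-minute session, sampled about once a second
data_points = voxels * scan_seconds # ~19.4 million voxel measurements

print(f"{voxels:,} voxels per volume, ~{data_points/1e6:.1f} million data points in 3 minutes")
```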

In other words, my response to a green grassy knoll might be a desire to hit a golf ball; yours might be to lie down in the sun with a book to read. So the p-value used as a threshold tends to negate the true experiential responses that do not climb above the artificial alpha placed as the arbiter of reality. What, then, does fMRI actually tell us? The simple answer is "some measured brain activity based on delayed oxygen use in different parts of the brain" that we are trying mightily to cubby-hole into "cause and effect." Does that debunk the mounds of fMRI data flowing through the neuro-scientific literature? Not exactly, but it does bring into question the current vogue of misrepresentation and somewhat blind acceptance.



Now let me launch into the SPECT scan, the rage of the season that keeps giving us brain images colored like laundry detergent boxes. What exactly is SPECT? It is imaging of single photon emissions from a radionuclide tracer that accumulates where blood flow, and with it oxygen delivery, increases within the cell. These single photon emissions, when pulled together via a computer program, and again thresholded at an arbitrarily placed p-value, give us beautiful red, green, blue and magenta images that differentiate areas depleted in oxygenation activity from those turgid with a surfeit of the same element. The difference is that fMRI is a computerized tomographic image (slices put together by a computer algorithm), whereas SPECT reconstructs a 3-dimensional image from planar radionuclide projections. (A radionuclide is essentially a material that binds to a specific cellular target, here a marker of oxygen delivery, and emits gamma radiation for the detector to detect and the computer to assimilate into a 3-D image.) The difference is obvious, but the human-endeavored legion of stories as to "cause and effect" multiplies exponentially. Some go as far as delineating sexual, aggressive, criminal and sociopathic behaviors on such images, and the laity buys it "hook, line and sinker" as the next greatest thing since sliced bread.

Now let us take this whole house of cards worth of information and stoke the beast of Artificial Intelligence. All I can say is that it will take a long time to match the equivalence of the human brain. I say that because of the data from Harris Georgiou, a researcher who, using the voxel concept in fMRI, has determined "that a typical voxel corresponds to roughly three million neurons, each with several thousand connections with its neighbors. However, the current state-of-the-art neuromorphic chips contain a million artificial neurons each with only 256 connections." Thus the parallel processing within the brain occurs at a much higher structural and functional level, given that, as previously mentioned, our brains operate about 50 tasks at once. Imagine the division of labor: concept enhancement or reduction, sensing, feeling, importing and exporting information, comprehension, etc. The task of the brain is immense, and its power needs are a mere 20 watts! Now that is some bang for the buck!
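Putting Georgiou's figures side by side with the neuromorphic-chip numbers quoted above makes the gap plain. The 1,000-connections-per-neuron figure below is my own choice of the low end of his "several thousand":

```python
# Rough comparison of biological versus artificial connectivity,
# using the figures quoted in the paragraph above.
voxel_neurons = 3_000_000          # neurons in a typical fMRI voxel
voxel_connections = 1_000          # low end of "several thousand" connections each
chip_neurons = 1_000_000           # artificial neurons on a state-of-the-art neuromorphic chip
chip_connections = 256             # connections per artificial neuron

voxel_synapses = voxel_neurons * voxel_connections   # ~3 billion, for ~5 mm^3 of brain tissue
chip_synapses = chip_neurons * chip_connections      # ~256 million, for a whole chip

print(f"One voxel: ~{voxel_synapses/1e9:.0f} billion connections")
print(f"One chip : ~{chip_synapses/1e6:.0f} million connections")
print(f"Ratio    : roughly {voxel_synapses // chip_synapses}x, for a speck of tissue vs. a whole chip")
```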

This study in Science by Hilbert and Lopez (http://www.sciencemag.org/content/332/6025/60) tells us of our accomplishments and what might remain under the dusty future. It concludes: "We estimated the world's technological capacity to store, communicate, and compute information, tracking 60 analog and digital technologies during the period from 1986 to 2007. In 2007, humankind was able to store 2.9 × 10^20 optimally compressed bytes, communicate almost 2 × 10^21 bytes, and carry out 6.4 × 10^18 instructions per second on general-purpose computers. General-purpose computing capacity grew at an annual rate of 58%. The world's capacity for bidirectional telecommunication grew at 28% per year, closely followed by the increase in globally stored information (23%). Humankind's capacity for unidirectional information diffusion through broadcasting channels has experienced comparatively modest annual growth (6%). Telecommunication has been dominated by digital technologies since 1990 (99.9% in digital format in 2007), and the majority of our technological memory has been in digital format since the early 2000s (94% digital in 2007)."

Now calculate the information storage within the brain: we have about 100 billion neurons, and each neuron makes a minimum of 1,000 to 10,000 connections, which translates to 100 trillion to 1 quadrillion data points, or between 100 and 1,000 terabytes of information. Newer estimates of the brain's storage capacity have pushed that figure to a staggering 2.5 petabytes, or 2,500 terabytes. That is some order of magnitude, one would say! Couple the memory bank with the connectivity (or "Connectome," as the experts call it to look super-intelligent) and you have a ginormous maze of data flow!
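Spelled out, the storage arithmetic in the paragraph above looks like this. The one-byte-per-connection assumption is mine, used only to reproduce the 100 to 1,000 terabyte range quoted:

```python
# Back-of-the-envelope brain storage estimate from the figures above.
neurons = 100e9                                       # ~100 billion neurons
low, high = 1_000, 10_000                             # connections per neuron
low_conn, high_conn = neurons * low, neurons * high   # 1e14 to 1e15 connections

bytes_per_connection = 1                              # assumption: one byte stored per connection
low_tb = low_conn * bytes_per_connection / 1e12
high_tb = high_conn * bytes_per_connection / 1e12

print(f"{low_conn:.0e} to {high_conn:.0e} connections "
      f"-> roughly {low_tb:.0f} to {high_tb:,.0f} TB")
# 100 trillion to 1 quadrillion connections -> ~100 to 1,000 TB,
# versus the 2.5 PB (2,500 TB) figure newer estimates cite.
```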


After all that, here is the crux of the neuro-babble matter. AI is a long way away from mimicking the human brain. True, IBM's Deep Blue can beat Kasparov at chess and Watson can beat the Jeopardy champions, but can either tell, the way a mother can, whether an infant's cry means a diaper change, hunger, or the need for a cuddle? Didn't think so! So those stories of computers becoming doctors are highly exaggerated, in my opinion. Maybe someday we humans will have computer chips installed to enhance our memories, cognitive skills, etc., but even then the primary base of operations will remain with the human brain: add to, not in lieu of.



So, in the end, we can make lots of assumptions about what the brain is doing, but thus far we really don't have a clue. We appear smart with all the purported calculations and probability assumptions, and yet the main ingredient of "humanness" remains missing from that large metal box filled with CPUs.

The quantification of uncertainty is a philosophical conundrum as much as it is a mathematical maze. Yet, through it all, to keep the scion of truth from imploding, ambiguity has to be given its due share in the process of scientific discovery. One without the other implies abject ignorance.

Medical research, once considered the paragon of statistical methodology, is creaking under the weight of this mathematical jargon. The literature is becoming burdened with "ambiguity-proofed" positive results, achieved through statistical fiat, that imply little real progress in understanding, thus ensuring that the "native hue of resolution is sicklied o'er with the pale cast of thought…"

On a more human level… “Have a great day!” (Let Watson figure that out!)  
