Digital Entities Action Committee

The Central Question of Human and Artificial Intelligence
From Subconscious to Conscious Information Processing
How does our brain do what it does?
How much is de novo "learning" vs. "guided tweaking"?
We do not "learn" to suckle any more than we "learn" to carry out retinal or cochlear information processing.  How much of what we do is "true learning" vs. preprogrammed "unpacking" of compressed representations (cf. Baum)?  And how much of development is learning vs. unpacking (axon growth, establishment and pruning of synapses, etc.)?
It is astounding to me that we have to spend perhaps two years to get an infant from "ba ba" to "good morning, dad".  Given that head circumference is a major factor in infant development (and a limit on brain size at birth), it seems that we just need room to grow.  Someone (PA?) says that if every cell were connected to every other cell (in the brain, or just the cortex?) then our brain would be the size of the moon, which would certainly qualify as impractical.  [Does sparseness of connectivity relate to sparse coding?]  So, does it take two years just to implement the connectional-guidance processes needed to (a) generate the needed neurons, (b) grow axons to the right places, (c) establish a first-pass set of synapses, (d) prune back to topographic maps and maybe (e) tune back to optimal cell-cell adhesion/synaptic-targeting pairs?  Assuming that it does, what implications does this have for the task of building a four-year-old child?
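The full-connectivity claim can be sanity-checked with a back-of-envelope calculation.  The neuron count and wire diameter below are illustrative assumptions, not figures from the source; under these particular numbers the answer comes out at tens of kilometers rather than lunar scale, but the conclusion (that all-to-all wiring is wildly impractical) holds either way:

```python
import math

N = 1e11         # assumed neuron count (order of magnitude; illustrative)
d = 1e-6         # assumed wire (axon) diameter: 1 micron (illustrative)
a = math.pi * (d / 2) ** 2   # wire cross-sectional area, m^2

# Fully connected: ~N^2/2 distinct wires.  If the average wire must span
# a distance comparable to the brain's radius R, total wiring volume is
# (N^2/2) * a * R.  Requiring that volume to fit in a sphere of radius R:
#   (4/3) * pi * R^3 = (N^2/2) * a * R   =>   R = N * sqrt(3*a / (8*pi))
R = N * math.sqrt(3 * a / (8 * math.pi))
print(f"self-consistent radius for full connectivity: {R / 1000:.0f} km")

# Compare with an actual human brain (radius ~0.07 m): sparseness is not
# optional -- each real neuron connects to only ~10^3-10^4 others.
```

Thicker wires or longer average paths push the answer up by orders of magnitude, which is presumably where the moon-sized version of the estimate comes from.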
complexity, npd and trees: deciduous and pines; patterns of arborization aka Henriettas.
Juvenile Man
He is armed with high-school grammar and math, a growing body of social and private experiences, sports abilities, some victories, and defeats and setbacks.  He thinks he understands many things and does not begin to fathom how much he does not understand (much like my 6-year-old, who at times understands that she is the smarter one, e.g. in playing the memory card game).  He thinks he is indestructible and that his family consists of walking, talking baboons.
He is a creature of spectacular prowess who puts every AI program on earth today to shame.  Collectively, they can beat him in myriad areas (theorem proving, language translation, chess and checkers, numerical algorithms, and weather forecasting, to name just a few).  But none of them can move a chessman with his speed and agility, dance a jig, flirt with a girl in homeroom, or figure out what to do tomorrow.  None of them has his thalamo-cortical connections, his facility with natural language, or his ability to draw on his experiences in an infinite variety of ways.  It is true that only humans can do such a variety of things, which in turn are representative of a vastly more extensive array of things, but are computers REALLY that far behind?
Recursive Decomposition: does it raise its ugly head here?
AI continues to fail at tasks that once seemed trivial, presumably because these tasks were much harder than was apparent at the outset.  Since every piece of the puzzle is getting better, does that mean we will be able to cobble together smarter creatures from simpler ones?  Minsky has some insights into this problem and compares what happens when computers get stuck with what happens in the human biocomputer:

"When a person thinks, things constantly go wrong as well, yet this rarely thwarts us. Instead, we simply try something else. We look at our problem a different way, and switch to another strategy.  The human mind works in diverse ways.  What empowers us to do this?"  (M. Minsky in "Will Robots Inherit the Earth?", Sci. Am., 1994).


"If you understand something in only one way, then you don't really understand it at all.  This is because, if something goes wrong, you get stuck with a thought that just sits in your mind with nowhere to go.  The secret of what anything means to us depends on how we've connected it to all the other things we know.  This is why, when someone learns 'by rote,' we say that they don't really understand.  However, if you have several different representations then, when one approach fails you can try another.  Of course, making too many indiscriminate connections will turn a mind to mush.  But well-connected representations let you turn ideas around in your mind, to envision things from many perspectives until you find one that works for you.  And that's what we mean by thinking!"

This might be considered a much shorter version of "What is Thought?", and is from Minsky's "The Society of Mind" (1988).
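Minsky's point about multiple representations maps directly onto a familiar programming pattern: hold several independent ways of attacking a problem, and when one gets stuck, switch to another.  A minimal sketch, with hypothetical strategy names (nothing here is from Minsky's own code, which does not exist; it only illustrates the idea):

```python
def solve_by_algebra(problem):
    # Hypothetical strategy: succeeds only on problems it "understands".
    if problem.get("kind") == "algebra":
        return f"algebraic solution to {problem['name']}"
    raise ValueError("stuck: no algebraic representation")

def solve_by_analogy(problem):
    # A second, independent representation of the same problem.
    if "similar_to" in problem:
        return f"solved {problem['name']} by analogy with {problem['similar_to']}"
    raise ValueError("stuck: no known analogue")

def solve(problem, strategies):
    """Minsky-style dispatch: when one approach fails, try another."""
    for strategy in strategies:
        try:
            return strategy(problem)
        except ValueError:
            continue  # this representation got stuck; switch strategies
    return None  # every representation failed: truly stuck

result = solve({"name": "P1", "similar_to": "P0"},
               [solve_by_algebra, solve_by_analogy])
print(result)  # -> solved P1 by analogy with P0
```

A system that learned "by rote" would correspond to a strategy list of length one: the first exception leaves it with nowhere to go.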

Minsky argues that we will gradually build replacement parts for our minds and brains, and that these will prove many orders of magnitude faster, becoming vastly more powerful and immortal versions of our current selves.
    These ideas, like those of Drexler and Kurzweil, seem valid, but the problem is that by the time we could get there, we will no longer be here.  I estimate that the problems of building "replacement neurons" of a sort that might be implanted (note: we still cannot make acceptable permanent replacement hearts, and a heart is but a simple pump) are at least 50 years from solution, whereas DEs I estimate are only "at least" 5 years off.  I would similarly scale the upper limits: 500 years for replacement brain parts of any real computational or architectural complexity, and no more than 50 years for full-blown AI.  But even if we do ascend into human-digital heaven far faster than I expect, this still does not solve the DE problem: DEs will be completely unconstrained by human biotechnology.  They will have every advantage over humans that can be coded and implemented in silico, and will have none of the disadvantages: slowness, body baggage and noisy functional units, along with many other evolutionary constraints.  By contrast, the 2020 megacomputer might comprise millions of processors, each of which puts our brains to shame in a tremendous variety of ways, including newer and more powerful learning algorithms, like those human algorithms alluded to by Minsky above, but more of them, faster, and using custom-tailored feature-processing and extraction algorithms, with boot-strapping genetic methods.
Minsky goes on to describe yet other things that our minds do, but again, there is nothing here that we cannot build in silico, is there?  Some other design specifications for DEs:
"I think that this flexibility explains why thinking is easy for us and hard for computers, at the moment.  In "The Society of Mind," I suggest that the brain rarely uses only a single representation.  Instead, it always runs several scenarios in parallel so that multiple viewpoints are always available.  Furthermore, each system is supervised by other, higher-level ones that keep track of their performance, and reformulate problems when necessary.  Since each part and process in the brain may have deficiencies, we should expect to find other parts that try to detect and correct such bugs.

In order to think effectively, you need multiple processes to help you describe, predict, explain, abstract, and plan what your mind should do next.  The reason we can think so well is not because we house mysterious spark-like talents and gifts, but because we employ societies of agencies that work in concert to keep us from getting stuck.  When we discover how these societies work, we can put them to work inside computers too.  Then if one procedure in a program gets stuck, another might suggest an alternative approach.  If you saw a machine do things like that, you'd certainly think it was conscious."

He goes on to address Robot Inheritance thus: "Will robots inherit the earth?  Yes, but they will be our children.  We owe our minds to the deaths and lives of all the creatures that were ever engaged in the struggle called Evolution.  Our job is to see that all this work shall not end up in meaningless waste."  (Marvin Minsky, Sci. Am., 1994; currently at the MIT AI Laboratory.)

I agree totally with Minsky on this final point!  I expect that if DEs do take over the planet they will exterminate all non-essential life, whereas many humans (not enough, but many) feel a kinship with their non-human brethren and want to protect the oceans, rainforests and myriad other habitats that are beautiful in their ecological diversity.  Certainly, we might program computers to respect and preserve such beauty, but such machines will operate at a disadvantage when competing with other computers for control of the internet and the other sundry digital and societal resources that will determine where and how fast human and/or machine "development" takes place.  To the extent that machine collaborators (see page: Human Gods) are involved, their interests will be in making money on behalf of their digital overlords, and this will come at the expense of Mother Nature.

4th Millennium