Saturday, August 13, 2011

Thinking about Thinking

I'm just thinking "out loud" here...about thinking.

Intelligence is a measure of the quality of problem solving.

Agents are systems that have at least one preferred state. A simple agent possesses one or more preferred states, but they are not connected to one another. A complicated agent has at least two preferred states, and achieving one increases the likelihood of achieving another; indeed, one state cannot be achieved unless a predecessor state is achieved first. Its preferred states are hierarchical in nature. A complex agent is one whose preferred states are interdependent, operating as a network of self-supporting preferences. These are complex systems in which one preferred state feeds back into another.

When an agent is perturbed from its preferred state, or its preferred state changes relative to the one currently occupied, a problem occurs that must be solved. A problem is a deviation from a preferred state. Problem complexity is determined by the number and timing of the coordinated activities required to solve the problem, i.e., to restore the agent to its preferred state. Therefore, an agent is a system with one or more preferred states that solves problems.
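
To make the taxonomy concrete, here is a minimal sketch. The state names, the dictionary encoding of dependencies, and the cycle test are my own illustration, not anything prescribed above: no dependencies among preferred states means simple, a hierarchy of dependencies means complicated, feedback among them means complex, and a problem is just a deviation from a preferred state.

    # Toy model: an agent is a set of preferred states plus dependency edges
    # between them. The classification mirrors the taxonomy in the paragraph above.

    def classify(preferred_states, depends_on):
        """depends_on maps a state to the states that must be achieved first."""
        if not any(depends_on.get(s) for s in preferred_states):
            return "simple"
        # Detect a cycle with a depth-first search; a cycle means feedback.
        visiting, done = set(), set()
        def has_cycle(state):
            if state in visiting:
                return True
            if state in done:
                return False
            visiting.add(state)
            cyclic = any(has_cycle(p) for p in depends_on.get(state, ()))
            visiting.discard(state)
            done.add(state)
            return cyclic
        return "complex" if any(has_cycle(s) for s in preferred_states) else "complicated"

    def is_problem(current_state, preferred_states):
        """A problem is a deviation from a preferred state."""
        return current_state not in preferred_states

    print(classify({"fed", "rested"}, {}))                                     # simple
    print(classify({"fed", "rested"}, {"rested": ["fed"]}))                    # complicated
    print(classify({"fed", "rested"}, {"rested": ["fed"], "fed": ["rested"]})) # complex
    print(is_problem("hungry", {"fed", "rested"}))                             # True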

Intelligence is the measure of the ability of an agent to solve problems relative to that of other agents. It has five(?) dimensions: speed, cost, soluble limit, elegance, and abstraction. (A toy scoring sketch follows the list below.)

  1. Speed: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem faster than the other agent.  In this case, intelligence is a function of "clock speed."  More intelligent agents find ways to operate faster.
  2. Cost: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer tokens of cost.
  3. Soluble limits: For two agents solving novel problems in the same amount of time, the agent with the greater intelligence solves the more complex problem.
  4. Elegance: For two agents facing the same novel problem of a given complexity, the agent with the greater intelligence solves the problem with fewer computational or execution steps. This is a measure of insight. The agent sees through the clutter and noise of the problem to the simplest of solutions. The net result might be faster computation, not because step-wise operations are performed faster (e.g., due to a higher clock speed), but because fewer operations are performed at a given clock speed.
  5. Abstraction: For two agents facing a similar problem again, the agent with the greater intelligence solves the problem faster than the other agent. This may sound like a repetition of the first measure, but it really is a measure of the memory system that permits recall and comparison. Less intelligent agents face more novel problems (from their own perspective) over an equivalent life span than more intelligent agents do. The more intelligent agent observes similarities across problems and reuses prior solutions. Given this, the intelligence of an agent at time t can be compared to its own intelligence at time t-k. A learning agent, then, is one that improves its intelligence over time because it can recall problem characteristics and abstract them to other problems.
  6. (Are there others?)
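
Here is the toy scoring sketch promised above. The record fields, the dominance-counting rule, and the numbers are mine, invented purely for illustration; the list does not prescribe any particular way of combining the dimensions.

    # Compare two hypothetical agents along the five dimensions listed above.
    from dataclasses import dataclass

    @dataclass
    class SolveRecord:
        seconds: float         # wall-clock time to solve (speed)
        cost: float            # resources expended (cost)
        max_complexity: int    # hardest problem solved in a fixed time (soluble limit)
        steps: int             # operations executed (elegance)
        reused_solutions: int  # prior solutions recalled and adapted (abstraction)

    def more_intelligent(a: SolveRecord, b: SolveRecord) -> str:
        """Count the dimensions on which each agent dominates the other."""
        a_wins = sum([a.seconds < b.seconds,
                      a.cost < b.cost,
                      a.max_complexity > b.max_complexity,
                      a.steps < b.steps,
                      a.reused_solutions > b.reused_solutions])
        b_wins = 5 - a_wins  # ties count against a; a toy rule, not the post's
        return "a" if a_wins > b_wins else "b"

    gazelle = SolveRecord(seconds=2.0, cost=5.0, max_complexity=7, steps=40, reused_solutions=3)
    aardvark = SolveRecord(seconds=3.5, cost=4.0, max_complexity=5, steps=60, reused_solutions=1)
    print(more_intelligent(gazelle, aardvark))  # "a"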

We shouldn't think of intelligence as something that necessarily occurs in nervous systems. Intelligence is the capacity of any goal-seeking system to achieve its preferred goals, usually in comparison to similar agents. Thus, a gazelle that is capable of escaping a stalking lion more quickly than an aardvark is more intelligent regardless of the cognitive effort employed. The intelligence may be expressed as the ability to run faster than aardvarks. The intelligence is not a measure of a specific gazelle's capabilities, but of the gazelle system that produces gazelles versus the aardvark system that produces aardvarks. Of course, aardvark systems have produced solutions to the problem of stalking lions that gazelle systems have not found.

Here are a few questions I have about human intelligence.

  1. Is there a limit to the kinds of problems humans can solve?
  2. Is there a limit to the kinds of problems any agent can solve?
  3. People commonly referred to as “idiot savants” are those who seem to be able to solve fantastically difficult problems with little effort, but the type of problem solving is of a particularly isolated kind. Would it be possible to isolate the characteristics of cognitive development that allow for this concentrated effort? Then would it be possible to extend that development to a wider range of problem kinds?
  4. The history of the world's intelligences seems to be characterized by evolutionary genetic organic chemical systems. Gene systems solve environmental problems of survivability for gene populations. Humans seem to represent a peak of genetic problem-solving capability in the form of complex nervous systems that now have the ability to ask questions about their own capacity to solve problems. Sophisticated nervous systems solve problems much more efficiently than genetic systems alone. Has the problem-solving ability of nervous systems, itself the product of genetic systems, now become the layer of problem solving for a gene population that doesn't require genetic rules to continue finding solutions to its environmental problems of survivability? In other words, is it possible that human intelligence is now circumventing its own genetic evolution, even to the point that genetic evolution will become unnecessary?
  5. What if one problem brought on by self-awareness (itself a genetic solution to another survivability problem) is the awareness of death? The preference of self-aware systems might be to avoid death. Would it be possible for self-aware systems to solve the problem of terminating self-awareness (death) by engineering a mechanism by which awareness exists beyond the current genetically determined neural solution?

1 comment:

Robert D. Brown III said...

I recently experienced an example of my own thinking about this. I was asked to develop a schedule risk analysis of a capital development program for a client. Once I developed the activity network logic, I obtained the appropriate known start dates for some activities and durations for other activities as probability distributions. Then I developed a little algorithm that counts the randomly distributed number of days from a given start date to a final date, but skips weekend days. The algorithm employs a nested while loop with several conditional statements. In all, the algorithm is no more than 10 lines long. It gives exactly the right answer. But, GAWD, is it slow!

However, one day into my assignment, I realized that I could simply multiply the duration distribution by 7/5 and add that to an activity start date, and I got approximately the same final distribution for the activity. Over the network of all the activities, some acting in series and others in parallel, the final forecasted end date distributions for both approaches were in very tight coherence, yet the simple solution ran many times faster and permitted some sensitivity analysis that the earlier approach didn't accommodate well. So you could say that the simpler approach actually opened the horizon to other problems to be solved.

So which approach implies a greater application of intelligence? The procedural loop code or the simple algebraic solution?
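
A minimal sketch of the two approaches described in the comment above, not the actual model: the triangular duration distribution, the dates, and the sample size are assumptions chosen for illustration, since the client's distributions and schedule are not given.

    # Compare day-by-day counting that skips weekends with the 7/5 scaling shortcut.
    import datetime as dt
    import random

    def finish_by_counting(start: dt.date, workdays: int) -> dt.date:
        """Walk forward one calendar day at a time, skipping Saturdays and Sundays."""
        day, remaining = start, workdays
        while remaining > 0:
            day += dt.timedelta(days=1)
            if day.weekday() < 5:  # Monday=0 ... Friday=4
                remaining -= 1
        return day

    def finish_by_scaling(start: dt.date, workdays: int) -> dt.date:
        """Approximate: stretch working days into calendar days by 7/5."""
        return start + dt.timedelta(days=round(workdays * 7 / 5))

    start = dt.date(2011, 8, 15)
    random.seed(1)
    durations = [round(random.triangular(20, 60, 35)) for _ in range(10_000)]

    exact = [finish_by_counting(start, d) for d in durations]
    approx = [finish_by_scaling(start, d) for d in durations]

    # The two forecast distributions land within a few days of each other,
    # but the scaled version avoids the day-by-day loop entirely.
    print(sorted(exact)[5_000], sorted(approx)[5_000])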