SUNY at Albany. 1973.
Course: Seminar in Computer Science
Mechanical Problem Solving
Stephen Leibowitz

What Can Computers Do?

When digital electronic computers were first developed, they were popularly referred to as “electronic brains.” This is because they performed activities normally associated with human intelligence. In particular, they were used to carry out mathematical and business calculations often of a very complex nature.

Naturally, efforts were made to have these electronic brains do other things. Computer programs were written to perform such activities as translating human languages, writing poetry and playing complicated games. The essential difference between these activities and the ones mentioned earlier is that the new activities defy the formal logical analysis that we can apply to calculation.

This is an important difference. A computer’s central processor is composed of the logical circuits AND, OR and NOT. An algorithm expressed in terms of these logical circuits can be implemented on a computer, provided it does not exceed the capacities of the machine. The problem, then, in programming computers is to reduce the task to a series of logical steps. Activities that resist this reduction are “difficult problems” in computer science, while the others are “easy problems.” The difficult problems are usually referred to as problems in “artificial intelligence.” I will refrain from using this term and explain why later.
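As a modern illustration of this reduction (in present-day code, not part of the original paper), even arithmetic can be expressed as a series of steps using only the three circuits named above. The sketch below builds a one-bit full adder from AND, OR and NOT alone; the function names are mine, chosen for clarity.

```python
# A sketch, not from the paper: reducing a calculation to AND, OR and NOT.

def NOT(a):
    return 1 - a

def AND(a, b):
    return a & b

def OR(a, b):
    return a | b

def XOR(a, b):
    # Exclusive-or, itself built from the three primitives.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s1 = XOR(a, b)
    sum_bit = XOR(s1, carry_in)
    carry_out = OR(AND(a, b), AND(s1, carry_in))
    return sum_bit, carry_out
```

Chaining such adders bit by bit gives the arithmetic that made the early machines useful, which is what makes calculation an “easy problem” in the sense above.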

There are essentially two approaches to problem solving on a computer. One can analyze the problem and try to arrive at a direct deductive solution. If necessary, heuristics can be used. The other approach is to simulate the thought processes a human would use in solving the problem. This is called the cognitive approach.

This approach inevitably brings up the question, “Can a computer think?” An early pioneer in computer science, J. P. Eckert, has sarcastically remarked3:

After seventeen years, I’ve finally been forced to adopt the definition that thinking is what computers cannot do. This definition is very workable, since it changes from year to year as computer progress is made.

The controversy surrounding the question is reminiscent of the controversy over the nature of life. On one side were the mechanists, who viewed life as a process that can be explained by the laws of physics and chemistry alone. The vitalists, however, maintained that there was some vital force, “the spark of life,” that could not be explained by science.

Perhaps our question is not quite as difficult. The controversy centers on whether computers can have emotions, feelings, a sense of morality, etc. There is little disagreement that computers can simulate analytical aspects of thought such as concept formation, especially in view of progress in the area. So it appears that there are no theoretical barriers preventing us from using the cognitive approach for a wide variety of problems.

Now I can explain why I have refrained from using the term “artificial intelligence.” Intelligence1 usually implies the ability to cope with demands created by novel situations and new problems, to apply what is learned from experience, and to use the power of reasoning and inference effectively as a guide to behavior. Most artificial intelligence programs use the first approach to problem solving. Whether computers programmed in this manner are truly intelligent (and hence “brains”) is questionable. They cannot make inductive inferences. This sharply limits their ability to learn and to cope with new problems without reprogramming. For lack of what I consider a better term, I will continue to use “difficult problems” in computer science.

Apart from the challenge involved, games attracted attention for another reason. Unlike many other difficult problems in computer science, games at least have a well-defined set of rules and goals. It was hoped that with the experience gained in programming games, progress could be made in solving other problems.

The rest of this paper will examine two methods of programming computers to play games. The first method emphasizes the direct deductive approach. The second emphasizes the cognitive approach. The games to be examined will all be two-person, non-random, zero-sum games.

Mathematical Solutions

For some simple games there exists a known mathematical solution. By this I mean there is a known optimum mathematical strategy for one or both of the players. As an example, consider the Game of Euclid. The game starts out with any two positive integers. Each player in turn subtracts some positive multiple of the smaller number from the larger number; the result may not be negative. The player who reduces one number to zero is the winner. If you ever have a choice of moves and cannot win immediately, you should pick the one that leaves your opponent with two numbers such that the ratio of the smaller number to the larger number is as large as possible.
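The rules and the strategy just described can be sketched in modern code (this sketch is mine, not part of the original paper; the function names are illustrative).

```python
# A sketch of the Game of Euclid and the strategy described above.

def moves(a, b):
    """All positions reachable from (a, b): subtract any positive
    multiple of the smaller number from the larger, staying >= 0."""
    a, b = max(a, b), min(a, b)
    return [(a - k * b, b) for k in range(1, a // b + 1)]

def choose_move(a, b):
    """Win at once if possible; otherwise leave the opponent the
    position whose smaller/larger ratio is as large as possible."""
    options = moves(a, b)
    for pos in options:
        if min(pos) == 0:
            return pos          # immediate win
    return max(options, key=lambda p: min(p) / max(p))
```

From (25, 7), for instance, the legal moves leave (18, 7), (11, 7) or (4, 7); the strategy picks (11, 7), whose smaller-to-larger ratio 7/11 is the largest of the three.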

More complex games such as chess and checkers have no known mathematical solution. While a solution is theoretically possible, it would probably be too complex to use. However, it is possible to use heuristics.

I have used heuristics to program the computer to play three-dimensional tic-tac-toe. The program is an implementation of the flowchart shown on the following pages (not on the Internet). The only major difference is that the randomness of the strategy has not been built into the program. This makes writing and debugging the program somewhat easier. The “special squares” mentioned are the squares that lie on seven lines, as opposed to the other squares, which lie on only four. Since the object of the game is to be the first to get four squares on a line, a bias in favor of the special squares seems desirable.
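The claim about special squares can be checked mechanically. The sketch below (modern code, not part of the original program) enumerates every winning line of the 4×4×4 board and counts the lines through each square; each square turns out to lie on either seven lines or four, with sixteen special squares in all.

```python
# A sketch verifying the "special squares" of 4x4x4 tic-tac-toe.
from itertools import product
from collections import Counter

def winning_lines(n=4):
    """Every straight line of n cells in an n x n x n board."""
    # One canonical direction per geometric line.
    dirs = [d for d in product((-1, 0, 1), repeat=3) if d > (0, 0, 0)]
    lines = []
    for start in product(range(n), repeat=3):
        for d in dirs:
            line = [tuple(start[i] + k * d[i] for i in range(3))
                    for k in range(n)]
            if all(0 <= c < n for cell in line for c in cell):
                lines.append(line)
    return lines

# Count how many winning lines pass through each square.
counts = Counter(cell for line in winning_lines() for cell in line)
special = [cell for cell, v in counts.items() if v == 7]
```

The enumeration finds 76 winning lines; the sixteen special squares are the eight corners and the eight squares of the interior 2×2×2 cube.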

Learning Nets

Of the various methods of simulating human thought, learning or neural nets come closest to duplicating its biological basis. It will be useful to review a simple model of the brain.

The brain is composed of a vast assemblage of nerve cells called neurons. Each neuron has, on average, several hundred connections to other neurons. A neuron is said to fire when the sum of the electrical impulses impinging on it at any instant exceeds a certain threshold value. When it fires it transmits an electrical impulse along a nerve fiber called the axon. Near its end the axon branches out, making contact with the dendrites of other neurons through junctions called synapses. Each dendrite then transmits the impulse to its neuron. Not all impulses are excitatory. Some are inhibitory and can neutralize the excitatory impulses.
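The firing rule just described can be stated compactly in modern code (a sketch of the model, not of any actual program from the paper; excitatory impulses are represented by positive weights and inhibitory ones by negative weights).

```python
# A sketch of the threshold model of a neuron described above.

def fires(inputs, weights, threshold):
    """Return 1 if the summed weighted impulses exceed the threshold.

    inputs  -- 0/1 impulses arriving at this instant
    weights -- positive for excitatory synapses, negative for inhibitory
    """
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total > threshold else 0
```

With weights (0.5, 0.5, -1.0) and threshold 0.4, the two excitatory impulses alone make the unit fire, but adding the inhibitory impulse neutralizes them and the unit stays silent.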

The brain receives information through impulses from special nerves found throughout the body called receptors. These cells are sensitive to various physical stimuli such as light, heat and pressure. The brain transmits impulses to other nerve cells in the body called effectors, which perform the desired physical action.

Learning takes place in two ways. First, the threshold value for each neuron is not constant; repeated firings lower it. Second, repeated firings reduce the width of a synapse, permitting a larger current to cross. The opposite effects take place for rarely firing neurons.
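These two effects can be sketched as a simple update rule (a modern illustration, not part of the original paper; the class, the learning rate, and the symmetric decay for silent neurons are my assumptions about how the model might be coded).

```python
# A sketch of the two learning effects described above.

class Neuron:
    def __init__(self, weights, threshold):
        self.weights = list(weights)
        self.threshold = threshold

    def step(self, inputs, rate=0.05):
        """Process one instant's impulses; adapt, then report firing."""
        total = sum(i * w for i, w in zip(inputs, self.weights))
        fired = total > self.threshold
        if fired:
            # Repeated firing: lower the threshold and strengthen
            # the synapses that carried an impulse.
            self.threshold -= rate
            self.weights = [w + rate * i
                            for w, i in zip(self.weights, inputs)]
        else:
            # Disuse: the opposite effect (assumed form of the decay).
            self.threshold += rate
        return fired
```

A neuron that fires thus becomes easier to fire next time, while one that rarely fires drifts toward silence.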

Like many models of reality, this model sacrifices accuracy for simplicity. It should be emphasized though, that the human brain developed under various biological constraints. A true “electronic brain” would be subject to a different set of constraints2. We should not be too concerned with duplicating the human brain exactly.

This is fortunate because it would be virtually impossible to duplicate it in all its complexity. The human brain is an extraordinarily complex structure. It is estimated that it contains about thirteen billion neurons with many more interconnections. A researcher has termed it “the cerebral jungle.” The brain’s complexity presents considerable problems, but as was indicated earlier one should not give up too quickly. Most, perhaps virtually all, of the brain is tied up with such nonintellectual tasks as controlling internal body functions. Also, specialized nets that perform a specific task might be quite acceptable. By now the problems should not look quite so forbidding.

I am presently trying to construct a learning net to play two-dimensional tic-tac-toe. The typical general-purpose computer is hardly an ideal machine for this effort. The brain works in a parallel fashion while the computer is essentially a sequential machine. Also, the brain has certain analog aspects that could be simulated more cheaply on a hybrid computer. It would seem that a special-purpose machine would be desirable for more advanced tasks.


  1. The American Heritage Dictionary of the English Language

  2. If this were not the case, it would be difficult to justify attempts to build one. After all, we already have about four billion “thinking machines” on the earth today. And as the humorists are quick to point out, it does not require skilled labor to produce them.

  3. Fink, Donald G. Computers and the Human Mind. 1966. p. 208

© 1973 Stephen Leibowitz