Analysis of Research in Instructional Technology
Abstract: The division between technical and academic training that haunts our educational systems could find a unifying paradigm in Artificial Intelligence design. AI knowledge systems offer pre-tested templates instructional systems designers can use to structure knowledge systems in educational environments. The designs used by AI systems can be translated, transferred and modified to suit educational system design requirements. This paper explores the transferability of design concepts between the two system domains with particular emphasis on incorporating heuristics into educational systems designs.
Key words: cybernetics, heuristics, information flow, instructional systems design, creativity
"Cybernetics is the 'study [of] the control processes in electronic, mechanical and biological systems ... [the examination of] the flow of information within a system and the schemas people use to control the flow of information within the system'" (Arnold 1986:13)
"Mind is primarily a verb." (Dewey 1934:262)
In the spirit of the Artificial Intelligence designers who looked to philosophy for insights to aid them in structuring their systems, I have previously suggested that instructional designers look to the advances AI research has made in codifying knowledge to do work, in real time in a real world, and apply those advances to educational systems design.
The division between technical and academic training that haunts our educational systems could find a unifying paradigm in Artificial Intelligence design. Why not design all education to be interactive and applied? Knowledge-how could be integrated imaginatively with knowledge-that in order to create dynamic and healthy systems.
AI knowledge systems offer pre-tested templates we can use to structure knowledge systems in educational environments. AI system designers design machines to work with people. Instructional designers today need to do the obverse: design human systems that will work effectively with machines.
A great deal of human effort and ingenuity has served to transform the work required to maintain our existence on earth from tedious, mind-numbing and soul-deadening to meaningful and rewarding.
Slavery was a system designed to free a portion of a population from the onerous aspects of work. The system of slavery was, as we are now well aware, found wanting both for economic (efficiency) and humanitarian (decency) reasons. The passing away of the system of slavery was partly the cause, partly a symptom of the mechanization of work that came about with the dawn of the industrial age. The mechanization of work, though it did bring about dehumanizing aspects to work, was generally an attempt to alleviate human suffering and the economic loss which accrued from the hazardous aspects of work. When we are able to understand the principles underlying any aspect of work that we perceive to be hazardous (to economic profit or to human existence), we mechanize.
It is erroneous to believe that one day we will do away with work. Work and love are fundamental to the perpetuation of human life. However, it is an innate human quality to invent tools to make our work easier. When we change our tools, the environments in which we live and work also change. In fact, we label whole huge chunks of historical process by the sorts of tools we use: the iron age, the bronze age, the industrial age, the technological age. Tools and work will always be central to the meaning and purpose of human cultures.
The purpose of this paper, A CYBERNETIC APPROACH TO INSTRUCTIONAL SYSTEMS DESIGN: HEURISTICS, THE POWER OF PURPOSEFUL SEARCHING, is to establish an ideological basis for the creation and testing of instructional designs which are HUMAN-CENTERED, FLEXIBLE and THOROUGH.
I define HUMAN-CENTERED as based on and manifesting humanitarian principles and values. I cite Socrates, John Dewey, Vygotsky and Hannah Arendt as quintessential examples of humanitarian educational philosopher/theorists.
By FLEXIBLE I mean responsive in real time. I cite multi-media, networking, interactive television, the world wide web, audio and video conferencing as tools capable of responsiveness to the ever changing needs of human participants (teachers and students).
By THOROUGH I mean the AI systems' design matrix for intelligent systems (system architectures, knowledge bases, knowledge sources, heterarchical heuristics, neural nets, frames, causal reasoning and hierarchically organized blackboard control systems) applied to education and training systems.
It is my opinion that the designs used by AI systems can be translated, transferred and modified to suit educational system design requirements. AI designs organize knowledge into functional components which do work required by human systems. Educational designs organize knowledge and human systems to do work. Both AI systems and educational systems are working with similar elements. In this paper I will explore the transferability of design concepts between the two domains.
Thought is notoriously difficult to pin down. Philosophers and theoreticians have attempted to define and categorize thought for thousands of years without coming up with a quantifiable, reliable schema. However, when AI researchers decided to apply their knowledge systems to do work in real time with real world constraints, a great deal was learned about the structure of intelligent thought.
As Miller (1995:10) notes in his ground-breaking work, Living Systems, "conceptual and abstracted spaces do not have the same characteristics and are not subject to the same constraints as physical space ... but physical space is the only common space in which all concrete systems exist." And physical space is the only space in which ideas can be tested and proved.
There is no known beginning to time, space or consciousness. From that awareness we are able to conjecture both forward and backward in time. But, as Hannah Arendt so eloquently explains, our actual existence, and therefore any manifestation of our thoughts, is limited to the present moment. Action exists in the now. And, unlike the past, which is certain, has indeed happened, and is in fact defined by its having existed, and the future, which is only ever predictions, now is an uncertain and unpredictable place. So, although thought has the blessed ability to move fluidly through space and time, our ability to do work is always and only possible in the present tense.
A few qualities of thought seem abundantly clear and incontrovertible:
Thought does work; it is purposeful and goal-driven.
Thought originates in dialogue and develops through discourse and relationships.
Thought is complex.
Thought works in a synergistic relationship to its environment.
I accept Schank's premises (1986:17) that 'The real intent of Artificial Intelligence is... to find out what intelligence is all about', that AI 'is an attempt to come to understand one of the most complex problems of mind...to define the apparatus that underlies our ability to think' (Ibid: vi) and, 'My premise is that the simplest mental task and the most complex of mental tasks are all interrelated.' (Ibid)
I agree with Murray (1989:2) when he states that, "the theoretical developments necessary to understand intelligent systems are themselves leading to major intellectual advances. These developments can be expected to have implications in other areas of research as can new techniques of computer modeling. The ramifications of work in the artificial intelligence (AI) field [has] spread beyond the disciplines that are concerned with the technical aspects of intelligent systems."
"A thighbone of an antelope, carbon dated to 30,000 BC., contains tallies... This thighbone suggests that people probably always had a need to gather data and use the information it could generate." (Arnold 1986:7)
Data, as the Arnold and Bowie quote above so evocatively suggests, is 'gathered' from the world. The Oxford English Dictionary defines data as 'a thing given or granted; something known or assumed as fact.' In my schooldays our history teacher told us that 'a fact is where we choose to stop looking.' The world from which we gather data is infinite and infinitely changeable. We can always dig deeper, look longer, find new facts. And, since the world is always changing, there are always new facts to be found. But, in order to do anything with the data we gather, we must choose to stop looking and get to work applying our gathered data to the problems and challenges we face living in the world.
Our beliefs about the world color our handling of data. Until rather late in the 20th century it was a general assumption that the existential and the spiritual world were contained within hierarchical progressions: from the lowliest life form up a ladder not much different from the medieval Great Chain of Being, up through the higher life forms, to humans and then, eventually, to God. This progression from the lowest to the highest was a fundament in both Western and Eastern religion and philosophy. In Western culture, this ladder-like classification was believed to exist in a space/time which operated linearly and eternally. This 'traditional' world view influenced and was reflected in the structures of all our social systems and, specifically, our educational systems. We need also to be aware that this 'traditional' world view or zeitgeist has influenced and is reflected in the structures with which we think: how we sort, classify and do work with data. As Nakashima in his AI research paper (1997:35) so cogently states, 'Traditional logic uses static relations between information.'
World views change. Today many of us agree with Dewey (1934:237) when he states, "But all rankings of higher and lower are, ultimately, out of place and stupid."
My position is not that traditional logic is 'stupid'. I propose that educationalists, instructional designers and cognitive philosophers consider replacing the traditional view of hierarchies as timeless, linear, eternal, unchanging, rigid, static, and God-blessed with a post-modern AI view.
I would characterize the post-modern AI view of hierarchies as temporary, multiplicitous, coordinated, purpose-driven, and (if we accept that spirit acts only out of necessity) spiritually blessed. In post-modern thought we no longer believe it necessary to find 'a final solution.' In fact, even the intention to search for a final solution may itself be a dangerous, mind-numbing, soul-deadening choice. In contrast I would cite the United States' famous melting-pot social theory: the intention to work with people from many different cultures, with many different beliefs, does not necessitate, but rather discourages, all ultimate rankings of higher and lower. Maintaining, encouraging and perhaps even systematizing this aspect of the democratic ideal could be seen as essential to the safeguarding (and flourishing, dare we hope?) of what Tarrant (1989) has described as the ethics at the heart of democracy.
The infinitude of data requires it to be classified in order for it to be used. The nature of co-operation and work demands an ordering of events and roles within a system for the system to be operational. An ordering of events is a temporary (and contemporaneous) hierarchy of decision making. These temporary hierarchies are essential to any functioning system. When hierarchies built for the purpose of doing work rigidify into the stale repetition of old rules, work and people suffer as a result.
Traditional education corresponded to the traditional world view. We were educated to work with data. We memorized and manipulated information and facts. Intelligence in my schooldays was defined as the ability to recall facts. Problem solving was defined as the production of 'right' answers. Right answers were incontrovertible: answers could be verified by consulting relevant, reliable texts.
We continue to value the memorization of facts. Standardized testing evaluates the ability to memorize and recall data. The most common form of student evaluation is a test for 'right' answers. But many educational analysts (Apple 1979, 1982; Arendt 1978; Ball 1977; Degenhart 1982; Dewey 1916, 1938; Gardner 1983; Hodgkin 1985; Holt 1977; Koestler 1964; Schank 1986; Vlastos 1991; Vygotsky 1962; Weinert 1987) believe that there should be more to education than the memorization and appropriate manipulation of facts for the purpose of producing immediately verifiable correct answers.
"Knowledge and [data] representation [e.g., how that data is organized computationally] are distinct entities that play central but distinguishable roles in knowledge-based systems. Knowledge is a description of the world. It determines a system's competence in solving problems: a system's depth and breadth of problem solving power is determined by what it knows. Representation is the way knowledge is encoded. It defines the performance of a system in solving problems: speed and efficiency of problem solving are determined to a significant degree by the choice of representation." (Simmons & Davis 1993:27)
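The Simmons & Davis distinction can be illustrated with a toy sketch of my own (not an example from the paper): the same knowledge held in two different representations yields identical competence but different performance.

```python
# A toy illustration (my example, not from Simmons & Davis) of the
# knowledge/representation distinction.
# The knowledge: which capital belongs to which country.
facts = [("France", "Paris"), ("Japan", "Tokyo"), ("Kenya", "Nairobi")]

# Representation 1: an association list; lookup scans every pair.
def capital_list(country):
    for c, cap in facts:
        if c == country:
            return cap
    return None

# Representation 2: a hash table; lookup is a single expected probe.
capital_dict = dict(facts)

# Competence is identical: both "know" the same facts ...
assert capital_list("Japan") == capital_dict.get("Japan") == "Tokyo"
# ... but performance differs: the list scans O(n) pairs per query,
# while the dictionary answers in O(1) expected time.
```

The knowledge (the facts) never changes between the two versions; only the encoding, and with it the speed of problem solving, does.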
All systems require information in order to function. Intelligence is the quality, in an individual human system, which 'controls' the information flow (available to consciousness) within that human system and between that individual human system and other systems.
There can be no doubt that the ability to handle data is a fundamental aspect of intelligence (Bjork 1994; Gagne 1977; Goodall 1985; Parkinson 1965). Artificial intelligence systems, like all systems, are data dependent. Methods of handling, storing, retrieving, collecting, updating and manipulating data are critical to effective AI systems' design (David 1993; Jordanides 1991). As Wick (1993:614) describes, "the problem of classification as well as methods for solving classification problems have become well-understood and form the basis of most practical applications of expert systems."
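The kind of classification Wick describes can be sketched minimally (an assumed toy example, not drawn from Wick 1993): a small rule base maps observed features to a category, which is the skeleton of many practical expert systems.

```python
# A minimal sketch (assumed example, not from Wick 1993) of
# classification as the basis of a simple expert system: ordered
# rules map observed features to a conclusion.
RULES = [
    # (condition over observed features, conclusion)
    (lambda f: f["has_fever"] and f["has_rash"], "see a doctor"),
    (lambda f: f["has_fever"],                   "rest and fluids"),
    (lambda f: True,                             "no action"),       # default
]

def classify(features):
    """Return the conclusion of the first rule whose condition holds."""
    for condition, conclusion in RULES:
        if condition(features):
            return conclusion

print(classify({"has_fever": True, "has_rash": False}))  # rest and fluids
```

The rule names and domain here are invented for illustration; the point is only that well-understood classification methods reduce data handling to an ordered search through explicit conditions.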
"In ancient times and until well after the Renaissance there was a shortage of books everywhere. In many places today there still is a shortage. But acute forms of information input overload have become increasingly common in Western civilization." (Miller 1995:147)
Both living and non-living systems depend upon information flowing through them (Miller 1995). The rate of information flow and the type of information required by a particular system vary depending upon the work the system is required to perform (Hayes-Roth 1993; Boden 1977; Goodall 1985; Chandrasekaran 1993; Bylander 1993; Lehr 1995).
Today, thanks in large part to advances in computer and information sciences, we can envision instantaneous accessibility of data networks providing theoretically limitless flows of information, transmitted over telephone lines. But this wondrous availability of limitless information threatens the static, traditional educational system design. Toffler's (1970) predicted information explosion has materialized, and our educational system needs new systems approaches in order to avoid being destroyed by information overload or by the pathological inability (Miller 1995:147-151) to interface with its supra-system: the social, political and economic environment.
Data is, by definition, how we connect to the world around us. We pluck data as the fruit of the tree of life. We cannot understand the whole of life. We cannot absorb the entire tree. "How do you eat an elephant?" "One bite at a time."
Even if we could somehow ingest an entire tree it would not nourish us. We pluck and eat the fruit of the tree. We are given senses with which we filter, sort, classify and absorb aspects of the world. We analyze and create and build with what we learn from the data that we collect. How we classify and codify this knowledge affects our ability to be able to use it for the work we are required to perform in order to survive and thrive in our environment.
When we ask students to describe, delineate, find, count, report, identify, measure, quantify, define, sort or classify, we are asking them to manipulate data.
"An algorithm is a memorized procedure which people can use to solve a problem ... A computation algorithm let anyone, not just a mathematician, perform operations on assorted numbers ... Calculation algorithms ... made it easy to organize numbers (data), do calculations quickly (create information), and use the information ... to solve problems ... Another plus was that both data (numbers) and the information (calculated answers) could be stored in a ledger ..." (Arnold 1986:10)
According to the Oxford English Dictionary, 'algorithm' is a bastardized form of 'Kuwarizm', the 'surname of the Arab mathematician Abu Jafar Muhammad ibn Musa, through a European translation of whose work on algebra the Arabic numerals became generally known. The Arabic, or decimal, system of numeration: hence: arithmetic.'
The OED defines 'arithmetic' as 'ars metrica ... art of counting ... The science of numbers: the art of computation by figures.' In other words, arithmetic is the manipulation of data through the symbolic abstraction called numbers.
Algorithms are formulas, abstract conceptualizations of data manipulations. Algorithms are a step beyond classification. Algorithms allow us to do work with data, to transform data, to examine relationships between data. Algorithms can work with changes in data over space and time; one way to think about algorithms is as data in motion and in relation.
If we think of data as the elements we take from what the world has to offer, we can think of algorithms as the tools with which we work with those elements. If data is the fruit of the tree of life, algorithms are the recipes we use to juice, puree, make jam, freeze-dry, make pies with that fruit. Data is one way to abstract 'beingness.' Algorithms are one way we abstract 'doing.'
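The idea of an algorithm as "data in motion and in relation" can be sketched concretely (a toy example of my own, not from the paper): one fixed procedure that transforms any stream of gathered data into usable information.

```python
# A minimal sketch (my illustration, not the paper's) of an algorithm
# as "data in motion": a fixed recipe that turns a stream of gathered
# measurements into information about their relation over time.
def running_mean(data):
    """Yield the mean of the data seen so far, one step at a time."""
    total = 0.0
    for count, x in enumerate(data, start=1):
        total += x
        yield total / count

# Gathered data (tallies) in, information (averages) out.
print(list(running_mean([2, 4, 6])))   # [2.0, 3.0, 4.0]
```

The procedure itself never changes; like a recipe, it applies equally to whatever "fruit", that is, whatever data, we feed it.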
"The sum of a system's units is larger than the sum of that measure of its units... Because of this, the more complex systems at higher levels manifest characteristics, more than the sum of the characteristics of the units, not observed at lower levels. These characteristics have been called emergents." (Miller 1995:28-9)
The world is changed as we work with it. Doing work in the world creates byproducts and artifacts. Food, buildings, clothes, books, musical instruments, music and thought itself are by-products of our work with data. But pollutants, weapons, deserts and hate are also by-products of our work. We live with the consequences of our actions. Once the world is changed, new data is created and therefore new algorithms are needed.
"And what discouragement does the perspective of a long and arid computation cast into the mind of a man of genius, who demands time exclusively for meditation, and beholds it snatched from him by the material routine of operations! Yet it is by the laborious route of analysis that he must reach the truth; but he cannot pursue this unless guided by numbers; for without numbers it is not given to us to raise the veil which envelopes the mysteries of nature." (Menabrea 1968:25)
Gathering and manipulating data can be laborious and time consuming. One powerful incentive for inventing a calculating machine (a computation machine, a computer) was to speed up the calculations which could take more than one human lifetime to perform. We have been inspired to mechanize aspects of cognition to relieve ourselves of the burden of 'long and arid computation.'
"Idealizations are used so that one does not have to take into account, at least initially, of all the myriad influences that might mask the essential properties being investigated. Once some of these properties have been described, the idealizing constraints on the system can gradually be relaxed to allow one to study the problem in a more general setting, in order to perfect the model." (French 1995:30)
Algorithms abstract, simplify and idealize transformations. As Gelenter (1968:59) states, "the idealization has a significance of its own," but those transformations can only be verified in the realm of the mundane, in real time and real space.
"There is a further analogy between patterns and programs that is relevant here. If one asks why pattern designers write 'C6F' instead of giving the full cabling instructions every time cabling is required, the obvious answer is - to save space, so that the printed pattern will fit into a handbag and not need a briefcase. A less obvious, and more interesting, answer is that the designer (like the knitter) actually thinks in terms of these abbreviations as she describes the 'details' of what she is doing, and could not proceed intelligently if she were not to do so. Certainly, she can laboriously translate into the language of 'wool forward' or 'wool backward' if she has to (perhaps to undo a mistake she has made, or to explain her skill to a child). But she cannot do this for large sections of the garment as a whole; if she tries to do so, she will be hopelessly lost." (Boden 1977:11)
We use algorithms to do the work on paper before we try to build the system itself. But it is crucial to remember that abstractions, formulas, models, can never be exactly duplicated in real time and real space. No two events or existing entities are ever exactly the same.
"Epistemological problems arise because we are dealing with models of a real and open-ended world. Every model is an approximation, not only because it is imprecise but, more fundamentally, because it introduces assumptions about the world." (Van de Velde 1993:217)
When we ask students to compute, transform, relate, compare and contrast, graph, analyze, model, represent, vary their examples we are asking them to use algorithms.
Rules apply to work we do in the world but the work we actually do, as it occurs in real space and real time will never exactly match the rules. Proofs, rules, and algorithms are guidelines for creating generalizability. We rely on guidelines and yet in order for guidelines to be truly useful, functional, we must modify them as we work and live in the present tense.
"A method is an algorithm. It contains a series of activities and a control flow defined over the activities. The activities consult or impact models. Each model and each interface is playing a particular role in the method. Some of the activities are themselves tasks." (Steels 1993:278)
Once methods are applied, the world is changed by that work (applied methods = work). The new world generated by our work in turn generates new data which must be newly gathered, classified, tested, sorted and used. New data may change the reliability of an algorithm, or even of an entire system. Systems are dynamic and are in a dynamic relationship both to their internal mechanisms and to their external environments and supra-systems.
"[If] the cortex of the infant is [conceived of as] an unorganized machine, which can be organized by suitable interfering training. The organization [of the infant's cortex] might result in the modification into a universal machine or something like it. This would mean that the adult will obey orders given in an appropriate language, even if they were very complicated; he would have no common sense, and would obey the most ridiculous orders unflinchingly. When all his orders had been fulfilled he would sink into a comatose state or perhaps obey some standing order, such as eating. Creatures not unlike this can really be found, but most people behave quite differently under many circumstances." (Turing 1968:43)
We can train students to create new algorithms. We do not have to present our rules as orders which must be memorized, as 'final solutions' to unvarying problems. Students often have an innate understanding of the world of constant change. Their bodies are changing every day. Algorithms could be taught as a means of working with change, as mechanisms which facilitate communication and cooperation during the treacherous processes of work and change, the creative process of living effectively in an ever-changing world.
"The explanation of the precise meaning of the term heuristic method is an important part of this paper. For the moment, however, we shall consider that a heuristic method (or a heuristic, to use the noun form) is a procedure that may lead us by a short cut to the goal we seek or it may lead us down a blind alley. It is impossible to predict the end result until the heuristic has been applied and the results checked by some formal process of reasoning. If a method does not have the characteristic that it may lead us astray, we would not call it a heuristic, but rather an algorithm. The reason for using heuristics instead of algorithms is that they may lead us more quickly to our goal and they allow us to venture by machine into areas where there are no algorithms ... people seem to use heuristic reasoning in nearly every intelligent act" (Gelenter 1968:57)
In a domain in which our knowledge is incomplete, we cannot use algorithms alone to do the work demanded. When we have reached the limits of what we can represent with algorithms but still want to do work, we use heuristics. When we have reached the limits of certainty we must work with uncertainty.
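The contrast between an algorithm that guarantees its answer and a heuristic that may lead us astray can be sketched concretely (a toy example of my own, not from the paper): paying an amount with the fewest coins from an unusual coin set.

```python
# A sketch (assumed example, not from the paper) of the
# algorithm/heuristic distinction.  Task: pay an amount with the
# fewest coins, given denominations 4, 3 and 1.
COINS = [4, 3, 1]

def greedy(amount):
    """Heuristic: always take the largest coin that still fits.
    Fast and usually good, but it may lead us down a blind alley."""
    used = []
    for coin in COINS:
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

def exhaustive(amount):
    """Algorithm: dynamic programming, guaranteed to find an optimum."""
    best = {0: []}
    for a in range(1, amount + 1):
        best[a] = min(
            ([c] + best[a - c] for c in COINS if c <= a),
            key=len,
        )
    return best[amount]

print(greedy(6))      # [4, 1, 1] : three coins, the heuristic was led astray
print(exhaustive(6))  # [3, 3]    : two coins, the verified optimum
```

As Gelenter notes in the quote above, the heuristic's result must still be checked against some formal process; here the exhaustive algorithm plays exactly that checking role.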
"There is a difference between finding a proof and checking it. To check a proof one merely follows some simple rules that are set down very precisely. To discover a proof, on the other hand, requires ingenuity and imagination. One must use good intuitive judgment in selecting which of many possible alternatives is a step in the right direction." (Gelenter 1968:59)
Heuristic reasoning has always been the cognitive method used when we are searching for new methods to do work. In order to progress from using tallies (count, measure and compare) to using algorithms (reason abstractly, plan ahead, build and follow mathematical arguments) a great deal of trial and error, experimentation, hoping, dreaming, dialogue and searching had to have occurred. Many people failed many times before we were able to invent proofs which could be verified in real space and stand the test of time. In other words, algorithms themselves were (and continue to be) arrived at through heuristic reasoning.
Being able to take risks in new situations, being able to respond creatively to challenges, coming up with novel solutions to problems, these are qualities clearly indicative of intelligence:
"We tend to say that a person is intelligent to the extent that he is insightful, creative, and in general able to relate apparently unrelated pieces of information to come up with a new way of looking at things. We tend to claim that a person is unintelligent to the extent that his behavior is thoroughly predictable with reference to what we know that he knows. Thus, when a person does things the way he was told to do them, never questioning and thus never creating new methods, we tend to see him as unintelligent." (Schank 1986:17)
Intelligent thought might be the tool we require to create and maintain progress. The modern theory of progress supposed that human beings progress in a direction, for a God-given purpose (Berman 1982). Post-modern theory adds that the directions and purposes of human progress derive from, and then in turn affect, human value systems (Bauman 1993, 1995). Whether covertly employed or clearly stated, it is what we value that determines what we work to accomplish (Tarrant 1989; Arendt 1954, 1963).
In the past we have valued docile, rule-abiding students and employees. But now that business is crying out for workers who are able to problem-solve and create using high-level thinking skills and technology, the traditional conflict between independent, free-thinking citizens of a democratic society and obedient, self-abnegating flotsam and jetsam useful to the factory owners is 'melting into air' (Berman 1982).
"You may have understood what you needed to, but as long as all processing conformed to the expectations that were available, and the standard methods were used, little will be learned from experience." (Schank 1986:24)
As long as the purpose of our national, public educational system was to train young people to take their appropriate places in a society sure of its values, there was no need for heuristic reasoning in classrooms. For most of the 19th century we were generally content to follow extant rules codified by others. But, today, the United States' failure to maintain economic hegemony and educational superiority has provided educationalists with the opportunity to re-examine the values underpinning education.
One of the most useful contributions AI research has made to our understanding of cognition is its formalization of heuristic reasoning. Heuristic reasoning is categorically different from logical reasoning. Heuristic reasoning, or heuristics, is the ability to use algorithms, data, experience, analogies and any other knowledge source or system in order to solve a problem for which there is no known algorithm.
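One common AI formalization of purposeful searching is best-first search, in which an estimate ranks the alternatives worth exploring next. The sketch below is my own toy illustration, not the paper's method; the map, edge list and distance estimates are invented.

```python
# A sketch (my illustration, not the paper's) of formalized heuristic
# reasoning: greedy best-first search explores whichever partial
# solution a heuristic estimate rates most promising.
import heapq

# An invented toy map: edges between towns, plus a rough guess of the
# remaining distance to the goal "D" (the heuristic; it may mislead).
EDGES = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
ESTIMATE = {"A": 3, "B": 2, "C": 1, "D": 0}

def best_first(start, goal):
    """Purposeful, goal-driven search with no optimality guarantee."""
    frontier = [(ESTIMATE[start], [start])]
    seen = set()
    while frontier:
        _, path = heapq.heappop(frontier)   # most promising path first
        node = path[-1]
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt in EDGES[node]:
            heapq.heappush(frontier, (ESTIMATE[nxt], path + [nxt]))
    return None

print(best_first("A", "D"))   # ['A', 'C', 'D']
```

The search is "not random any more than searching in any context can be considered a random act": the estimate gives the wandering a direction, even though nothing guarantees it points the right way.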
"Between the poles of aimlessness and mechanical efficiency, there lie those courses of action in which through successive deeds there runs a sense of growing meaning conserved and accumulating toward an end that is felt as accomplishment of a process." (Dewey 1934:39)
"Heuristics involves the ability to recognize data which can be utilized efficiently and effectively to solve problems. Heuristic reasoning does not require of itself to produce a 'right' answer. Heuristic reasoning is searching for patterns previously undefined. Heuristic reasoning is not random any more than searching in any context can be considered a random act. Searching is always for a purpose or towards a goal" (Turing 1968:50).
"Different ideas have their different 'feels', their immediate qualitative aspects, just as much as anything else. One who is thinking his way through a complicated problem finds direction on his way by means of the property of ideas. Their qualities stop him when he enters the wrong path and send him ahead when he hits the right one. They are signs of an intellectual 'stop and go.' If a thinker had to work out the meaning of each idea discursively, he would be lost in a labyrinth that had no end and no center. Whenever an idea loses its immediate felt quality, it ceases to be an idea and becomes like an algebraic symbol, a mere stimulus to execute an operation without the need of thinking. For this reason certain trains of ideas leading to their appropriate consummation (or conclusion) are beautiful or elegant. They have esthetic character." (Dewey 1934:120)
The challenge facing educational systems designers today is to create systems which will not only cope effectively with change but thrive and even institute change. Institutionalizing change may seem like a paradoxical impossibility but in fact AI researchers have provided us with excellent models of 'expert systems' which are able to cope with constant change. Heuristics is the title most often given to the method of formally incorporating creative, thinking, intelligent, purposeful change into a functioning AI system. Bringing heuristic methods into the curriculum might change both our classroom procedures and our testing principles.
"Decisions made on part of the evaluation effort are strongly influenced by the value system of those making the decisions. Values should thus be clearly identified in order to assist in the decision making as well as to determine the kinds of information which needs to be collected. The evaluation process may serve as a means of clarifying the value structure of the program of those operating it." (Roth 1981:3)
Change, in this context, does not have to be conceived of as revolutionary (Roth 1981:15). It is perhaps more honest as well as far more comforting and appealing to view constant change as evolutionary, as a series of infinitely small choices and changes which accumulate over time and ensure the health of the system as a whole.
Vygotsky (1962, 1993) maintained that education, like any aspect of social life, is inextricably bound up in the fabric of its larger human supra-systems. For Vygotsky it was a given that as individuals we exist in the world, that we develop within and for our social circumstances. To carry Vygotsky's ideas further, to combine them with Miller's (1995) concepts of mutually interdependent, interconnected living systems, we can posit that social systems (educational systems) exist in the world and develop within and for social purposes. Instructional designers will be asked to make the relationship between educational systems and the global information environment explicit and enable that relationship to function dynamically over time. Heuristics could be the key to the structuring of that synergistic relationship.
The hierarchical nature of the educational system and the heterarchical nature of heuristic reasoning may at first seem to be inimical to coordinated activities between the two domains. But an examination of AI architectures reveals them to be hierarchically organized. Heuristics is only one aspect of a unified process. Collecting and sorting data, learning rules, working with algorithms, classifying goals, breaking tasks into subtasks, and sorting and arranging tasks and subtasks into hierarchies are all part and parcel of heuristic thinking. Cognition is a complex and unified process requiring the collaboration of all its subprocesses.
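The coexistence of hierarchy and heterarchy described above can be made concrete in a minimal sketch: tasks decompose top-down into subtasks (the hierarchy), while a heuristic scores and orders the alternatives available at each level (the heterarchical moment of choice). All task names, and the toy heuristic itself, are invented for illustration; no particular expert-system architecture is implied.

```python
# A minimal sketch (hypothetical names throughout) of heuristic choice
# living inside a hierarchical task structure.

def decompose(task, methods):
    """Return the subtasks of `task`, or [task] if it is primitive."""
    return methods.get(task, [task])

def plan(goal, methods, heuristic):
    """Expand a goal into primitive steps, using the heuristic to
    rank the sibling subtasks at each level of the hierarchy."""
    subtasks = decompose(goal, methods)
    if subtasks == [goal]:          # primitive: nothing more to expand
        return [goal]
    # Heterarchical moment: order sibling subtasks by heuristic merit.
    ordered = sorted(subtasks, key=heuristic)
    steps = []
    for sub in ordered:
        steps.extend(plan(sub, methods, heuristic))
    return steps

# Illustrative domain: designing a lesson (all names are invented).
methods = {
    "design_lesson": ["collect_data", "sort_data", "sequence_activities"],
    "sequence_activities": ["choose_opening", "choose_core", "choose_close"],
}

# Toy heuristic: prefer shorter task names (a stand-in for any
# domain-specific estimate of promise or cost).
steps = plan("design_lesson", methods, heuristic=len)
```

The point of the sketch is structural: the heuristic never replaces the hierarchy, it operates within it, exactly as the paragraph above argues.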
Creativity means asking questions. If we want a machine to be creative, to think in any significant sense of that term, then it must be aroused by what it perceives so as to wonder about it. Much of this wondering is fairly prescribed. We know what to wonder about at a certain basic level. After that, we are taught what to wonder about. Beyond that there is a level at which we wonder about truly new things...
"My claim is that we must teach machines to wonder too. To do so, they must be given very detailed ideas about what to wonder about in specific domains, as well as a set of algorithms about how to wonder in general. In addition, they must be given personalities of a sort. That is, one tends to wonder idiosyncratically. One wonders about things in a way that reflects one's own personal experiences and knowledge about a domain." (Schank 1986:207)
The truly shocking element in Schank's statement above is not that machines might be able to wonder but that intelligence involves wondering, and wondering 'idiosyncratically'. It has been a long time since it was fashionable, or even possible, to introduce the ideas of wonder and idiosyncrasy into western school systems. AI research can be cited to contend that the development of intelligence will require the development of creativity and wondering.
Heuristic reasoning is creative thinking. We use heuristic methods when we synthesize meaning by relating previously unconnected events or data. Heuristics is the path of our wondering when we ask ourselves, "What would happen if...?"
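The "What would happen if...?" move can be sketched computationally as generate-and-test: pair up previously unconnected items, then keep the combinations that survive some plausibility test. The concept names and the toy novelty test below are invented for illustration; any real system would substitute a domain-specific measure.

```python
# Generate-and-test sketch of heuristic "wondering" (illustrative only).
from itertools import combinations

def wonder(items, plausible):
    """Return novel pairings of items that survive the plausibility test."""
    return [pair for pair in combinations(items, 2) if plausible(*pair)]

concepts = ["play", "assessment", "storytelling", "robotics"]

# Toy test: treat pairs whose names differ markedly in length as
# "previously unconnected" (a stand-in for any real novelty measure).
novel = wonder(concepts, lambda a, b: abs(len(a) - len(b)) >= 4)
```

The generator supplies candidate connections; the test embodies the felt "stop and go" that Dewey describes, pruning paths that do not lead anywhere.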
"The future cannot be predicted, but futures can be invented. It was man's ability to invent which has made human society what it is. The mental processes of invention are still mysterious. They are rational, but not logical, that is to say deductive. The first step of the technological or social inventor is to visualize by an act of imagination a thing or state of things which does not yet exist and which appears to him in some way desirable. He can then start rationally arguing backwards from the invention and forward from the means at his disposal, until a way is found from one to the other."
(Dennis Gabor, Inventing the Future, 1963, quoted in Kostelanetz 1971a:31)
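Gabor's "arguing backwards from the invention and forward from the means" has a direct computational analogue in bidirectional search: grow one frontier forward from what we have and one backward from what we want, stopping when the two meet. The sketch below runs over a toy state graph whose state names are invented; it illustrates the shape of the reasoning, not any particular planner.

```python
# Bidirectional search sketch of Gabor's inventive reasoning
# (all state names are illustrative).
from collections import deque

def bidirectional_meet(start, goal, neighbors):
    """Return a state reachable from both ends, or None if none exists."""
    forward, backward = {start}, {goal}
    f_queue, b_queue = deque([start]), deque([goal])
    while f_queue and b_queue:
        # Argue forward from the means at our disposal.
        state = f_queue.popleft()
        for nxt in neighbors.get(state, []):
            if nxt in backward:
                return nxt          # the two lines of argument meet
            if nxt not in forward:
                forward.add(nxt)
                f_queue.append(nxt)
        # Argue backwards from the imagined invention.
        state = b_queue.popleft()
        for prev in neighbors.get(state, []):
            if prev in forward:
                return prev
            if prev not in backward:
                backward.add(prev)
                b_queue.append(prev)
    return None

# Toy undirected graph of intermediate states between means and invention.
links = {
    "means": ["prototype"],
    "prototype": ["means", "refinement"],
    "refinement": ["prototype", "invention"],
    "invention": ["refinement"],
}
meeting = bidirectional_meet("means", "invention", links)
```

Neither frontier alone need reach the other end; invention, on this analogy, is the discovery of the meeting point.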
Sticklen (1993:341) states that "knowledge has a generative flavor and cannot be captured in a static data structure... knowledge itself is unbounded." But educational systems must work with bounded knowledge. And it is our job as instructional systems designers to consider options for the bounding of knowledge which benefit both educational systems and the users of those systems.
As Jaynes (1977) so convincingly argues, even the mind evolves. The way we think is not immune to the forces of change and growth; and why should it be? Perhaps we have too long considered change to be a danger, a threat to stability. Perhaps now we are ready to reconceive change in the light of post-modern ethics, evolutionary theory and the bio-diversity theory of ecological stability.
We can institutionalize change if we are able to conceptualize change as a requirement, a fundamental dynamic in the healthy functioning of any system.
Educational systems are not good environments for testing radical ideas - their contents (our children and people who require various types of guidance) are too precious. The results of testing, if negative, are far too costly in human terms. The question arises: how can we test our ideas about information flow, knowledge engineering and technological instructional systems design? I believe the answer is to extrapolate from AI research and design.
I propose that instructional designers incorporate both AI designs and AI's assumption that intelligence is a multi-faceted, integrated, hierarchically functioning, heterarchically creative system relying on, and in turn influencing, an ever-changing database; and that we begin to structure our educational systems along those lines.
Perhaps if we wholeheartedly accept the idea that change is a necessary and perhaps even a delightful experience, we can incorporate uncertainty into systems without threatening their internal stability or their security.
Apple, M. 1979. Ideology and curriculum: Routledge & Kegan Paul.
Apple, M. 1982. Education and power: Routledge & Kegan Paul.
Arendt, Hannah. 1954. Between past and future: Faber & Faber.
Arendt, Hannah. 1963. On revolution: Penguin.
Arendt, Hannah. 1978. The life of the mind: thinking: Secker & Warburg.
Arnold, W.R., and J.S. Bowie. 1986. Artificial intelligence: a personal commonsense journey: Prentice-Hall Inc.
Ball, S. 1977. Motivation in Education. Edited by S. Ball: Academic Press.
Bauman, Zygmunt. 1993. Postmodern ethics: Blackwell.
Bauman, Zygmunt. 1995. Life in fragments; essays in postmodern morality: Blackwell.
Bennet, N., L. Wood, and S. Rogers. 1997. Teaching through play: teachers' thinking and classroom practice: Open University Press.
Berman, M. 1982. All that is solid melts into air: the experience of modernity: Penguin.
Beveridge, Michael. 1989. The educational implications of intelligent systems. In Intelligent systems in a human context, edited by L. A. Murray and J. T. E. Richardson. Oxford New York Tokyo: Oxford University Press.
Bjork, R.A. 1994. Memory and metamemory: considerations in the training of human beings. In Metacognition: knowing about knowing, edited by J. Metcalfe and A. Shimamura: MIT Press.
Boden, M.A. 1977. Artificial intelligence and natural man: Basic Books Inc.
Brandes, D., and P. Ginnis. 1986. A guide to student-centred learning: Basil Blackwell.
Briggs, L.J., K.L. Gustafson, and M.H. Tillman, eds. 1991. Instructional design: principles and applications: Educational Technology Publications.
Bundy, Alan. 1990. Catalogue of artificial intelligence techniques. Edited by A. Bundy. 3rd ed: Springer-Verlag.
Bylander, Tom, Michael Weintraub, and Sheldon R. Simon. 1993. QUAWDS: diagnosis using different models for different subtasks. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Chandrasekaran, B., and Todd R. Johnson. 1993. Generic tasks and task structures: history, critique and new directions. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Charlet, Jean. 1993. ACTE: a causal model-based knowledge acquisition tool. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Clancey, William J., and Monique Barbanson. 1993. Using the system-model-operator metaphor for knowledge acquisition. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Cockburn, C. 1985. Machinery of dominance: women, men and technical know-how: Pluto.
Coiffet, Ph., J. Zhao, J. Zhou, F. Wolinski, P. Novikoff, and D. Schmit. 1991. About qualitative robot control. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Collins, C., and J. Mangieri, eds. 1992. Teaching thinking: Lawrence Erlbaum.
Console, Luca, Luigi Portinale, Daniele Theseider Dupre, and Pietro Torasso. 1993. Combining heuristic reasoning with causal reasoning in diagnostic problem solving. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
David, J.M., J.P. Krivine, and R. Simmons, eds. 1993. Second Generation Expert Systems: Springer-Verlag.
Degenhart, M.A.B. 1982. Education and the value of knowledge: Allen and Unwin.
Dewey, John. 1916. Democracy and education: an introduction to the philosophy of education: MacMillan.
Dewey, John. 1934. Art as experience. New York: Milton, Balch & Co.
Dewey, John. 1938. Experience and education: Collier-MacMillan.
Evans, C.R., and A.D.F. Robertson. 1968. Cybernetics: University City Press.
Foster, H., ed. 1983. Postmodern culture: Bay.
Freire, Paulo. 1973. Education: the practice of freedom: Writers and Readers.
French, Robert M. 1995. The subtlety of sameness: a theory and computer model of analogy-making. Cambridge, Mass London, England: MIT Press.
Frude, Neil. 1989. Intelligent systems off the shelf: the high street consumer and artificial intelligence. In Intelligent systems in a human context, edited by L. A. Murray and J. T. E. Richardson. Oxford New York Tokyo: Oxford University Press.
Gagne, R.M. 1977. The conditions of learning and theory of instruction. 3rd ed: Holt, Rinehart & Winston.
Gardner, Howard. 1983. Frames of mind: the theory of multiple intelligences: Basic Books.
Gelenter, H.L., and N. Rochester. 1968. Intelligent behavior in problem-solving machines. In Cybernetics, edited by C. R. Evans and A. D. F. Robertson: University City Press.
Gersie, A. 1997. Reflections on therapeutic storymaking: Jessica Kingsley.
Gersie, A., and N. King. 1990. Storymaking in education and therapy: Jessica Kingsley.
Goodall, A. 1985. The guide to expert systems: Learned information.
Guida, Giovanni, and Marina Zanella. 1993. Knowledge-based design using the multi-modeling approach. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Hamano, Fumio. 1991. Intelligent control. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Hare, R.M. 1981. Moral thinking, its levels, method and point: Clarendon.
Hayes-Roth, Barbara. 1993. An architecture for adaptive intelligent systems: Stanford University.
Hodgkin, R.A. 1985. Playing and exploring education through the discovery of order: Methuen.
Holt, John. 1964. How Children Fail. New York: Pitman.
Holt, John. 1967. How Children Learn. New York: Pitman Publishing Corp.
Hubner, Thomas, and Klaus Hormann. 1991. A model-based expert system for the diagnosis of faults in a robot system for cleaning castings. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Hunt, John D., and Christopher J. Price. 1993. Integrating functional models and structural domain models for diagnostic applications. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Isaac, Stephen, and William B. Michael. 1977. Handbook in research and evaluation for education and the behavioral sciences. 3rd ed: EdiTs.
Jaynes, Julian. 1977. The origin of consciousness in the breakdown of the bicameral mind. Boston: Houghton Mifflin.
Johnson, K., and L. Foa. 1989. Instructional Design: new alternatives for effective education and training: Collier Macmillan.
Jordanides, T., and B. Torby, eds. 1991. Expert Systems and Robotics. Berlin Heidelberg: Springer-Verlag.
Khan, B.H. 1997. Web-based instruction: Educational technology.
Koestler, Arthur. 1964. The act of creation: Pan.
Kogan, M. 1978. The politics of educational change: Manchester University.
Kostelanetz, R. 1971a. The politics of speculation. In Social speculations, edited by R. Kostelanetz: William Morrow & Co
Kozulin, A. 1990. Vygotsky's psychology: a biography of ideas: Harvester/Wheatsheaf.
Kostelanetz, R. 1971b. Social speculations: visions for our time. Edited by R. Kostelanetz: William Morrow & Co.
Koton, Phyllis A. 1993. Combining causal models and case-based reasoning. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Lee, V., and P. Williams. 1972. Creativity: Open University.
Lehr, W., ed. 1995. Quality and reliability of telecommunications infrastructure: Lawrence Erlbaum Associates.
Linster, Marc. 1993. Explicit and operational models as a basis for second generation knowledge acquisition tools. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
McCulloch, W.S. 1968. Machines that think and want. In Cybernetics, edited by C. R. Evans and A. D. F. Robertson: University City Press.
McLaren, P., and P. Leonard, eds. 1993. Paulo Freire: Routledge.
McLaughlin, C., and G. Davidson. 1994. Spiritual politics: Findhorn.
Meijer, G.R., T.L. Mai, E.Gaussens, L.O. Hertzberger, and F. Arlabosse. 1991. Robot control with procedural expert system. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Menabrea, L.F. 1968. On the mathematical principles of the analytical engine. In Cybernetics, edited by C. R. Evans and A. D. F. Robertson: University City Press.
Metcalfe, J., and A. Shimamura, eds. 1994. Metacognition; knowing about knowing: MIT Press.
Miller, James Grier. 1995. Living systems. paperback ed: University Press of Colorado.
Moyles, J.R., ed. 1994. The excellence of play: Open University.
Murray, Linda A., and John T.E. Richardson, eds. 1989. Intelligent systems in a human context. Oxford New York Tokyo: Oxford University Press.
Musen, Mark A. 1993. An overview of knowledge acquisition. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Nakashima, H., H. Matsubara, and I. Osawa. 1997. Causality as a key to the frame problem. Artificial Intelligence 91 (1):33-50.
Nelson, T.O., and L. Narens. 1994. Why investigate metacognition? In Metacognition: knowing about knowing, edited by J. Metcalfe and A. Shimamura: MIT Press.
Newman, F., and L. Holzman. 1993. Lev Vygotsky: revolutionary scientist.
Nyberg, D., ed. 1975. The philosophy of open education: Routledge & Kegan Paul.
Oliveira, Eugenio, Rui F. Silva, and Carlos Ramos. 1991. Intelligent cooperation for robotics. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Paillet, Olivier. 1993. Multiple models for emergency planning. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Parkinson, G. 1965. The theory of meaning: Oxford.
Pellegrini, Anthony D., ed. 1995. The future of play theory: a multi-disciplinary inquiry into the contributions of Brian Sutton-smith: State University of New York.
Raczkowsky, Jorg, and Kerstin Seucken. 1991. Sensor planning for the error diagnosis of robot assembly. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Rademakers, Philip, and Johan Vanwelkenhuysen. 1993. Generic models and their support in modeling problem solving behavior. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Randall, W.C. 1995. The stories we are; an essay on self creation: University of Toronto.
Reynaud, Chantal. 1993. Acquisition and validation of expert knowledge by using causal models. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Reynolds, G., and E. Jones. 1997. Master players: learning from children at play: Teachers College Press.
Romiszowski, A.J. 1981. Designing instructional systems: decision making in course planning and curriculum: Kogan Page.
Roth, R.A. 1981. The program evaluation instruction series: University Press of America.
Rush, R. 1957. The doctrines of the great educators: MacMillan.
Saridis, George N. 1991. On the theory of intelligent machines. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Schank, R.C. 1986. Explanation patterns: understanding mechanically and creatively: Lawrence Erlbaum Associates.
Simmons, Reid. 1993. Generate, test and debug: a paradigm for combining associational and causal reasoning. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Simmons, Reid, and Randall Davis. 1993. The roles of knowledge and representation in problem solving. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Skinner, B.F. 1971. Beyond freedom and dignity: Penguin.
Steels, Luc. 1993. The componential framework and its role in reusability. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Sticklen, Jon, and Eugene Wallingford. 1993. On the relationship between knowledge-based systems theory and application programs: leveraging task specific approaches. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Swartout, W., and J. Moore. 1993. Explanation in second generation expert systems. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Talbot, M. 1991. The holographic universe: Harper Perennial.
Tarrant, James M. 1989. Democracy and education: Gower.
Terpstra, Peter, Gertjan van Heijst, Nigel Shadbolt, and Bob Wielinga. 1993. KA process support through generalised directive models. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Tiffin, J., and L. Rajasinghan. 1995. In search of the virtual class: education in an information society: Routledge.
Toffler, Alvin. 1970. Future Shock: Random House.
Triggs, Bill, H.L. Akin, A. Akmehmet, G. Honderd, K. Khodabandehloo, W. Kropatsch, and E. Oliveira. 1991. Integrating diverse knowledge: new ideas in knowledge processing. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Turing, A.M. 1968. Intelligent machinery. In Cybernetics, edited by C. R. Evans and A. D. F. Robertson: University City Press.
Tzafestas, Spyros, and Elpida Tzafestas. 1991. The blackboard architecture in knowledge-based robotic systems. In Expert Systems and Robotics, edited by T. Jordanides and B. Torby. Berlin Heidelberg: Springer-Verlag.
Velde, Walter van de. 1993. Issues in knowledge level modelling. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Vlastos, G. 1991. Socrates: Cambridge.
Vygotsky, Lev S. 1993. Problems of general psychology: Plenum Press.
Vygotsky, Lev S. 1962. Thought and language: MIT.
Weinert, F.E., and R.H. Kluwe. 1987. Metacognition, motivation and understanding: Lawrence Erlbaum Associates.
Weininger, O., and S. Daniel. 1992. Playing to learn: the young child, the teacher and the classroom: Charles C. Thomas.
Wick, Michael R. 1993. Second generation expert systems. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.
Wielinga, Bob, Walter Van de Velde, Guus Schreiber, and Hans Akkermans. 1993. Towards a unification of knowledge modelling approaches. In Second Generation Expert Systems, edited by J. M. David, J. P. Krivine and R. Simmons: Springer-Verlag.