In this kind of diagram, each object is represented by a network that describes the relationships between its parts. Each part, in turn, is described in terms of the relationships between its own parts, and so on, until those sub-descriptions reach a level at which each one becomes a simple list of properties, such as an object's color, size, and shape. For more details, see §§Frames, Quillian's thesis in Semantic Information Processing, and Patrick Winston's book The Psychology of Computer Vision.
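The recursive part-whole structure described above can be sketched in code. This is only an illustrative data structure, not anything from the book: each object holds sub-parts and relationships among them, and the recursion bottoms out in a plain list of properties.

```python
# A minimal sketch (my illustration, not the book's) of a part-whole
# network: each object describes relationships between its parts, and
# sub-descriptions bottom out in simple property lists.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Part:
    # Leaf-level description: a simple property list (color, size, shape).
    properties: Dict[str, str] = field(default_factory=dict)
    # Sub-parts, each described by its own network of relationships.
    parts: Dict[str, "Part"] = field(default_factory=dict)
    # Relationships among the immediate sub-parts,
    # e.g. ("seat", "is-supported-by", "leg").
    relations: List[Tuple[str, str, str]] = field(default_factory=list)

def is_simple(p: Part) -> bool:
    """A description is 'simple' once it has no further sub-parts."""
    return not p.parts

# Example: a chair described down to simple property lists.
leg = Part(properties={"color": "brown", "shape": "cylinder", "size": "short"})
seat = Part(properties={"color": "brown", "shape": "square"})
chair = Part(parts={"seat": seat, "leg": leg},
             relations=[("seat", "is-supported-by", "leg")])
```

The point of the structure is that a query can descend through `parts` until `is_simple` holds, at which point only a property list remains to inspect.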

89

Some persons claim to imagine scenes as though looking at a photograph, whereas other persons report no such vivid experiences. However, some studies appear to show that both are equally good at recalling details of remembered scenes.

90

See, for example, http://www.usd.edu/psyc301/Rensink.htm and http://nivea.psycho.univ-paris5.fr/Mudsplash/Nature_Supp_Inf/Movies/Movie_List.html.

91

This prediction scheme appears in section §6-7 of my PhD thesis, "Neural-Analog Networks and the Brain-Model Problem," Mathematics Dept., Princeton University, Dec. 1953. At that time, I had heard that there were 'suppressor bands' like the one in my diagram at the margins of some cortical areas. These seem to have vanished from more recent texts; perhaps some brain researchers could find them again.

92

In Push Singh's PhD thesis [ref], two robots actually consider such questions. See also the 2004 BT paper.

93

The idea of a panalogy first appeared in §§Frames, and it was developed further in chapter 25 of The Society of Mind. A seeming alternative might be to have almost-separate sub-brains for each realm—but that would lead to similar questions at some higher cognitive level.

94

I got some of these ideas about 'trans' from the early theories of Roger C. Schank, described in Conceptual Information Processing, Amsterdam: North-Holland, 1975.

95

Robertson Davies, Tempest-Tost, 1992, ISBN 0140167927.

96

As suggested in §3-12, we often learn more from failure than from success, because success means that we already possessed the skill, whereas failure shows us that we have something new to learn.

97

See Douglas Lenat, The Dimensions of Context Space, at http://www.ai.mit.edu/people/phw/6xxx/lenat2.pdf.

98

This discussion is adapted from my introduction to Semantic Information Processing, MIT Press, 1969.

99

From Alexander R. Luria, The Mind of a Mnemonist, Cambridge, MA: Harvard University Press, 1968.

100

Landauer, Thomas K. (1986). "How much do people remember? Some estimates of the quantity of learned information in long-term memory." Cognitive Science, 10, 477–493. See also Ralph Merkle's discussion of this at http://www.merkle.com/humanMemory.html. Furthermore, according to Ronald Rosenfeld, the information in typical text is about 6 bits per word. See Rosenfeld, Ronald, "A maximum entropy approach to adaptive statistical language modeling."
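To give the "bits per word" figure some concrete meaning, here is a toy illustration (my own, not Rosenfeld's method): a unigram word-frequency model assigns each word an information content of -log2 p(word), and the average over a text gives its entropy in bits per word. Rosenfeld's estimate of roughly 6 bits comes from far more sophisticated context-aware models trained on large corpora; a tiny sample like this one gives a much smaller number.

```python
# Toy estimate of bits per word under a unigram model.
# Real language-model estimates (like Rosenfeld's ~6 bits/word)
# use large corpora and context; this only illustrates the units.

import math
from collections import Counter

def unigram_bits_per_word(text: str) -> float:
    """Average bits per word: H = -sum p(w) * log2 p(w)."""
    words = text.lower().split()
    counts = Counter(words)
    total = len(words)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

sample = "the cat sat on the mat and the dog sat on the rug"
print(round(unigram_bits_per_word(sample), 2))  # prints 2.78
```

Rare words carry more bits than common ones, which is why richer text (and richer models of context) shift the per-word average.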

You are reading The Emotion Machine.