Basic Information

Bio
Current interests
· Relationship between cognitive load, attention and learning
· Brain imaging
· Linking neurophysiology to behaviour
Senior Researcher (1999-)
National Institute of Advanced Industrial Science and Technology (AIST), Neuroscience Research Institute, Cognitive and Behavioral Sciences Group.
Researcher (1997-1999)
Electrotechnical Laboratory, Information Science Division, Cognitive Development Group.
STA Fellow (1995-1997)
Electrotechnical Laboratory, Information Science Division, Cognitive Science Section.
· Relational theory of cognition and development
· Measuring relational complexity in cognitive tasks
· Systematicity and its implications for classical and connectionist cognitive models
Research Associate (1994-95)
After submitting my thesis I worked with Professor Graeme Halford in the Department of Psychology at The University of Queensland. There, I worked on:
· STAR (Structure Tensor Model of Analogy and Relations)
· Relational versus associative modes of cognitive processing
· A relational theory of cognition and development
Doctorate (1991-94)
Title: Connectionism and the Problem of Systematicity
Abstract
Description: Systematicity is the property whereby human cognitive capacity is organized around structural similarity. For example, the capacity to understand the concept "John loves Mary" extends to the structurally related concept "Mary loves John". My thesis is concerned with how well connectionist models support systematicity. In particular, I consider systematicity as generalization to a novel position: for example, can a network correctly compute complex objects of the form (Subject loves Object) where "Mary" appears in the Subject position, having only ever been trained on examples with "Mary" in the Object position? I found that feedforward and simple recurrent networks could not demonstrate generalization across position, assuming no similarity between the representations of atomic objects (e.g., John, Mary). This is because the weights that encode and decode component representations at different positions are independent of one another. However, a third architecture, the Tensor-recurrent network (proposed in this thesis), does support generalization across position under the same assumptions. Weight dependence is ensured by exploiting the tensor network's role-filler (position-component) method of representing structured objects, which means that internal component representations are constructed independently of their position within a complex object. Appropriate role (position) and filler (component) representations are nonetheless learned by exploiting the learning ability of feedforward and recurrent networks, that is, by backpropagating update information through the tensor network to the weights that generate the internal role and filler representations. However, the Tensor-recurrent network was designed only for flat structures; further work is needed to handle recursively structured objects. (An illustrative sketch of the role-filler binding idea follows this entry.)
Department: Computer Science, The University of Queensland
Supervisor: Dr Janet Wiles
Accepted: February, 1995
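The role-filler (tensor product) binding that the abstract refers to can be illustrated with a small sketch. The Python/NumPy code below is a generic, minimal example of tensor-product binding under assumed toy vectors, not code from the thesis or the Tensor-recurrent network itself; the names (`fillers`, `roles`, `bind`, `encode`, `unbind`) and the specific vectors are illustrative assumptions. It only shows why a filler's representation can be constructed independently of the position (role) it occupies, which is the property the Tensor-recurrent network exploits.

```python
import numpy as np

# Hypothetical filler (component) vectors and role (position) vectors.
# These particular vectors and names are illustrative, not from the thesis.
rng = np.random.default_rng(0)
fillers = {name: rng.standard_normal(4) for name in ("john", "mary")}
roles = {"subject": np.array([1.0, 0.0]), "object": np.array([0.0, 1.0])}

def bind(role, filler):
    """Bind a filler to a role as an outer (tensor) product."""
    return np.outer(role, filler)

def encode(subject, obj):
    """Encode a proposition like (Subject loves Object) by superposing
    the role-filler bindings in a single tensor."""
    return bind(roles["subject"], fillers[subject]) + bind(roles["object"], fillers[obj])

def unbind(tensor, role):
    """Retrieve the filler bound to a role by contracting the tensor with
    the role vector (exact here because the role vectors are orthonormal)."""
    return role @ tensor

# "Mary loves John": Mary's vector fills the Subject position even if it had
# only ever appeared in the Object position before -- the filler
# representation itself does not depend on the position it is bound to.
t = encode("mary", "john")
assert np.allclose(unbind(t, roles["subject"]), fillers["mary"])
assert np.allclose(unbind(t, roles["object"]), fillers["john"])
```

In the thesis, the role and filler representations are not fixed by hand as in this toy example; per the abstract, they are learned by backpropagating update information through the tensor network to the weights that generate them.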
University studies
Doctor of Philosophy, PhD (Department of Computer Science)
Bachelor of Arts with 1st class honours, BA[hon] (Department of Computer Science)
Bachelor of Science, BSc (Departments of Mathematics and Physics)
Postgraduate Diploma in Education, PGDip. Ed. (Department of Education)
Papers (137)
(Paper listing truncated.) Authors: Naotsugu Tsuchiya, Hayato Saigo, Steven Phillips, Shigeru Taguchi, Yuko Ishihara, Yusuke Moriguchi, Tamami Nakano, Ai Koizumi, Makiko Yamada, Masafumi Oizumi, Takato Horii, Tadahiro Taniguchi, Noburo Saji. Venues: Frontiers in Psychology (2023); Annual Meeting of the Cognitive Science Society (2022).
Author Statistics
#Papers: 138
#Citation: 5207
H-Index: 30
G-Index: 70
Sociability: 5
Diversity: 3
Activity: 21
Data Disclaimer
The page data are drawn from open Internet sources, cooperating publishers, and automatic analysis by AI technology. We make no commitment or guarantee as to the validity, accuracy, correctness, reliability, completeness, or timeliness of the page data. If you have any questions, please contact us by email: report@aminer.cn