Saturday, March 10, 2007
International Symposium on Learning 2006 - Day 2 - Part 2
Day 2 - International Symposium on Learning 2006 part II
Friday, December 01, 2006
10:08 AM
International Symposium on Learning 2006 sponsored by KAST
Yoshihiko Nakamura
What do Humanoids Learn from Humans?
IRT Foundation for Man and Aged Society - a project researching a broad range of areas involving robotics.
Humanoid - a human-like robot (one built to resemble a human).
Discussed the modeling of humanoids by looking at the human anatomy. In particular, the framework of bones, joints, ligaments, and muscles.
This is interesting, but where is the learning science connection? When do we talk about learning? If I wanted to build a humanoid, I would be loving this, but this isn't really about what humanoids learn from humans; it's about what robot designers learn from the human body.
International Symposium on Learning 2006 - Day 2 - Part 1
Day 2 - International Symposium on Learning 2006
Thursday, November 30, 2006
4:38 PM
International Symposium on Learning 2006 sponsored by KAST
Raja Chatila - Learning Robots: From spatial cognition to skill acquisition
LAAS-CNRS
Raja.Chatila@laas.fr
What is a cognitive robot?
o Integration of perception, decision, and action
o Learning concepts and interpreting the environment
o Deliberation and decision-making
o Learning new skills
o Communication, interaction, and language
Robot companion - European Project COGNIRON
Learning Requirements:
o Objects
· Multi-sensory 3D object modeling and recognition; from view-based to object-based
o Space
· Maps, regions, concepts. Appearance, geometrical, topological labeled models, landmarks
o Situations
· Spatial and temporal relationships
Spatial mapping requires a combination of object and topographical processing.
This involves incremental mapping that the robot learns over time.
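The incremental-mapping idea can be illustrated with a toy topological map: each place the robot visits becomes a node, each traversal adds an edge, and routes are planned over whatever has been learned so far. This is only a minimal sketch of the concept, not COGNIRON's actual approach; the place names are hypothetical.

```python
from collections import deque

class TopologicalMap:
    """Incremental topological map: places are nodes, traversals add edges."""

    def __init__(self):
        self.edges = {}  # place name -> set of directly reachable places

    def observe_transition(self, a, b):
        """Record that the robot moved directly between places a and b."""
        self.edges.setdefault(a, set()).add(b)
        self.edges.setdefault(b, set()).add(a)

    def route(self, start, goal):
        """Breadth-first search over the map learned so far."""
        frontier, seen = deque([[start]]), {start}
        while frontier:
            path = frontier.popleft()
            if path[-1] == goal:
                return path
            for nxt in self.edges.get(path[-1], ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(path + [nxt])
        return None  # goal not reachable in the map learned so far

# The map grows as the robot explores; routing uses only what it has seen.
m = TopologicalMap()
m.observe_transition("hall", "kitchen")
m.observe_transition("kitchen", "lab")
path = m.route("hall", "lab")
```

The point of the sketch is that the map is never given up front; every traversal extends it, and navigation quality improves with experience.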
Moving beyond the spatial toward communication. Really talking about a sort of communicative competence: the robot takes signals from the environment and interprets them to devise appropriate responses.
Object modeling. First you need to recognize items in the environment. This requires constant processing of environmental data, including the use of 2D tracking and 3D representations.
A lot of training is required. This is similar to training voice recognition or even handwriting recognition. They started it with videos of people doing a series of actions. This is then interpreted by the robot via a 3D representation of the human.
Move to autonomous learning.
Learning concepts to learning skills
o Open-ended
o Common representations
o Process guided by utility
o Incremental learning
Interesting building of temporal knowledge. The robot stores information "maps" about an object from multiple perceptual angles. These maps are then combined to enable the robot to recognize the object at any angle.
Multiple object recognitions can be combined to recognize groups (scenes) of objects. This is similar to chunking in language learning. Learning to group items for easier production, or in this case recognition.
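The multi-angle "maps" idea can be sketched as storing several feature vectors per object and matching a query against the nearest stored view. This is a minimal illustration of the concept only; the feature vectors and threshold below are invented, while real systems use learned descriptors from camera data.

```python
import math

def view_distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class MultiViewModel:
    """Stores feature 'maps' of an object seen from several angles;
    a query matches the model through its closest stored view."""

    def __init__(self, name):
        self.name = name
        self.views = []  # one feature vector per viewing angle

    def add_view(self, features):
        self.views.append(features)

    def match(self, query):
        """Distance from the query to the nearest stored view."""
        return min(view_distance(query, v) for v in self.views)

def recognize(models, query, threshold=1.0):
    """Return the name of the best-matching model, or None if too far."""
    best = min(models, key=lambda m: m.match(query))
    return best.name if best.match(query) <= threshold else None

# Hypothetical feature vectors: a cup seen from three angles, a box from two.
cup = MultiViewModel("cup")
for v in ([1.0, 0.2, 0.1], [0.8, 0.6, 0.1], [0.3, 0.3, 0.9]):
    cup.add_view(v)
box = MultiViewModel("box")
for v in ([0.1, 1.0, 1.0], [0.2, 0.9, 0.8]):
    box.add_view(v)

result = recognize([cup, box], [0.85, 0.55, 0.15])  # near the cup's second view
```

Combining per-view models this way is what lets recognition work at angles the robot never stored exactly, as long as some stored view is close enough.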
He provided a cognitive chart at the end, which would have been an hour's discussion in its own right. I wish he could have spent more time on it, given this audience.
Take away - Learning about one's environment is really a precursor to interacting with humans in any sort of naturalistic way. To an extent, this is entirely possible at this point, but it will require a lot of work. Also, autonomy is still a ways off, but it's as much a question of time/information as it is about technology. The building of communal knowledge.
International Symposium on Learning
Day 1
Thursday, Nov 30, 2006, 11:14 AM
Hotel Grand International - Seoul, South Korea
Here are some notes that I took on my tablet using MS OneNote. That might explain the terrible appearance, because these are all based on MS handwriting recognition.
International Symposium on Learning 2006 sponsored by KAST
1st speaker = Daeyeol Lee
Neural mechanism of reinforcement learning and decision making
* Monkey video
-sound of brain activity with different movements/decisions
* Matching pennies - studied via game theory (i.e., Nash in A Beautiful Mind)
-No behavior is random
-The animal must randomize its choices, otherwise the computer will play off of its strategy. This is the reinforcement model
* Dorsolateral prefrontal cortex - associated with learning behaviors
* Essentially, it seems that fewer neurons fire after repeated trials. This could indicate that once connections are made, future decisions require less of a load on the brain.
Conclusion - seems a little like a "no duh" conclusion.
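The matching-pennies reinforcement model described above can be sketched in a few lines: the agent updates action values from reward, while the computer opponent exploits any bias in the agent's choice history, which is exactly the pressure that pushes the learner toward randomized play. The update rule, softmax policy, and parameters here are assumptions for illustration, not Lee's actual model.

```python
import math
import random

random.seed(0)  # fixed seed so the simulation is reproducible

def softmax_choice(values, temp=0.5):
    """Sample action i with probability proportional to exp(values[i] / temp)."""
    exps = [math.exp(v / temp) for v in values]
    r = random.uniform(0, sum(exps))
    cum = 0.0
    for i, e in enumerate(exps):
        cum += e
        if r <= cum:
            return i
    return len(exps) - 1

def matching_pennies(trials=5000, alpha=0.1):
    """Agent learns left/right action values from reward; the opponent
    predicts the agent's historically favoured side, so any systematic
    bias gets exploited and drives the agent toward randomized choices."""
    values = [0.0, 0.0]  # learned value of choosing side 0 or side 1
    counts = [0, 0]      # how often the agent has picked each side
    wins = 0
    for _ in range(trials):
        prediction = 0 if counts[0] >= counts[1] else 1
        choice = softmax_choice(values)
        counts[choice] += 1
        reward = 1 if choice != prediction else 0  # win by evading the prediction
        wins += reward
        # simple reinforcement update: move the chosen value toward the reward
        values[choice] += alpha * (reward - values[choice])
    return wins / trials

win_rate = matching_pennies()
```

Against an exploiting opponent the only stable strategy is near-even play, so the long-run win rate hovers around one half; a deterministic agent would be beaten far more often.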
Overall, a great speaker. He really made the topic interesting and engaged the audience. Interesting research.
Amy Poremba - Learning and Memory in the Auditory System
Testing auditory signals and learning behaviors in rabbits.
* What is the brain doing during learning?
-Removed brain segments to isolate the areas necessary for learning whether a tone is accompanied by a shock to the animal's foot.
* Sensory modalities are used at the same time (auditory/verbal)
* Environmental attributes can affect learning. I wish this had been clarified better throughout the presentation.
Panel discussion
Operant Conditioning (evident in previous speakers work)
-law of effect - the response is a function of its consequences.
-Theory of mind: the ability to build models and guess what others are thinking. This info is then used to predict actions, and counter-measures can be taken.
-Moves from Operant Conditioning to more of a Cognitivist theory of a processing model.
-Operant Conditioning cannot explain the thought process in game theory
Would it make a difference if the experiments were done with 2 monkeys as opposed to 1 monkey vs. a computer?
-Yes, but this will take a while to do.
Can your research explain decision-making (improve decision-making)?
Is there a universal learning process?
There are many similarities in animal models.
She essentially stated that operant conditioning has a major place in learning.
