The document surveys methods for robot navigation, from simple to complex. It begins by explaining turtle-graphics-style motion commands and sensor-feedback methods. It then introduces the use of a coordinate system and estimation of the robot's position, so that waypoints and goals can be defined as coordinates. Commonly used waypoint navigation is explained, along with automatic waypoint generation using RRT. Finally, it covers graph searches such as Dijkstra's algorithm, as well as potential fields, to optimize path planning. The focus is on moving from object-based to coordinate-based representations and on selecting rational routes.
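To make the graph-search step concrete, here is a minimal sketch of Dijkstra's algorithm planning a path on a small occupancy grid. The 4-connected grid, unit step costs, start, and goal are illustrative assumptions, not material taken from the slides.

```python
import heapq

def dijkstra_grid(grid, start, goal):
    """Shortest path on a 4-connected occupancy grid (0 = free, 1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, cell = heapq.heappop(queue)
        if cell == goal:
            break
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0  # unit cost per move
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(queue, (nd, (nr, nc)))
    # Walk back from the goal to reconstruct the path.
    path, cell = [], goal
    while cell in prev or cell == start:
        path.append(cell)
        if cell == start:
            break
        cell = prev[cell]
    return list(reversed(path))

# Hypothetical 4x5 grid with one obstacle wall.
grid = [[0, 0, 0, 0, 0],
        [0, 1, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 0]]
print(dijkstra_grid(grid, (0, 0), (3, 4)))
```

The returned list of cells can then serve as a sequence of waypoints for the coordinate-based navigation described above.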
Searching Behavior of a Simple Manipulator only with Sense of Touch Generated... (Ryuichi Ueda)
This document describes a study that uses probabilistic flow control (PFC) to generate searching behavior in a simple robotic manipulator tasked with finding and grasping a fixed rod. PFC weights particles representing possible rod locations by a value function, which guides the robot's motion in a search-like pattern: the arm taps and changes direction while compensating for the uncertainty of localizing the rod through touch sensing alone. The results demonstrate that PFC allows the robot to complete the task through searching while avoiding local-minima problems, although completion times grow with higher weighting rates that prioritize exploration over exploitation. Future work will apply PFC to more practical cases and allow the weighting rate to vary.
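A minimal one-dimensional sketch of the general mechanism may help: particles over the rod position are updated by a touch likelihood and additionally weighted by a value function, and the arm moves toward the resulting belief. The sensor model, value function, weighting rate alpha, and roughening noise below are illustrative assumptions, not the paper's implementation.

```python
import math
import random

TRUE_ROD = 7.3      # hidden rod position (unknown to the controller)
TOUCH_RANGE = 0.3   # a tap detects the rod within this distance

def tap(hand):
    """Simulated touch sensor: True if the rod is within reach of the tap."""
    return abs(hand - TRUE_ROD) < TOUCH_RANGE

def update(particles, hand, touched, alpha=0.5):
    """Touch-likelihood update, then value weighting, then resampling."""
    weights = []
    for p in particles:
        near = abs(p - hand) < TOUCH_RANGE
        likelihood = 1.0 if near == touched else 0.05   # assumed sensor model
        value = -abs(p - hand)                          # assumed value function
        weights.append(likelihood * math.exp(alpha * value))
    new = random.choices(particles, weights=weights, k=len(particles))
    # Small roughening noise keeps the particle set from collapsing.
    return [p + random.gauss(0.0, 0.05) for p in new]

particles = [random.uniform(0.0, 10.0) for _ in range(1000)]
hand = 0.0
for step in range(100):
    touched = tap(hand)
    if touched:
        print(f"found the rod after {step} taps, hand at {hand:.2f}")
        break
    particles = update(particles, hand, touched)
    target = sum(particles) / len(particles)     # head toward the weighted belief
    hand += max(-0.4, min(0.4, target - hand))   # bounded move per "tap"
else:
    print("rod not found within 100 taps")
```

In this toy, the interplay between the no-touch updates (which push belief away from already-tapped regions) and the value weighting (which pulls the arm toward the remaining belief) is what produces a sweeping, search-like motion.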
This document summarizes an experiment using Particle Filter on Episode (PFoE) to teach and replay behaviors on a mobile robot. PFoE allows a robot to make decisions directly from recorded episodes of sensor values and actions, without needing a map. In the experiment, a trainer used a gamepad to control the robot and teach it behaviors over three laps, which were recorded as an episode. During replay, the robot was able to reproduce the taught behaviors in differing sensory situations and to alternate its goal appropriately, demonstrating that PFoE can enable teach-and-replay with a simple algorithm.
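As a rough sketch of how a particle filter can run over an episode rather than a map, the toy below keeps particles on the time axis of a recorded (sensor, action) sequence, reweights them by similarity to the current sensor reading, advances them, and votes on the recorded action. The scalar sensor, Gaussian-style likelihood, and majority vote are illustrative assumptions, not the experiment's actual models.

```python
import math
import random

def likelihood(observed, recorded, sigma=0.1):
    """Assumed similarity between the current and a recorded sensor value."""
    return math.exp(-((observed - recorded) ** 2) / (2 * sigma ** 2))

def pfoe_step(particles, episode, observed):
    """Particles index time steps of the episode; weight, resample, advance, vote."""
    weights = [likelihood(observed, episode[t][0]) for t in particles]
    if sum(weights) == 0.0:
        weights = [1.0] * len(particles)                 # fall back to uniform
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [min(t + 1, len(episode) - 1) for t in particles]  # advance in time
    actions = [episode[t][1] for t in particles]
    return particles, max(set(actions), key=actions.count)

# Hypothetical taught episode: go forward while the wall reading is low, then turn.
episode = [(0.2, "forward"), (0.3, "forward"), (0.8, "turn_left"), (0.9, "turn_left")]
particles = [random.randrange(len(episode)) for _ in range(100)]
particles, action = pfoe_step(particles, episode, observed=0.25)
print(action)  # expected to print "forward"
```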
Direct use of particle filters for decision making (Ryuichi Ueda)
The document discusses three cases where particle filters are used directly for decision making under uncertainty.
The first case presents a real-time QMDP approach that allows a robot goalkeeper to choose appropriate sub-tasks based on its probabilistic self-localization and the accuracy required for each sub-task.
The second case introduces a probabilistic flow control method that generates searching behaviors for robots to reduce uncertainty through motion when information is limited.
The third case proposes a particle filter on episode approach for robots to learn decision-making rules based on similarities to past experiences, without requiring environmental maps. This approach could serve as a cognitive model for robots with limited computing resources.
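To make the first case more concrete, the sketch below shows QMDP-style action selection over a particle belief: each action's value is averaged over the particles and the best average wins. The discrete states, actions, and Q-table are invented for illustration; the goalkeeper task's actual state space and value function are not reproduced here.

```python
# Sketch of QMDP action selection over a particle belief, assuming discrete
# states and a precomputed Q(state, action) table.

STATES = ["left", "center", "right"]          # hypothetical ball directions
ACTIONS = ["dive_left", "stay", "dive_right"]
Q = {                                          # assumed action values per state
    "left":   {"dive_left": 1.0, "stay": 0.2, "dive_right": 0.0},
    "center": {"dive_left": 0.3, "stay": 1.0, "dive_right": 0.3},
    "right":  {"dive_left": 0.0, "stay": 0.2, "dive_right": 1.0},
}

def qmdp_action(particles):
    """Average Q over the particle belief and pick the best action."""
    best_action, best_value = None, float("-inf")
    for a in ACTIONS:
        value = sum(Q[s][a] for s in particles) / len(particles)
        if value > best_value:
            best_action, best_value = a, value
    return best_action

# Belief mostly on "left", with some uncertainty toward "center".
particles = ["left"] * 70 + ["center"] * 30
print(qmdp_action(particles))  # expected to print "dive_left"
```

The key point shared by all three cases is that the particle set itself, rather than a single state estimate, feeds directly into the decision rule.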