Lidar is an optical remote sensing technology that uses light (often from a pulsed laser) to measure distance. It works by illuminating a target with a laser and analyzing the reflected light. Common components of a lidar system include a laser, scanner/optics, a photodetector, and receiver electronics. Lidar offers advantages over radar such as faster target lock-on and a narrower beam spread. Applications include agriculture, mapping, oil and gas exploration, engineering, autonomous vehicles, and atmospheric sensing from aircraft or satellites. Recent advances include lidar speed guns, Google's driverless car, which uses lidar for navigation, and autonomous cruise control systems based on lidar.
The document discusses next-generation V2X (vehicle-to-everything) technology, covering vehicular communication standards such as DSRC, IEEE 802.11p, and Cellular V2X (C-V2X). It outlines advancements in wireless communication for vehicular networks, detailing the evolution from existing standards to newer ones like 802.11bd and C-V2X adaptations for 5G networks. Additionally, it introduces the concept of Vehicular Named Data Networking (V-NDN), which shifts data communication focus from device-centric to a more efficient, data-centric approach.
The document discusses magnetic levitation (maglev), a technology that suspends objects using magnetic fields, detailing its principles, applications, and various types of systems like maglev trains and magnetic bearings. It highlights advantages such as reduced energy consumption and noise pollution, and explores future possibilities including space propulsion and high-speed travel. Despite its potential, the technology faces limitations related to complexity and the need for skilled operators.
The document discusses the potential of drone technology across various sectors, emphasizing its benefits in disaster aid, healthcare, journalism, agriculture, and military applications. It highlights innovative uses such as delivering medical supplies, monitoring wildlife and environmental changes, and enhancing business operations, while also addressing concerns of misuse without proper regulations. As advancements continue, drones are expected to create new opportunities and improve efficiency in multiple fields.
The document provides an overview of stepper motors, detailing concepts such as step angle, types (variable reluctance, permanent magnet, and hybrid), and methods of operation (1-phase-on, 2-phase-on, half step). It explains the principles behind each type of stepper motor and includes truth tables for their operational modes. Key calculations for determining step angles based on stator and rotor configurations are also presented.
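As a rough illustration of the step-angle calculations that summary refers to, here is a minimal Python sketch using the standard textbook formulas; the phase and tooth counts are made-up example values, not figures from the document.

```python
# Step-angle formulas commonly used for stepper motors (illustrative sketch).

def step_angle_from_phases(num_phases: int, rotor_teeth: int) -> float:
    """Step angle in degrees: 360 / (number of phases * rotor teeth)."""
    return 360.0 / (num_phases * rotor_teeth)

def step_angle_from_teeth(stator_teeth: int, rotor_teeth: int) -> float:
    """Step angle from tooth counts: |Ns - Nr| * 360 / (Ns * Nr)."""
    return abs(stator_teeth - rotor_teeth) * 360.0 / (stator_teeth * rotor_teeth)

if __name__ == "__main__":
    # e.g. a 4-phase motor with 50 rotor teeth -> 1.8 degrees per full step
    full_step = step_angle_from_phases(4, 50)
    print(full_step)        # 1.8
    print(full_step / 2)    # half-step mode halves the step angle: 0.9
    # e.g. a variable-reluctance motor with 8 stator and 6 rotor teeth -> 15 degrees
    print(step_angle_from_teeth(8, 6))
```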
The document discusses digital image upscaling techniques from traditional methods to deep learning methods. It covers classical super-resolution methods for images and videos, including interpolation-based, edge-directed, frequency-domain, and example-based methods. It also explains the challenges of super-resolution such as information loss during the digital conversion process.
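The interpolation-based upscaling it mentions can be tried in a few lines; the sketch below assumes OpenCV is installed and uses placeholder file names.

```python
import cv2  # OpenCV; bicubic interpolation is one of the classical upscaling methods

# Read a low-resolution image and upscale it 2x with bicubic interpolation.
# "input.png" / "output.png" are placeholder file names.
lr = cv2.imread("input.png")
hr = cv2.resize(lr, None, fx=2.0, fy=2.0, interpolation=cv2.INTER_CUBIC)
cv2.imwrite("output.png", hr)
```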
A hybrid electric vehicle combines an electric motor with an internal combustion engine or other power source to improve fuel efficiency. There are two main types of hybrid systems - series and parallel. In a series hybrid, the engine only charges a battery which powers the electric motor to turn the wheels. In a parallel hybrid, both the engine and motor can power the wheels directly and work together or independently based on driving conditions. Key components of hybrid systems include batteries to store energy, a generator to charge batteries, and regenerative braking to capture kinetic energy during deceleration. Hybrid vehicles provide benefits like lower emissions and fuel use while maintaining the performance of conventional vehicles. Further research and development of hybrid technology promises more efficient and environmentally friendly vehicles.
This document provides an overview and summary of a presentation on Simultaneous Localization and Mapping (SLAM). It introduces the speaker, Dong-Won Shin, and his background and research in SLAM. The contents of the presentation are then outlined, including an introduction to SLAM, traditional SLAM approaches like Extended Kalman Filter SLAM and FastSLAM, efforts towards large-scale mapping like graph-based SLAM and loop closure detection, modern state-of-the-art systems like ORB SLAM, KinectFusion and Lidar SLAM, and applications of SLAM. Key algorithms in visual odometry, backend optimization, and loop closure detection are also summarized.
This document provides an overview of Simultaneous Localization and Mapping (SLAM), detailing its goals and components, including the use of various sensors for estimating ego-motion and constructing maps. It discusses challenges such as noise and outlier handling, the importance of feature associations, and the role of bundle adjustment in optimizing the SLAM process. Additionally, it highlights the integration of visual inertial odometry (VIO) and mentions relevant tools and open-source implementations for SLAM applications.
The document discusses various methods for robot navigation from simple to complex. It begins by explaining turtle graphics and sensor feedback methods. It then introduces using a coordinate system and estimating the robot's position to define waypoints and goals as coordinates. Commonly used waypoint navigation is explained along with automatic waypoint generation using RRT. Finally, it covers using graph searches like Dijkstra's algorithm and potential fields to optimize the path planning. The focus is on moving from object-based to coordinate-based representations and selecting rational routes.
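As a concrete illustration of the graph-search step, here is a minimal Dijkstra sketch over a small, hypothetical waypoint graph (node names and edge costs are made up for the example).

```python
import heapq

def dijkstra(graph, start):
    """Shortest path costs from `start` over a dict-of-dicts weighted graph."""
    dist = {start: 0.0}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph[node].items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(pq, (nd, nbr))
    return dist

# Hypothetical waypoint graph: nodes are waypoints, weights are travel costs.
graph = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"A": 1.0, "C": 2.0, "D": 5.0},
    "C": {"A": 4.0, "B": 2.0, "D": 1.0},
    "D": {"B": 5.0, "C": 1.0},
}
print(dijkstra(graph, "A"))  # {'A': 0.0, 'B': 1.0, 'C': 3.0, 'D': 4.0}
```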
Deep High Resolution Representation Learning for Human Pose Estimation (harmonylab)
The document discusses a top-down approach for human pose estimation using the HRNet model, which maintains high-resolution representations while incorporating low-resolution subnetworks in parallel. It highlights the model's superior accuracy in detecting keypoints compared to existing methods and details experimental results using datasets like COCO and MPII. The proposed architecture enhances performance through multi-scale fusion, demonstrating effectiveness in maintaining high resolution throughout the detection process.
This document outlines high-quality measurement point acquisition and automatic modeling technology for equipment and environments using 3D laser scanning. It discusses techniques for efficient point cloud processing, optimal scanner placement, and registration methods, emphasizing the importance of data quality and measurement efficiency. Various applications in urban and industrial settings are highlighted, showcasing significant improvements in model accuracy and operational efficiency.
Learning Less is More - 6D Camera Localization via 3D Surface Regression (Brian Younggun Cho)
This document introduces LessMore ("Learning Less is More - 6D Camera Localization via 3D Surface Regression"), a state-of-the-art (SOTA) approach to learning-based visual localization, based on a talk given by Eric Brachmann at the ECCV 2018 Visual Localization workshop.
This document discusses using computational graphs to calculate gradients for neural networks and other deep learning models. It explains that directly calculating gradients on paper for complex models is difficult and inefficient. Instead, computational graphs can be used as a data structure to represent the calculation process within a model. Nodes in the graph correspond to mathematical operations like matrix multiplications and activation functions. This allows gradients to be efficiently calculated by backpropagating through the graph. Examples of linear models, convolutional networks, and neural Turing machines are given to show how computational graphs can handle both simple and complex models.
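A minimal sketch of that idea is shown below; the node classes are purely illustrative (not any particular framework's API). Each node caches its inputs during the forward pass and returns input gradients during the backward pass.

```python
# Tiny computational-graph sketch: y = relu(x * w), then backpropagate dL/dy = 1.

class Multiply:
    def forward(self, x, w):
        self.x, self.w = x, w
        return x * w

    def backward(self, grad_out):
        # d(x*w)/dx = w, d(x*w)/dw = x
        return grad_out * self.w, grad_out * self.x

class ReLU:
    def forward(self, z):
        self.z = z
        return max(z, 0.0)

    def backward(self, grad_out):
        return grad_out if self.z > 0 else 0.0

mul, act = Multiply(), ReLU()
y = act.forward(mul.forward(x=2.0, w=3.0))
grad_z = act.backward(1.0)            # gradient w.r.t. the pre-activation
grad_x, grad_w = mul.backward(grad_z)
print(y, grad_x, grad_w)              # 6.0 3.0 2.0
```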
- The document discusses how a neural network with one hidden layer can approximate any function from R^N to R^M to arbitrary precision, per the universal approximation theorem.
- It provides an example of using a neural network with ReLU activations to approximate a function from R to R; the output is a linear combination of shifted and scaled ReLU units.
- With 4 hidden units, this architecture can represent a bump function by combining 4 differently shifted and weighted ReLU units.
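A small NumPy sketch of that bump construction, with illustrative breakpoints a, b, c, d chosen for the example:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def bump(x, a=0.0, b=1.0, c=2.0, d=3.0, height=1.0):
    """Bump that rises on [a, b], plateaus at `height` on [b, c], falls on [c, d],
    built as a linear combination of 4 shifted and scaled ReLU units."""
    up = (relu(x - a) - relu(x - b)) / (b - a)
    down = (relu(x - c) - relu(x - d)) / (d - c)
    return height * (up - down)

x = np.linspace(-1.0, 4.0, 11)
print(np.round(bump(x), 2))  # ~0 outside [0, 3], ~1 on the plateau [1, 2]
```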
3. Simultaneous Localization and Mapping
- Visual localization
- Estimating one's own position using visual input (without external position information)
- Needed in environments where a GPS signal is unavailable
- Mapping
- Building a map of the surrounding environment from sensor data
- Required when no prior map of the environment is available
- Ex) indoor environment, private area, downtown, disaster area
Cesar Cadena et al., "Past, Present, and Future of Simultaneous Localization and Mapping: Toward the Robust-Perception Age," IEEE Transactions on Robotics, 32(6), pp. 1309-1332, 2016
4. Earlier Inspirations
- Bayesian Filtering based SLAM
- The prototype of the traditional Bayesian-filtering-based SLAM framework emerged in the 1990s
- ex) EKF SLAM, FastSLAM
- Visual Odometry
- The process of estimating the ego-motion of a robot using only the input of a single camera or multiple cameras attached to it
- ex) stereo VO, monocular VO
- Structure from motion
- Investigates the problem of recovering relative camera poses and 3D structure from a set of camera images
- The off-line counterpart of visual SLAM
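As a minimal illustration of visual odometry, the sketch below tracks corners between two consecutive frames with KLT optical flow and recovers the relative camera motion from the essential matrix. The image paths and the intrinsics K are placeholders, and monocular translation is recovered only up to scale.

```python
import cv2
import numpy as np

# Placeholder camera intrinsics and input frames.
K = np.array([[718.856, 0.0, 607.193],
              [0.0, 718.856, 185.216],
              [0.0, 0.0, 1.0]])

img0 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Detect corners in the first frame and track them into the second frame.
pts0 = cv2.goodFeaturesToTrack(img0, maxCorners=1000,
                               qualityLevel=0.01, minDistance=8)
pts1, status, _ = cv2.calcOpticalFlowPyrLK(img0, img1, pts0, None)
pts0 = pts0[status.ravel() == 1]
pts1 = pts1[status.ravel() == 1]

# Estimate the essential matrix with RANSAC and recover relative R, t.
E, mask = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                               prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts0, pts1, K, mask=mask)
print("relative rotation:\n", R)
print("unit translation:", t.ravel())
```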
18. Modern State of the Art Systems
- Sparse SLAM
- Uses only a small selected subset of the pixels (features) from a monocular color camera
- Fast and real-time on a CPU, but produces only a sparse map (a point cloud)
- Landmark-based or feature-based representations
- ORB SLAM
- One of the SOTA frameworks in the sparse SLAM category
- A complete SLAM system for a monocular camera
- Real-time on standard CPUs in a wide variety of environments:
- small hand-held indoor sequences
- drones flying in industrial environments
- cars driving around a city
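For a sense of what "a small selected subset of the pixels" means in practice, here is a minimal ORB feature-extraction sketch with OpenCV; the image path is a placeholder, and this is only the kind of front-end input a sparse system like ORB SLAM consumes, not ORB SLAM itself.

```python
import cv2

# Extract a sparse set of ORB keypoints and binary descriptors from one frame.
img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder image path
orb = cv2.ORB_create(nfeatures=1000)
keypoints, descriptors = orb.detectAndCompute(img, None)
print(len(keypoints), "keypoints;", descriptors.shape, "descriptor matrix")
```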
19. Modern State of the Art Systems
- Dense SLAM
- Uses most or all of the pixels in each received frame
- Or uses depth images from a depth camera
- Produces a dense map, but GPU acceleration is necessary for real-time operation
- Volumetric or surfel (surface element)-based representations
- InfiniTAM
- One of the SOTA frameworks in the dense SLAM category
- A multi-platform framework for real-time, large-scale depth fusion and tracking
- Produces a densely reconstructed 3D scene
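The sketch below shows, in deliberately unoptimized NumPy, the kind of truncated signed distance function (TSDF) integration that volumetric dense systems such as InfiniTAM and KinectFusion perform per frame on the GPU. The grid size, intrinsics, and depth image are made-up values, and the camera is assumed to sit at the world origin.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, origin, voxel_size, depth, K, trunc=0.1):
    """Fuse one depth image (camera at the world origin) into the TSDF volume."""
    nx, ny, nz = tsdf.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    h, w = depth.shape
    # World coordinates of every voxel center.
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1)  # (nx,ny,nz,3)
    z = pts[..., 2]
    u = np.round(fx * pts[..., 0] / z + cx).astype(int)
    v = np.round(fy * pts[..., 1] / z + cy).astype(int)
    valid = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    sdf = d - z                              # signed distance along the ray
    valid &= (d > 0) & (sdf > -trunc)
    tsdf_new = np.clip(sdf / trunc, -1.0, 1.0)
    # Weighted running average per voxel.
    w_new = weights + valid
    upd = valid & (w_new > 0)
    tsdf[upd] = (tsdf[upd] * weights[upd] + tsdf_new[upd]) / w_new[upd]
    weights[:] = w_new

# Tiny example volume and a constant synthetic depth image 1 m away.
grid = (32, 32, 32)
tsdf = np.ones(grid, dtype=np.float32)
weights = np.zeros(grid, dtype=np.float32)
K = np.array([[200.0, 0.0, 80.0], [0.0, 200.0, 60.0], [0.0, 0.0, 1.0]])
depth = np.full((120, 160), 1.0, dtype=np.float32)
integrate_tsdf(tsdf, weights, origin=np.array([-0.5, -0.5, 0.5]),
               voxel_size=1.0 / 32, depth=depth, K=K)
print("voxels updated:", int((weights > 0).sum()))
```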
20. Modern State of the Art Systems
- Direct method (semi-dense SLAM)
- Makes use of pixel intensities directly
- Enables using all of the information in the image
- Produces a semi-dense map
- Higher accuracy and robustness, in particular in environments with few keypoints
- LSD SLAM
- A highly cited SLAM framework in the direct-method SLAM category
- Builds large-scale, consistent maps of the environment
- Accurate pose estimation based on direct image alignment
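The core quantity in direct image alignment is the photometric residual; the sketch below computes it for a single reference pixel under a candidate pose. The intrinsics, pose, and images are illustrative placeholders, and a real system such as LSD SLAM minimizes this residual over many pixels with a robust solver.

```python
import numpy as np

def photometric_residual(I_ref, I_cur, u, v, depth, K, R, t):
    """Intensity difference between a reference pixel and its warped location."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    # Back-project the reference pixel to 3D, transform it, re-project it.
    p_ref = depth * np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    p_cur = R @ p_ref + t
    u2 = fx * p_cur[0] / p_cur[2] + cx
    v2 = fy * p_cur[1] / p_cur[2] + cy
    # Nearest-neighbour lookup for brevity (real systems interpolate).
    return float(I_ref[v, u]) - float(I_cur[int(round(v2)), int(round(u2))])

# Toy data: a 100x100 gradient image used as both frames, and an identity pose,
# so the residual should be exactly zero.
I = np.tile(np.arange(100, dtype=np.float32), (100, 1))
K = np.array([[100.0, 0.0, 50.0], [0.0, 100.0, 50.0], [0.0, 0.0, 1.0]])
r = photometric_residual(I, I, u=40, v=30, depth=2.0,
                         R=np.eye(3), t=np.zeros(3))
print(r)  # 0.0 under the identity pose
```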
21. Modern State of the Art Systems
- Lidar SLAM
- Makes use of lidar sensor input for localization and mapping
- Oriented toward autonomous driving in outdoor environments
- LOAM
- One of the SOTA frameworks in the Lidar SLAM category
- Very low drift error, achieved using edge and planar features
- Low computational complexity
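A sketch of the LOAM-style smoothness score used to separate edge and planar points along a lidar scan line; the synthetic scan is illustrative, and the neighbourhood size and thresholds vary in practice.

```python
import numpy as np

def smoothness(scan, k=5):
    """scan: (N, 3) points ordered along one scan line; returns (N,) scores.
    Large values indicate edges (corners), small values planar patches."""
    n = len(scan)
    c = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = (2 * k) * scan[i] \
            - scan[i - k:i].sum(axis=0) - scan[i + 1:i + k + 1].sum(axis=0)
        c[i] = np.linalg.norm(diff) / (2 * k * np.linalg.norm(scan[i]))
    return c

# Synthetic scan line: two flat wall segments meeting in a corner at x = 5.
xs = np.linspace(0.0, 10.0, 201)
ys = np.where(xs < 5.0, 4.0, 4.0 + (xs - 5.0))
scan = np.stack([xs, ys, np.zeros_like(xs)], axis=1)

c = smoothness(scan)
edge_idx = np.nanargmax(c)
print("sharpest point near x =", round(scan[edge_idx, 0], 2))  # ~5.0 (the corner)
```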
22. Models Used in SLAM
- Motion model
- Describes how the robot's pose changes in response to its motion (control input)
- Sensor model
- Describes the measurements the sensors produce given the robot's pose and the surrounding environment
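Minimal, generic examples of the two models, in textbook 2D form rather than the specific equations from the slides: a velocity motion model that predicts the next pose from a control input, and a range-bearing sensor model that predicts a landmark measurement from a pose.

```python
import numpy as np

def motion_model(pose, control, dt):
    """pose = (x, y, theta); control = (v, w) forward and angular velocity."""
    x, y, theta = pose
    v, w = control
    return np.array([x + v * np.cos(theta) * dt,
                     y + v * np.sin(theta) * dt,
                     theta + w * dt])

def sensor_model(pose, landmark):
    """Predicted range and bearing to a 2D landmark from the given pose."""
    x, y, theta = pose
    dx, dy = landmark[0] - x, landmark[1] - y
    rng = np.hypot(dx, dy)
    bearing = np.arctan2(dy, dx) - theta
    return np.array([rng, bearing])

pose = np.array([0.0, 0.0, 0.0])
pose = motion_model(pose, control=(1.0, 0.1), dt=1.0)    # predict the next pose
z_hat = sensor_model(pose, landmark=np.array([3.0, 2.0]))
print("pose:", pose.round(3), "predicted (range, bearing):", z_hat.round(3))
```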