The interaction between human beings and robotic agents, and the interest in such topics, have grown considerably in recent years. The purpose of this thesis project is to identify a relation between the behaviours of a humanoid robot placed in a social context and the emotional responses of a subject interacting with it. In particular, through the use of Brain-Computer Interface (BCI) and gaze-tracking technologies, the relation between trust towards a robotic agent and its effects on the brain signals has been investigated. To evaluate this relation, the framework uses the acquired brain signals to extract biometric features, such as attention, stress, and mental workload, along with the visual focus. To investigate in this direction, an interactive game session was set up for the human-robot interaction, based on an instance of the well-known Rock-Paper-Scissors game. The experimental results show a correlation between the behaviours of a robotic agent and the effect of trust on the brain signals of the human user. In particular, the emotional response varies depending on the type of behaviour expressed by the robotic agent.
Design and Implementation of Modules for the Extraction of Biometric Parameters in an Augmented BCI Framework
1. UNIVERSITY OF PALERMO
POLYTECHNIC SCHOOL
Department of Industrial and Digital Innovation (DIID)
Computer Science Engineering for Intelligent Systems
Design and Implementation of Modules
for the Extraction of Biometric Parameters
in an Augmented BCI Framework
Master's Degree Thesis of:
Salvatore La Bua
WWW.SLBLABS.COM
March, 2017
2. Introduction
What
Investigate the effects of the interaction with a robotic agent
on the mental status of the human player
through brain signal analysis
Acceptance of a robotic agent by the user
Performance improvements over a classical BCI system
How
Rock-Paper-Scissors game integration
UniPA BCI Framework based on the P300 paradigm
Augmented by
Eye gaze coordinate acquisition
Biometric feature extraction
DESIGN AND IMPLEMENTATION OF MODULES FOR THE EXTRACTION OF
BIOMETRIC PARAMETERS IN AN AUGMENTED BCI FRAMEWORK
S. La Bua 2
3. Introduction
Human-Robot Interaction (HRI)
HRI as a multidisciplinary research topic
Artificial Intelligence
Human-Computer Interaction
Natural Language Processing
Social Sciences
Design
Model of the user's expectations towards a robotic agent
in a human-robot interaction
4. Introduction
Brain-Computer Interfaces (BCI)
Direct communication between
brain and external devices
Non-Invasive
Partially-Invasive
Invasive
Brain Lobes
Frontal: emotions, social behaviour
Temporal: speech, hearing recognition
Parietal: sensory recognition
Occipital: visual processing
Extraction of biometric features from brain signals
5. Introduction
Visual Focus
Importance of eye gaze for direct interaction in a social
environment
Interfaces dedicated to people affected by degenerative
pathologies
Entertainment applications, such as games
Better advertisement placement
6. Methodology
Background Information
Problem
Effects of the behaviour of a robotic agent on the brain signals
Trust context in Human-Robot Interaction
Feature Extraction
Entropy: as a stress indicator
Energy: as a concentration indicator
Mental Workload: as an index of engagement in the task
Brain wave types
δ Delta: 0.5-3 Hz, related to instinct, deep sleep
θ Theta: 3-8 Hz, related to emotions
α Alpha: 8-12 Hz, related to consciousness
β Beta: 12-38 Hz, related to concentration, stress
γ Gamma: 38-42 Hz, related to information processing
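As an illustration of how these band definitions can be applied, the sketch below computes per-band power with a naive pure-Python DFT. This is not the framework's implementation; the function name `band_powers` and the rectangular-window choice are assumptions for illustration.

```python
import math

# EEG bands in Hz, as listed on the slide
BANDS = {
    "delta": (0.5, 3.0), "theta": (3.0, 8.0), "alpha": (8.0, 12.0),
    "beta": (12.0, 38.0), "gamma": (38.0, 42.0),
}

def band_powers(signal, fs):
    """Per-band power via a naive DFT; adequate for short windows."""
    n = len(signal)
    powers = {name: 0.0 for name in BANDS}
    for k in range(1, n // 2):  # positive frequencies, skip DC
        freq = k * fs / n
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = (re * re + im * im) / n
        for name, (lo, hi) in BANDS.items():
            if lo <= freq < hi:
                powers[name] += power
    return powers
```

A pure 10 Hz sine sampled at 128 Hz should concentrate its power in the alpha band.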
7. Methodology
The math behind
Entropy:
H = - Σᵢ pᵢ · log(pᵢ), with pᵢ = xᵢ² / Σⱼ xⱼ²
Energy:
E = Σᵢ |x(i)|²
Mental Workload:
MWL = β / (θ + α)
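The three indicators could be sketched in Python as follows; the function names and the normalisation of squared samples into a distribution for the entropy are illustrative assumptions, and the workload index is the standard beta-over-(theta+alpha) engagement ratio.

```python
import math

def energy(window):
    """Energy of a signal window: sum of squared samples."""
    return sum(v * v for v in window)

def entropy(window):
    """Entropy of a window, normalising squared samples to a distribution."""
    e = energy(window)
    if e == 0.0:
        return 0.0
    h = 0.0
    for v in window:
        p = (v * v) / e
        if p > 0.0:
            h -= p * math.log(p)
    return h

def mental_workload(theta_power, alpha_power, beta_power):
    """Engagement-style index: beta over (theta + alpha) band power."""
    return beta_power / (theta_power + alpha_power)
```

A flat window maximises the entropy (log of the window length), matching the intuition that a concentrated signal is "low stress" under this reading.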
8. The Proposed Solution
Architecture Structure
Action Selection
Direct interface with the user
Acquisition of bio-signals
Acquisition of eye gaze coordinates
Selection of the Base action
Feature Extraction and Analysis
Bio-signals analysis
Feature extraction
Feature analysis
Computation of Intention, Attention,
Stress indices
Response Modulation
Threshold of the Base action by means of the Intention index
Modulation of the resulting action by means of Attention and Stress indices
9. The Proposed Solution
Class Diagram
10. The Proposed Solution
Functional Blocks
Action Selection
Eye-Tracking module
Screen coordinates acquisition
Weighing module
Weighing of the BCI classifier response
precision and the Eye-Tracking module
response precision, by means of the
user's skill level
ID Selection module
Action selection by means of the weighted BCI classifier and Eye-Tracking module
precisions
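A minimal sketch of the weighing idea, assuming per-ID confidence scores from the BCI classifier and the Eye-Tracking module and a skill level in [0, 1]; the linear blending rule and all names are hypothetical.

```python
def weighted_selection(bci_scores, gaze_scores, skill):
    """Blend per-ID confidences from the BCI classifier and the
    Eye-Tracking module; skill in [0, 1], where higher values trust
    the BCI response more (the blending rule is an assumption)."""
    if not 0.0 <= skill <= 1.0:
        raise ValueError("skill must be in [0, 1]")
    combined = {
        action_id: skill * bci_scores[action_id]
                   + (1.0 - skill) * gaze_scores[action_id]
        for action_id in bci_scores
    }
    # ID Selection: pick the action with the highest combined confidence
    return max(combined, key=combined.get)
```

With full skill the BCI response decides; with zero skill the gaze response decides.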
11. The Proposed Solution
Functional Blocks
Feature Extraction
and Analysis
It makes use of external calls
to the MATLAB engine
Features extracted and analysed
Correlation Factor: related to the Intention index
Energy: related to the Attention index
Entropy: related to the Stress index
12. The Proposed Solution
Functional Blocks
Response Modulation
Threshold module
ID Selection validation by
means of Intention index
thresholding
Modulation module
If the selected ID has passed the validation step,
the resulting action is modulated by means of the Attention and Stress indices
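The threshold-then-modulate logic could look like the sketch below; the default threshold value and the multiplicative modulation rule are illustrative assumptions, not the framework's actual parameters.

```python
def modulate_response(selected_id, intention, attention, stress,
                      intention_threshold=0.5):
    """Validate the selected ID against the Intention index, then scale
    the action intensity by the Attention and Stress indices; the
    threshold and the multiplicative rule are assumptions."""
    if intention < intention_threshold:
        return None  # selection rejected by the Threshold module
    # Modulation module: high attention raises, high stress lowers intensity
    intensity = max(0.0, min(1.0, attention * (1.0 - stress)))
    return selected_id, intensity
```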
13. The Proposed Solution
Robotic Controller
14. The Proposed Solution
Utilisation Modes
Basic Mode
Simplest mode
Minimal number of
modules involved
Classical BCI approach
P300 paradigm
classification
Direct Behaviour
15. The Proposed Solution
Utilisation Modes
Hybrid Mode
Advanced mode
Eye-Tracking module
Combination of brain signals and eye gaze
User skill level as weighting parameter
Composite Behaviour
16. The Proposed Solution
Utilisation Modes
Bio-Hybrid Mode
Complete mode
Feature Extraction and Analysis functional block
Response Modulation functional block
Intention, Attention and Stress indices computation
Modulated Behaviour
17. Architecture
Eye-Tracking module
P300 6x6 spelling matrix and 3x3 spelling window areas
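The mapping from gaze coordinates to a spelling window area can be sketched as follows; the grid origin, the square pixel sizes (700x700 or 300x300), and the half-open boundary convention are assumptions for illustration:

```python
# Hypothetical mapping of a gaze point to one of the n-by-n spelling
# window areas. A point that misses the grid entirely corresponds to
# the "external focus" case of the preliminary tests.

def gaze_to_cell(x, y, origin, size, n):
    """Map screen coordinates to a (row, col) cell of an n-by-n grid.

    origin: (x0, y0) top-left corner of the grid on screen, in pixels
    size: side length of the square grid in pixels, e.g. 700 or 300
    Returns (row, col), or None when the gaze point is outside the grid.
    """
    x0, y0 = origin
    if not (x0 <= x < x0 + size and y0 <= y < y0 + size):
        return None                        # external focus
    cell = size / n
    return int((y - y0) // cell), int((x - x0) // cell)
```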
18. Architecture
Eye-Tracking module
Preliminary test results
SUBCATEGORIES FOR SINGLE ELEMENT

                    FOCUS %   CENTRAL FOCUS %   LATERAL FOCUS %   EXTERNAL FOCUS %
3-by-3, 700x700 px  100       99.9000           0.1000            0
3-by-3, 300x300 px  98.4562   93.2697           6.7303            1.5438
6-by-6, 700x700 px  100       84.7408           2.7592            0
6-by-6, 300x300 px  99.5997   75.9943           24.0057           0.4003

SUBCATEGORIES FOR ROW SPAN SELECTION

                    FOCUS %   CENTRAL FOCUS %   LATERAL FOCUS %   EXTERNAL FOCUS %
3-by-3, 700x700 px  74.2632   93.9192           6.0808            25.7368
3-by-3, 300x300 px  77.1340   89.9075           10.0925           22.8660
6-by-6, 700x700 px  69.5037   96.3287           3.6713            30.4963
6-by-6, 300x300 px  75.0674   71.7202           28.2798           24.9326

AVERAGE BY PARAMETER         FOCUS %    CENTRAL FOCUS %
700x700 px                   85.9417    93.7222
300x300 px                   87.5643    82.7229
Gain with larger window      -1.8530%   +13.2966%

AVERAGE BY PARAMETER         FOCUS %    CENTRAL FOCUS %
3-by-3                       87.4634    94.2491
6-by-6                       86.0427    82.1960
Gain with less dense matrix  +1.6512%   +14.6639%
19. Architecture
Data Structures
Generic signal data structure fields
N fields dedicated to the brain signals acquisition
Ch 1 to Ch 16
3 auxiliary fields to carry peculiar information
A, B, C
CH 1   CH 2   · · ·   CH N   A   B   C
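A minimal sketch of such a record in code (field names and types are assumptions; the slides only specify N channel fields plus the three auxiliary fields):

```python
# Sketch of the generic per-sample record: N fields for the acquired
# brain signals plus three auxiliary fields (A, B, C) whose meaning
# depends on the signal type. Names and defaults are illustrative.
from dataclasses import dataclass, field
from typing import List

N_CHANNELS = 16

@dataclass
class SignalSample:
    channels: List[float] = field(default_factory=lambda: [0.0] * N_CHANNELS)
    a: float = 0.0   # e.g. -2 for Baseline Calibration, trial status in a game session
    b: float = 0.0   # e.g. eyes status, flashing tag, or trial sub-phase
    c: float = 0.0   # e.g. gaze-tracking marker
```

One such record per acquired sample keeps the signal and its experiment-phase markers aligned in a single stream.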
20. Architecture
Data Structures
Baseline Calibration signal
                       A (RED)   B (CYAN)      C (MAGENTA)
BASELINE CALIBRATION   -2        EYES STATUS   0
21. Architecture
Data Structures
Game Session signal
               A (RED)        B (CYAN)          C (MAGENTA)
GAME SESSION   TRIAL STATUS   TRIAL SUB-PHASE   GAZE TRACKING
22. Architecture
Data Structures
P300 Calibration signal
P300 Spelling signal
                   A                    B              C
P300 CALIBRATION   CALIBRATION TARGET   FLASHING TAG   0
P300 SPELLING      -1                   FLASHING TAG   0
23. The Framework
Main Interface
1. Basic settings
P300-related settings
Preset modes
2. Main functionalities
Signal quality check
P300 Calibration and
Recognition
Game session control
3. Interface modality
Alphabetic or Symbolic
4. Devices
Eye-Tracker settings
5. Plots and Indicators
Signals and Indices visualisation
6. Output panel
Feedback for the operator
24. The Framework
Baseline Acquisition Interface
Control dialog window and user dialog window
25. The Framework
Game Session Interface
1. Game modality
Fair
Cheat-to-Win/Lose
2. Number of trials per session
Initial Fair sub-session
Middle Cheating sub-session
Terminal Fair sub-session
3. Devices
BCI signal acquisition
Kinect gesture recognition
Play against a robotic agent
4. Session panel
Moves selection
Trial temporal progress
26. Experiments
Introduction
Purpose
Investigate the effects of the interaction with a cheating robotic agent on the mental status of the human player
Rock-Paper-Scissors game session
Scenarios
The robot behaves according to the game's rules
The robot exhibits a cheat-to-win behaviour
The robot exhibits a cheat-to-lose behaviour
Game Session
Initial Fair sub-session → Cheating sub-session → Terminal Fair sub-session
27. Experiments
Set-up
Subjects
16 Subjects
Aged 18-51
Hardware
g.tec g.USBamp
g.tec g.GAMMAbox
g.tec g.GAMMAcap2
Secondary standard PC screen
Tobii EyeX eye tracker
Kinect for Xbox One
Telenoid
Camera(s)
28. Experiments
EEG Electrodes configuration
Channel-to-electrode correspondence
Ch 01 F7
Ch 02 F3
Ch 03 FZ
Ch 04 T3
Ch 05 C3
Ch 06 T5
Ch 07 P3
Ch 08 O1
Ch 09 F8
Ch 10 F4
Ch 11 T4
Ch 12 C4
Ch 13 T6
Ch 14 P4
Ch 15 PZ
Ch 16 O2
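The correspondence above can be expressed as a simple lookup table (a straightforward transcription of the slide; the helper function is illustrative):

```python
# Channel-to-electrode correspondence as listed on the slide
# (10-20 system electrode names; the older T3/T4/T5/T6 labels are
# kept exactly as printed).

CHANNEL_TO_ELECTRODE = {
    1: "F7",   2: "F3",   3: "FZ",   4: "T3",
    5: "C3",   6: "T5",   7: "P3",   8: "O1",
    9: "F8",  10: "F4",  11: "T4",  12: "C4",
    13: "T6", 14: "P4",  15: "PZ",  16: "O2",
}

def electrode(channel):
    """Return the electrode name for a 1-based channel index."""
    return CHANNEL_TO_ELECTRODE[channel]
```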
31. Experiments
Subcategories
Sub-Session Analysis
Analysis of the Baseline signal, Fair and Cheating sub-sessions
Trials Analysis
Single trial analysis for each subject
Intra-Class Comparison
Comparison between Cheat-to-Win and Cheat-to-Lose classes
Average Analysis
Average over all subjects, by class and by sub-sessions
38. Experiments
Trials Analysis
Focus %: Cheat-to-Win vs Cheat-to-Lose
43. Experiments
Average Analysis
Entropy
The entropy values do not show any particular evidence of stress
ENTROPY       FAIR 1             CHEAT              FAIR 2
              MEAN     STD DEV   MEAN     STD DEV   MEAN     STD DEV
CHEAT WIN     3.8584   0.2191    3.8998   0.2540    3.8742   0.1891
CHEAT LOSE    3.7420   0.0850    3.7632   0.1177    3.7304   0.1074
44. Experiments
Average Analysis
Energy
The energy values show a higher concentration level for the Cheat-to-Win class
ENERGY        FAIR 1             CHEAT              FAIR 2
              MEAN     STD DEV   MEAN     STD DEV   MEAN     STD DEV
CHEAT WIN     0.2572   0.2141    0.3032   0.2267    0.2254   0.1951
CHEAT LOSE    0.1498   0.0596    0.1720   0.0948    0.1143   0.0447
45. Experiments
Average Analysis
Mental Workload
The mental workload values show a slightly lower engagement level for the Cheat-to-Win class
MENTAL WL     FAIR 1             CHEAT              FAIR 2
              MEAN     STD DEV   MEAN     STD DEV   MEAN     STD DEV
CHEAT WIN     1.3798   1.1625    0.8988   0.4215    0.9437   0.4570
CHEAT LOSE    1.0923   0.2716    1.0382   0.3229    1.0777   0.3936
46. Experiments
Average Analysis
Visual Focus
The visual focus values show a higher visual attention level for the Cheat-to-Win class
FOCUS %       FAIR 1              CHEAT               FAIR 2
              MEAN      STD DEV   MEAN      STD DEV   MEAN      STD DEV
CHEAT WIN     7.89100   8.93670   9.13020   11.3344   12.1404   20.1567
CHEAT LOSE    4.59710   9.91690   3.24540   7.09430   2.20110   4.79480
48. Conclusions and Future Works
A robotic agent that cheats to win is perceived as more
agentic and human-like than a robot that cheats to lose
Some of the questionnaire results
Trust-related improvement
Biometric features to mitigate or amplify the effects of the robotic agent's behaviour on the subject's emotional response
Questionnaire items: Unusual Behaviour, Fair Play, Intelligence (rated from Strongly Disagree to Strongly Agree)
49. Future Works
Framework Extension
Sensor Aggregation functional block
Galvanic Skin Response (GSR) sensor
Heart Rate (HR) sensor
Other physiological sensors
50. Future Works
Extended Framework
51. Thank you for your attention
Salvatore La Bua
slabua@gmail.com
WWW.SLBLABS.COM