Presentation at ACM SIGCHI Conference on Human Factors in Computing Systems (CHI) 2014. See http://dl.acm.org/citation.cfm?doid=2556288.2557060 for the full paper.
Exploiting Social Environment to Increase Cellphone Awareness | chenjennan
Jen Chen presented on exploiting a phone's social environment to increase awareness of appropriate cellphone settings. The presentation discussed how using Bluetooth to detect surrounding phones could help remind owners to configure their phone settings based on the current context, such as switching to silent in class or meetings to avoid inappropriate interruptions. A survey was also discussed to better understand users' privacy preferences around contextual phone sharing. The conclusion was that the most suitable phone setting depends on social standards and context to balance convenience with avoiding interruptions.
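The context-detection idea described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the function name, device list, and threshold are invented, and a real system would use an actual Bluetooth scan rather than a pre-built list.

```python
# Hypothetical sketch of the rule described above: if many phones are
# detected nearby over Bluetooth (e.g., in a class or meeting), suggest
# switching to silent. The threshold is an invented example value.

def suggest_ringer_mode(nearby_devices, meeting_threshold=3):
    """Return a suggested ringer mode based on the number of
    surrounding phones detected via a Bluetooth scan."""
    if len(nearby_devices) >= meeting_threshold:
        return "silent"   # crowded social context: avoid interruptions
    return "ring"         # few phones around: normal mode is fine

# Example: three other phones detected nearby
print(suggest_ringer_mode(["phone-a", "phone-b", "phone-c"]))  # silent
```

In practice the social standard, not just the device count, determines the appropriate setting, which is why the talk pairs detection with user privacy preferences.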
Matthew Smith presented on using technology to meet speech and language needs. He has over 10 years of experience as a speech-language pathologist. The presentation reviewed how computers, iPhones, iPads, and other devices can be used effectively in therapy. It covered topics like telerehabilitation, interactive games and online resources, augmentative and alternative communication devices, and the benefits and challenges of using technology in speech therapy. Hands-on demonstrations were provided of different programs and applications.
Towards a Pedagogy of Comparative Visualization in 3D Design Disciplines | Bond University
Spatial visualization skills and interpretations are critical in the design professions, but traditionally difficult to teach effectively. Visualization and multimedia presentation studies show positive improvements in learner outcomes for specific learning domains, but the development and translation of a comparative visualization pedagogy between disciplines is poorly understood. This research seeks to identify an approach to developing comparable multimodal and interactive visualizations, and attendant student reflections, for curriculum designers in courses that can utilize visualizations and manipulations. Results from previous use of comparative multimodal visualization pedagogy in a multimedia 3D modelling class are used as a guide to translating the pedagogy to architecture design. The focus is how to guide the use of comparative multimodal visualizations through media properties, lesson sequencing, and reflection to inform effective instruction and learning.
Teaching Complex Theoretical Multi-Step Problems in ICT Networking through 3D... | Bond University
This presentation describes an Augmented Reality simulation to assist understanding of networking and the five-layer TCP/IP model. This is a joint project between Bond University and CQUniversity Australia. The simulation has been constructed using Unity3D (https://unity3d.com/) and Vuforia (https://developer.vuforia.com/) -- see https://youtu.be/0pHJWjG4-aQ for a video demonstration.
Sheehy et al., 4th International Wireless Ready Symposium | Paul Herring
Designing a virtual teacher for non-verbal children with autism: Pedagogical affordances and the influence of teacher voice. Presentation given to the 4th International Wireless Ready Symposium, "Digital Asia: Language, Technology & Community", 2010.
HandHold Adaptive and iPrompts PRO -- for Site License Partners | robtedesco
HandHold Adaptive is a leading developer of apps for autism and special education, including their flagship app iPrompts, one of the first special education apps. They have developed a suite of apps for visual supports, social stories, and speech therapy. Their apps have been independently researched and studies have found students are highly engaged with the technology and teachers find the apps easy to use. HandHold Adaptive offers the apps individually or in a PRO suite, and also provides custom licensing options for schools and districts.
The document discusses supporting human communication through human-computer interaction designs. It describes how communication involves not just transmitting data across distances, but also understanding between people despite psychological barriers. While technologies like video conferencing and machine translation aim to help, they don't always work effectively if they don't consider the nature of human communication. The document argues that designs should be grounded in research on the features and constraints of successful and unsuccessful communication. The goal is for technologies to help bridge psychological distances and truly connect human minds.
This document summarizes a presentation on supporting human communication with human-computer interaction designs. It discusses how communication can be understood both as data transmission and understanding between people. While data transmission is not difficult today, understanding across differences remains challenging. The presentation explores how HCI can help by grounding designs in an understanding of human communication and addressing issues like awareness and understanding. Ultimately, the goal is finding ways for technology to better facilitate understanding between people.
This paper proposes a unified learning framework to jointly address audio-visual speech recognition and manipulation tasks using cross-modal mutual learning. It aims to disentangle representative features from audio and visual input data using advanced learning strategies. A linguistic module is used to extract knowledge across modalities through cross-modal learning. The goal is to recognize speech with the aid of visual information like lip movements, while preserving identity information for data recovery and synthesis tasks.
The document discusses gender considerations in human-computer interaction (HCI) and user-centered design (UCD). It provides an overview of gender HCI as a subfield focusing on designing interactive systems that account for gender differences. Examples are given of applications that could be adapted to support gender differences, such as intelligent and adaptive interfaces, natural interfaces using augmented reality, and examining emotional/social factors in online games/courses.
Graphical vs. textual representations were compared in a requirements comprehension study. Subjects (N=28 students) viewed requirements documents presented graphically, textually, or with both. Results showed no significant difference in comprehension accuracy between representations. However, graphical representations required significantly more visual effort as measured by eye movements. Subjects also preferred graphical representations but found them more difficult. The document structure influenced whether subjects adopted a top-down or bottom-up problem-solving strategy.
International Journal of Engineering and Science Invention (IJESI) is an international journal intended for professionals and researchers in all fields of computer science and electronics. IJESI publishes research articles and reviews within the whole field Engineering Science and Technology, new teaching methods, assessment, validation and the impact of new technologies and it will continue to provide information on the latest trends and developments in this ever-expanding subject. The publications of papers are selected through double peer reviewed to ensure originality, relevance, and readability. The articles published in our journal can be accessed online.
This paper presents a framework called FILTWAM for real-time emotion recognition in e-learning environments using webcams. FILTWAM can recognize emotions from facial expressions and provide timely feedback. It was tested in a proof of concept study where 10 participants mimicked facial expressions corresponding to basic emotions. Video recordings were analyzed by experts and the software, showing the software achieved an overall accuracy of 72% in recognizing emotions from facial expressions. The study validated the use of webcam data for real-time interpretation of emotions during e-learning.
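The "overall accuracy" figure reported for FILTWAM is the fraction of software labels that agree with expert annotations. A minimal sketch of that computation follows; the label sequences below are invented toy data, not the study's records.

```python
# Illustrative computation of overall recognition accuracy: software
# emotion labels compared against expert annotations of the same video
# frames. The labels here are invented example data.

expert_labels   = ["joy", "anger", "sadness", "joy", "surprise",
                   "fear", "joy", "anger", "disgust", "joy"]
software_labels = ["joy", "anger", "joy", "joy", "surprise",
                   "fear", "joy", "sadness", "disgust", "joy"]

matches = sum(e == s for e, s in zip(expert_labels, software_labels))
accuracy = matches / len(expert_labels)
print(f"overall accuracy: {accuracy:.0%}")  # 80% on this toy sample
```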
The Challenges of Affect Detection in the Social Programmer Ecosystem | Nicole Novielli
Invited talk at the University of Hamburg - January 2016
https://www.inf.uni-hamburg.de/home/news/kolloquium/wise15-16/novielli-nicole.html
More info: N. Novielli, F. Calefato, F. Lanubile. "The Challenges of Sentiment Detection in the Social Programmer Ecosystem." In Proc. 7th Int'l Workshop on Social Software Engineering (SSE'15), Sep. 1, 2015, Bergamo, Italy.
Software engineering involves a large amount of social interaction, as programmers often need to cooperate with others, whether directly or indirectly. However, we have become fully aware of the importance of social aspects in software engineering activities only over the last decade. In fact, it was not until the recent diffusion and massive adoption of social media that we could witness the rise of the "social programmer" and the surrounding ecosystem. Social media has deeply influenced the design of software development-oriented tools such as GitHub (i.e., a social coding site) and Stack Overflow (i.e., a community-based question answering site). Stack Overflow, in particular, is an example of an online community where social programmers do networking by reading and answering others' questions, thus participating in the creation and diffusion of crowdsourced knowledge and software documentation.
One of the biggest drawbacks of computer-mediated communication is the difficulty of appropriately conveying sentiment through text. While display rules for emotions exist and are widely accepted for interaction in traditional face-to-face communication, web users are not necessarily prepared for effectively dealing with the barriers that social media places on non-verbal communication. Thus, the design of systems and mechanisms for developing emotional awareness between communicators is an important technical and social challenge for research related to computer-supported collaboration and social computing.
As a consequence, a recent research trend has emerged to study the role of affect in the social programmer ecosystem, by applying sentiment analysis to the content available in sites such as GitHub and Stack Overflow, as well as in other asynchronous communication artifacts such as comments in issue tracking systems. This talk surveys the state-of-the-art in sentiment analysis tools and examines to what extent they are able to detect affective expressions in communication traces left by software developers. A discussion is offered about the advantages and limitations of choosing sentiment polarity and strength as an appropriate way to operationalize affective states in empirical studies. Finally, open challenges and opportunities of affective software engineering are discussed, with special focus on the need to combine cognitive emotion modeling with affective computing and natural language processing techniques to build large-scale, robust approaches for sentiment detection in software engineering.
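A minimal lexicon-based sketch conveys what "sentiment polarity" means when applied to developer comments, in the spirit of the tools the talk surveys. The word lists and scoring here are invented for illustration; real sentiment analysis tools, especially those tuned for software engineering text, are far richer and handle negation, context, and domain jargon.

```python
# Toy lexicon-based polarity classifier for developer comments.
# Word lists are invented examples; real tools use much larger,
# domain-tuned lexicons or trained models.

POSITIVE = {"great", "thanks", "works", "love", "clean"}
NEGATIVE = {"broken", "hate", "fails", "awful", "bug"}

def polarity(comment):
    """Return 'positive', 'negative', or 'neutral' for a comment."""
    words = comment.lower().split()
    score = (sum(w in POSITIVE for w in words)
             - sum(w in NEGATIVE for w in words))
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(polarity("thanks this works great"))     # positive
print(polarity("the build fails with a bug"))  # negative
```

One limitation the talk highlights is exactly what this sketch exposes: polarity and strength alone are a coarse operationalization of affective states, which motivates combining lexical approaches with cognitive emotion modeling.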
This document summarizes a study that used a grounded theory approach to understand user uptake of advanced video conferencing technologies. The researchers observed 17 video conferences and identified themes about socio-technical interactions and group dynamics. A survey found a gap between users' actual experiences and perceptions of potential. Features of in-person interactions were juxtaposed with video conferences to identify differences. Trial annotation tools were limited. Future work aims to develop a large-scale grounded annotation tool to systematically analyze technology use and inform design improvements.
Augmented reality techniques can enhance collaboration by providing spatial cues that improve awareness of partners' actions. Studies found AR collaboration mimicked natural face-to-face interaction more than video conferencing through gestures, speech patterns and subjective feedback. Future collaborative AR systems aim to seamlessly bridge the physical and virtual through wearable displays and tangible interactions to support remote and co-located collaboration.
Virtual Communication in Educational Institutions | Tanya Joosten
Creating and maintaining virtual communities can be challenging due to issues like building trust, anonymity, and feeling detached from others. Communication problems can arise from differences in perception and a lack of nonverbal cues in digital communication. To address these challenges, it is important to build trust, provide feedback, mediate conflicts respectfully, and use multiple communication methods including face-to-face when possible. Changing one's reactions and focusing on similarities can help improve virtual interactions and relationships.
The document discusses designing distributed user interfaces (DUIs) that span multiple devices. It proposes developing a design patterns language to provide interaction designers with options for how to distribute interfaces across devices and a rationale for choosing different design options. Example patterns could illustrate concepts like distribution of interactions and activities. Future work involves clarifying the pattern language concepts, evaluating existing DUIs with the patterns, designing a mobile DUI using the patterns, and assessing the patterns' usefulness for designers.
This document discusses the promises and challenges of video conferencing technology for enabling geographically dispersed collaboration. While such tools aim to provide a communication experience close to face-to-face interaction, surveys found users still saw dissimilarities compared to in-person meetings. To better achieve parity with face-to-face interaction and increase adoption, the document recommends improving infrastructure, enhancing social presence features, and most importantly, developing tools through a user-centered design approach that allows flexible, informal interactions central to in-person communication.
This presentation was on Empathic Mixed Reality, in which we applied Mixed Reality technology to Empathic Computing in our studies. We shared an overview of our research and selected findings. This talk was given at ETRI and KAIST in Daejeon, South Korea, on the 24th of May 2017.
Does your organization have loads of unused data? Information design can turn that data into understandable visuals, giving your members the right information to make choices or learn something new about your industry. Be better positioned to tell your story by learning how to make your infographics clear, compelling, and convincing. Learn how infographics can boost your website's SEO and aid user engagement in this free webinar.
Multimodal Analytics for Real-time Feedback in Co-located Collaboration, EC-T... | Sambit Praharaj
This presentation was part of the EC-TEL conference in Leeds, UK. The full research paper is published in the Springer LNCS proceedings and can be found here:
https://link.springer.com/chapter/10.1007/978-3-319-98572-5_15
2011 | Communication design highlights for service design | francesca // urijoe
This document discusses tools and methods for communicating service design plans and solutions. It outlines several visualization tools that can be used at different stages of a service design process, including moodboards, posters, storyboards, and system maps. These tools aim to strategically represent and visualize a service solution in order to help different partners communicate and develop the solution together. The document also emphasizes the importance of communication for engaging diverse stakeholders and users, and notes that field research is a key first step to enable informed design decisions by organizing relevant insights and data from users.
IRJET- Hand Gesture based Recognition using CNN Methodology | IRJET Journal
This document summarizes a research paper on hand gesture recognition using convolutional neural networks (CNN). The paper aims to develop a system to recognize American Sign Language (ASL) to help facilitate communication for deaf individuals. The system would capture hand gestures via video and translate them into text. The researchers conducted a literature review on previous work using CNNs and 3D convolutional models for sign language recognition. They intend to implement a 3D CNN model on ASL data and analyze the results to improve recognition accuracy for communicating via sign language.
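The core operation of the 3D CNN approach mentioned above is a convolution that slides over time as well as the two spatial dimensions of a video clip. The following sketch shows a single valid-mode 3D convolution in NumPy; the shapes are invented, and a real ASL recognizer would stack many such layers with pooling and a classifier on top.

```python
import numpy as np

# Toy illustration of the 3D convolution at the heart of a 3D CNN:
# one kernel slides over (time, height, width) of a video clip.

def conv3d_valid(clip, kernel):
    """Valid-mode 3D convolution of a (T, H, W) clip with a (t, h, w) kernel."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

clip = np.random.rand(8, 16, 16)   # 8 frames of 16x16 grayscale hand crops
kernel = np.random.rand(3, 3, 3)   # small spatio-temporal filter
features = conv3d_valid(clip, kernel)
print(features.shape)              # (6, 14, 14)
```

Because the kernel spans multiple frames, the output encodes motion across time, which is what lets such models distinguish gestures that look identical in any single frame.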
Talk to Me: Using Virtual Avatars to Improve Remote Collaboration | Mark Billinghurst
The document discusses using virtual avatars to improve remote collaboration. It provides background on communication cues used in face-to-face interactions versus remote communication. It then discusses early experiments using augmented reality for remote conferencing dating back to the 1990s. The document outlines key questions around designing effective virtual bodies for collaboration and discusses various technologies that have been developed for remote collaboration using augmented reality, virtual reality, and mixed reality. It summarizes several studies that have evaluated factors like avatar representation, sharing of different communication cues, and effects of spatial audio and visual cues on collaboration tasks.
Excretion in Humans | Cambridge IGCSE Biology | Blessing Ndazie
This IGCSE Biology presentation covers excretion in humans, explaining the removal of metabolic wastes such as carbon dioxide, urea, and excess salts. Learn about the structure and function of the kidneys, the role of the liver in excretion, ultrafiltration, selective reabsorption, and the importance of homeostasis. Includes diagrams and explanations to help Cambridge IGCSE students prepare effectively for exams!
Effects of various chemical factors on in-vitro growth of tissue culture. Various factors, including environmental, chemical, and physical conditions and photoperiod, affect plant tissue growth in vitro. Slides discuss chemical factors such as macronutrients, micronutrients, and PGRs, and also cover recently discovered chemical factors such as meta-topolin and TDZ.
The Solar System's passage through the Radcliffe wave during the middle Miocene | Sérgio Sacani
Context. As the Solar System orbits the Milky Way, it encounters various Galactic environments, including dense regions of the interstellar medium (ISM). These encounters can compress the heliosphere, exposing parts of the Solar System to the ISM, while also increasing the influx of interstellar dust into the Solar System and Earth's atmosphere. The discovery of new Galactic structures, such as the Radcliffe wave, raises the question of whether the Sun has encountered any of them.
Aims. The present study investigates the potential passage of the Solar System through the Radcliffe wave gas structure over the past 30 million years (Myr).
Methods. We used a sample of 56 high-quality, young (≤30 Myr) open clusters associated with a region of interest of the Radcliffe wave to trace its motion back and investigate a potential crossing with the Solar System's past orbit.
Results. We find that the Solar System's trajectory intersected the Radcliffe wave in the Orion region. We have constrained the timing of this event to between 18.2 and 11.5 Myr ago, with the closest approach occurring between 14.8 and 12.4 Myr ago. Notably, this period coincides with the Middle Miocene climate transition on Earth, providing an interdisciplinary link with paleoclimatology. The potential impact of the Radcliffe wave crossing on Earth's climate is estimated. This crossing could also lead to anomalies in radionuclide abundances, an important research topic in geology and nuclear astrophysics.
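The traceback idea in the Methods section can be illustrated with a deliberately simplified sketch: ignoring the Galactic potential entirely and assuming straight-line motion, one can rewind present-day positions and scan past epochs for the closest approach. This is not the paper's method (which traces open-cluster orbits), and all numbers below are illustrative placeholders, not values from the paper.

```python
import numpy as np

def trace_back(pos_kpc, vel_kpc_per_myr, t_myr):
    """Zeroth-order traceback: position t_myr ago under constant velocity."""
    return pos_kpc - vel_kpc_per_myr * t_myr

def closest_approach(sun_pos, sun_vel, cloud_pos, cloud_vel, t_grid_myr):
    """Scan past epochs for the minimum Sun-cloud separation."""
    seps = [np.linalg.norm(trace_back(sun_pos, sun_vel, t)
                           - trace_back(cloud_pos, cloud_vel, t))
            for t in t_grid_myr]
    i = int(np.argmin(seps))
    return t_grid_myr[i], seps[i]

# Illustrative numbers only: a gas structure 0.3 kpc away today, with a
# relative velocity (~21 km/s) that carried it past the Sun ~14 Myr ago.
sun_pos, sun_vel = np.zeros(3), np.zeros(3)    # heliocentric frame
cloud_pos = np.array([0.3, 0.0, 0.0])          # kpc, present day
cloud_vel = np.array([0.0214, 0.0, 0.0])       # kpc/Myr
t_grid = np.linspace(0.0, 30.0, 3001)          # past 30 Myr
t_min, d_min = closest_approach(sun_pos, sun_vel, cloud_pos, cloud_vel, t_grid)
```

A realistic analysis would integrate orbits in a Galactic potential (e.g. with an orbit-integration library) rather than assume straight-line motion, but the scan-for-minimum-separation logic is the same.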
Hormones and the Endocrine System | IGCSE Biology - Blessing Ndazie
This IGCSE Biology presentation explores hormones and the endocrine system, explaining their role in controlling body functions. Learn about the differences between nervous and hormonal control, major endocrine glands, key hormones (such as insulin, adrenaline, and testosterone), and homeostasis. Understand how hormones regulate growth, metabolism, reproduction, and the fight-or-flight response. A perfect resource for Cambridge IGCSE students preparing for exams!
History of atomic layer deposition (ALD) in a nutshell - Riikka Puurunen
Lecture slides presented at the Aalto University course CHEM-E5175 Materials Engineering by Thin Films (taught by Prof. Ville Miikkulainen), as a visiting lecture on Jan 28, 2025
Contents:
1 Invention of Atomic Layer Epitaxy 1974
2 Microchemistry Ltd and spread of ALE/ALD
3 Independent invention: Molecular Layering, 1960s onward
4 Connecting the two independent development branches of ALD
5 Take-home message
(Extra materials on fundamentals of ALD, assumed as background knowledge)
SlideShare: /slideshow/history-of-atomic-layer-deposition-ald-in-a-nutshell/275984811
Youtube: https://youtu.be/FBLThDjRff0
In vitro means production in a test tube or other similar vessel where culture conditions and medium are controlled for optimum growth during tissue culture.
It is a critical step in plant tissue culture where roots are induced and developed from plant explants in a controlled, sterile environment.
The slides cover factors affecting in-vitro rooting, the steps involved, its stages, and in-vitro rooting of two genotypes of Argania spinosa in different culture media.
(February 25th, 2025) Real-Time Insights into Cardiothoracic Research with In... - Scintica Instrumentation
Traditional methods leave a major gap - they can't fully capture how cells behave in a living, breathing system.
That's where Intravital Microscopy (IVM) comes in. This powerful imaging technology allows researchers to see cellular activity in real-time, with incredible clarity and precision.
But imaging the heart and lungs presents a unique challenge. These organs are constantly in motion, making real-time visualization tricky. Thankfully, groundbreaking advances - like vacuum-based stabilization and motion compensation algorithms - are making high-resolution imaging of these moving structures a reality.
What You'll Gain from This Webinar:
- New Scientific Insights – See how IVM is transforming our understanding of immune cell movement in the lungs, cellular changes in heart disease, and more.
- Advanced Imaging Solutions – Discover the latest stabilization techniques that make it possible to capture clear, detailed images of beating hearts and expanding lungs.
- Real-World Applications – Learn how these innovations are driving major breakthroughs in cardiovascular and pulmonary research, with direct implications for disease treatment and drug development.
- Live Expert Discussion – Connect with experts and get answers to your biggest questions about in vivo imaging.
This is your chance to explore how cutting-edge imaging is revolutionizing cardiothoracic research - shedding light on disease mechanisms, immune responses, and new therapeutic possibilities.
- Register now and stay ahead of the curve in in vivo imaging!
PROTEIN DEGRADATION via ubiquitous pathway - Kaviya Priya A
Protein degradation via the ubiquitous pathway. In general science, a ubiquitous pathway refers to a biochemical or metabolic pathway that is:
1. *Widely present*: Found in many different organisms, tissues, or cells.
2. *Conserved*: Remains relatively unchanged across different species or contexts.
Examples of ubiquitous pathways include:
1. *Glycolysis*: The process of breaking down glucose for energy, found in nearly all living organisms.
2. *Citric acid cycle (Krebs cycle)*: A key metabolic pathway involved in energy production, present in many cells.
3. *Pentose phosphate pathway*: A metabolic pathway involved in energy production and antioxidant defenses, found in many organisms.
These pathways are essential for life and have been conserved across evolution, highlighting their importance for cellular function and survival.
Telescope equatorial mount polar alignment quick reference guide - bartf25
Telescope equatorial mount polar alignment quick reference guide. Helps with accurate alignment and improved guiding for your telescope. Provides a step-by-step process in a summarized format, so the guide can be reviewed and the steps repeated while you are out under clear skies preparing for a night of astrophotography imaging or visual observing.
Biowaste Management and Its Utilization in Crop Production.pptx - Vivek Bhagat
Bio-waste management involves the collection, treatment, and recycling of organic waste to reduce environmental impact. Proper utilization in crop production includes composting, vermiculture, and biofertilizers, enhancing soil fertility and sustainability. This eco-friendly approach minimizes waste, improves crop yield, and promotes sustainable agriculture.
This presentation covers viral diseases in plants and vegetables. It shows how different virus species affect plants, along with the vectors that carry these pathogens.
first law of thermodynamics class 12 (chemistry) final.pdf - ismitguragain527
Kinect-taped communication: Using motion sensing to study gesture use and similarity in face-to-face and computer-mediated brainstorming
1. Kinect-taped Communication:
Using Motion Sensing to Study Gesture Use
and Similarity in Face-to-Face and
Computer-Mediated Brainstorming
Hao-Chuan Wang, Chien-Tung Lai
National Tsing Hua University, Taiwan
[cf. Bos et al., 2002; Setlock et al., 2004; Scissors et al., 2008; Wang et al., 2009]
Computer-mediated communication (CMC) tools are prevalent, but are they all equal?
- Ex. Video vs. Audio
Media properties influence aspects of communication differently
- Task performance, grounding, styles, similarity of language patterns, social processes and outcomes, etc.
How do media influence communication?
4. Studying gesture use in communication
Current methods:
- Videotaping with manual coding.
- Giving specific instructions to participants (e.g., to gesture or not).
- Using confederates, etc.
Problems to solve:
- High cost; labor-intensiveness.
- Resolution of manual analysis: hard to recognize and reliably label small movements.
- Scalability: hard to study arbitrary communication in the wild.
5. "Kinect-taping" method
Like videotaping, we use motion sensing devices, such as Microsoft Kinect, to record hand and body movements during conversations.
- Detailed, easier-to-process representations.
- Behavioral science instrument ("microscope") to study non-verbal communication in ad hoc groups.
- Low cost if automatic measures are satisfactory.
6. Re-appropriating motion sensors in HCI: Sensing-aided user research for future designs
From sensors as design elements to sensors as research instruments to help future designs.
[cf. Mark et al., 2014]
7. A media comparison study
Investigate how people use gestures during face-to-face and computer-mediated brainstorming
Compare three communication media
- Face-to-Face
- Video
- Audio
Figure 1. A sample study setting that compares (a) face-to-face (F2F) to (b) video-mediated communication by using Kinect as a behavioral science instrument.
8. Hypotheses
H1. Visibility increases gesture use
Proportion of gesture: Face-to-Face > Video > Audio [cf. Clark & Brennan, 1991]
H2. Visibility increases accommodation
Similarity between group members' gestures: Face-to-Face > Video > Audio [cf. Giles & Coupland, 1991]
Also explore how gesture use, level of understanding, and ideation productivity correlate.
9. Experimental design
36 individuals, 18 two-person groups
Kinect-taped group brainstorming sessions: Face-to-Face, Video, Audio
Three trials (15 min each) in counterbalanced order
Data analysis: amount and similarity of gestures, level of understanding, productivity
10. How to quantify gestures?
How many gestures are there in a 15 min talk?
18. Feature extraction and representation
Unit motions are represented as feature vectors
- Time length, path length, displacement, velocity, speed, angular movement, etc.
- Features extracted for both hands and both elbows.
73 features extracted for each unit motion.
Similarity between unit motions: cosine value between the two vectors.
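The similarity measure named on this slide (cosine between 73-dimensional feature vectors) can be sketched in a few lines. The feature values below are random placeholders, not real Kinect measurements.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two unit-motion feature vectors."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder 73-dimensional feature vectors (time length, path length,
# displacement, velocity, ... for both hands and both elbows).
rng = np.random.default_rng(0)
motion_a = rng.random(73)                    # one unit motion
motion_b = motion_a + 0.05 * rng.random(73)  # a slightly perturbed copy
motion_c = rng.random(73)                    # an unrelated motion

# A perturbed copy of a motion should score higher than an unrelated one.
sim_ab = cosine_similarity(motion_a, motion_b)
sim_ac = cosine_similarity(motion_a, motion_c)
```

Cosine similarity compares the direction of the feature vectors rather than their magnitude, so two motions with similar shape but different overall scale still score as similar.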
19. Validating the similarity metric
Randomly select motion queries from the Kinect-taped motion database, retrieve similar and dissimilar motions, and compare the machine ranking against human rankings of the same motions.
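One simple way to compare a machine ranking against a human ranking, as in this validation step, is a rank correlation. The slides do not name the statistic used, so the Spearman coefficient below is an illustrative choice, and the rankings are made up.

```python
def spearman(rank_a, rank_b):
    """Spearman rank correlation between two rankings of the same n items.

    rank_a[i] and rank_b[i] are the ranks assigned to item i (no ties).
    Returns 1.0 for identical rankings, -1.0 for fully reversed ones.
    """
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Made-up rankings of five retrieved motions: the judges swap items 2 and 3.
machine_rank = [1, 2, 3, 4, 5]
human_rank = [1, 3, 2, 4, 5]
rho = spearman(machine_rank, human_rank)  # near-perfect agreement
```

A high coefficient across many queries would indicate that the automatic cosine-based ranking tracks human judgments of motion similarity.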
21. Key Results
H1: Amount of gesture use
H2: Similarity between group members
Associations:
- Amount of gesture and understanding
- Amount of gesture and ideation productivity
- Gesture similarity and ideation productivity
22. Visibility on proportion of gesture use
[Bar chart: proportion of gesture use (%) for Face-to-face, Video, and Audio]
H1 not supported. Media did not influence percentage of gesture.
People gesture as much in Audio as in F2F and Video.
23. Association between self-gesture and level of understanding
[Plots: model-predicted understanding vs. proportion of individual's own gesture use (%), for Audio, F2F, and Video]
Non-communicative function of gesture.
Understanding correlates with self-gesture but not partner-gesture.
Stronger correlation with reduced or no visibility.
24. Similarity between group members
[Bar chart: between-participant gestural similarity (roughly 0.46-0.55) for Face-to-face, Video, and Audio]
H2 supported. Similarity: F2F > Video > Audio.
People gesture more similarly when they can see each other.
26. Summary and implications
Kinect-taping method: motion sensing for studying non-verbal behaviors in CMC.
Media comparison study:
- Visibility influences similarity but not amount of gesture.
- Only self-gesture correlates with understanding.
- Gesture doesn't seem to convey much meaning to the partner; seeing the partner is not crucial to understanding.
27. Summary and implications (cont.)
Kinect-taping method: study communication of ad hoc groups in the wild; distributed deployment studies of CMC tools; cross-lingual and cross-cultural communication.
Media comparison study:
- The value of video may be relatively limited to the social and collaborative aspect (similarity etc.).
- Feedback that promotes self-gesturing may help understanding.
28. Acknowledgement
Microsoft Research Asia (UR FY13-RES-OPP-027)
Ministry of Science and Technology, Taiwan (NSC 102-2221-E-007-073-MY3)
Contact: Hao-Chuan Wang 王浩全, haochan@cs.nthu.edu.tw