SlideShare feed of slideshows by user carologic
Last updated: Fri, 24 Sep 2021 21:21:13 GMT

Navigating the Complexity of Trust at UXPA Boston 2021
/slideshow/navigating-the-complexity-of-trust-at-uxpa-boston-2021/250290430

Trust is complex and transient. Context, safety, privacy, respect, and many other considerations are built into each individual's concept of trust. How can we examine this complexity in a way that supports the work of making digital experiences? What research supports this work, and how can we use practices of responsible development to make systems that earn appropriate levels of trust? What is an appropriate level of trust for emerging technologies such as machine learning systems? This talk examines trust and how UX practitioners can define and measure it.

Carol J. Smith, Carnegie Mellon University, SEI. September 24, 2021. Twitter: @carologic @sei_etc

Implementing Ethics: Developing Trustworthy AI, PyCon 2020
/slideshow/implementing-ethics-developing-trustworthy-ai-pycon-2020/232544532

Ethics discussions abound, but translating "do no harm" into our work is frustrating at best and obfuscatory at worst. We can agree that keeping humans safe and in control is important, but implementing ethics is intimidating work. Learn how to wield your preferred technology ethics code to make an AI system that is accountable, de-risked, respectful, secure, honest, and usable. The presenter introduces the topic of ethics and then steps through a user experience (UX) framework to guide AI development teams successfully through this process. Presented virtually for PyCon 2020, which was to be held in Pittsburgh, PA, but was reorganized online due to COVID-19.

Published: Fri, 24 Apr 2020 00:44:34 GMT
Designing Trustworthy AI: A User Experience Framework at RSA 2020
/slideshow/designing-trustworthy-ai-a-user-experience-framework-at-rsa-2020/231061488

Artificial intelligence (AI) holds great promise to empower us with knowledge and scaled effectiveness. To harness the power of AI systems, we can -- and must -- ensure that we keep humans safe and in control. This session introduces a new user experience (UX) framework to guide the creation of AI systems that are accountable, de-risked, respectful, secure, honest, and usable. Presented at the RSA Conference 2020 in San Francisco, CA, on February 28, 2020.

Published: Sun, 29 Mar 2020 17:24:26 GMT
IA is Elemental: People are Fundamental at World IA Day 2020 Pittsburgh
/slideshow/ia-is-elemental-people-are-fundamental-at-world-ia-day-2020-pittsburgh/228952546

Information architects work in a system with ourselves at the center. We are fundamental to making great experiences, and as such we must care for ourselves in order to best represent the people using the systems we create. Prioritizing the needs of users comes next, and with that, protecting them by caring about diversity, inclusion, and ethics. Finally, we can collaborate with the colleagues and communities that influence our work by educating them about IA work.

Published: Sun, 23 Feb 2020 22:29:22 GMT
Gearing up for Ethnography, Michigan State, World Usability Day 2019
/slideshow/gearing-up-for-ethnography-michigan-state-world-usability-day-2019/193791243

Prepping for UX research can be intimidating, and time and resources are never sufficient. Carol shares her personal experiences in the field, both good and bad. She has learned the hard way, doing observations in moving vehicles, coal mines, hospitals, schools, homes, and offices. She also shares interesting anecdotes from colleagues and reviews both ethical and behavioral standards for researchers. The key is to prepare well, stay flexible, and adapt to the situation. Presented at World Usability Day 2019 at Michigan State University with Michigan UXPA.

Published: Fri, 15 Nov 2019 03:09:38 GMT
Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development at AAAI Symposium
/slideshow/designing-trustworthy-ai-a-humanmachine-teaming-framework-to-guide-development-at-aaai-symposium/192742311

"Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development" is a paper presented at the AAAI 2019 Fall Symposium on AI in Government and the Public Sector (sponsored by the Association for the Advancement of Artificial Intelligence) in Washington, DC, November 7-9, 2019.

Artificial intelligence (AI) holds great promise to empower us with knowledge and augment our effectiveness. We can -- and must -- ensure that we keep humans safe and in control, particularly with regard to government and public sector applications that affect broad populations. How can AI development teams harness the power of AI systems and design them to be valuable to humans? Diverse teams are needed to build trustworthy artificially intelligent systems, and those teams need to coalesce around a shared set of ethics. There are many discussions in the AI field about ethics and trust, but few frameworks are available to guide people creating these systems. The Human-Machine Teaming (HMT) Framework for Designing Ethical AI Experiences described in this paper, when used with a set of technical ethics, guides AI development teams to create AI systems that are accountable, de-risked, respectful, secure, honest, and usable. Activities to understand people's needs and concerns are introduced along with themes to support the team's efforts; for example, usability testing can help determine whether the audience understands how the AI system works and whether the system complies with the HMT Framework. The HMT Framework is based on reviews of existing ethical codes and best practices in human-computer interaction and software development. Human-machine teams are strongest when human users can trust AI systems to behave as expected, safely, securely, and understandably. Using the HMT Framework to design trustworthy AI systems will help teams identify potential issues ahead of time and make great experiences for humans.

Published: Tue, 12 Nov 2019 14:08:39 GMT
On the Road: Best Practices for Autonomous Experiences at WUC19
/slideshow/on-the-road-best-practices-for-autonomous-experiences-at-wuc19/182784197

Presented at the World Usability Congress in Graz, Austria, on October 16, 2019. Self-driving vehicles are still a rarity in most cities, but as they become more common, and as more and more humans interact with them, we need to consider the wide variety of human experiences that occur within and alongside these vehicles. What information does the driver need when the vehicle is getting started vs. on its way? What information engenders trust, and how much is too much? What changes with experience level and comfort? How do we account for reliable, easy commutes as well as people who use vehicles differently each day? How do these vehicles interact with other drivers, pedestrians, bicyclists, and society in general?

Published: Wed, 16 Oct 2019 14:01:23 GMT
Designing More Ethical and Unbiased Experiences - Abstractions
/slideshow/designing-more-ethical-and-unbiased-experiences-abstractions/167235562

Presented at Abstractions, Pittsburgh, PA, by Karen Bachmann and Carol Smith, August 23, 2019. Humans are biased, and sadly, we are not always able to filter our deeply ingrained biases. UX designers and researchers have long understood this, but as we watch major technology companies make significant mistakes with regard to ethics and bias, the cost of not accounting for them is becoming more evident and widely known. Even knowing what pitfalls exist, we still miss opportunities for doing good because our own human biases obscure our vision. We need tools to explore and challenge our biases in a productive way to deliver better outcomes. We need a set of shared values within teams and, ultimately, across the industry to promote our common responsibility to deliver the greatest benefit while causing the least harm. How can we work together to intensify the focus on ethical design? In this session, we'll share ways you can empower yourself and your teams to do the right thing for people.

Published: Wed, 28 Aug 2019 18:46:17 GMT
Designing More Ethical and Unbiased Experiences - Abstractions from Carol Smith
]]>
4015 17 https://cdn.slidesharecdn.com/ss_thumbnails/ethicsai-abstractions-finalwithreferences-190828184617-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Dynamic UXR: Ethical Responsibilities and AI. Carol Smith at Strive in Toronto /slideshow/dynamic-uxr-ethical-responsibilities-and-ai-carol-smith-at-uxr-strive-in-toronto/155713104
Artificially intelligent (AI) technologies are exciting, and with them come many new user experience research (UXR) responsibilities. How do we understand and clarify our users' need for transparency, control, and access (and more) when the system is constantly changing? These dynamic systems are already part of our everyday lives and quickly becoming part of our jobs. What are our responsibilities with regard to ethics and protecting users from bias? Presented at Strive, June 7, 2019, in Toronto, Ontario, Canada. Strive is the 2019 UX Research Conference presented by the UX Research Collective Inc.

Mon, 15 Jul 2019 17:38:04 GMT
Prototyping for Beginners - Pittsburgh Inclusive Innovation Summit 2019 /slideshow/prototyping-for-beginners-pittsburgh-inclusive-innovation-summit-2019/138956327
To design for inclusion, we often must try out different ideas. In this interactive session you'll learn about all types of prototyping and how to get feedback on your ideas from your users. This session will briefly introduce a variety of prototypes, materials, and evaluation methods for early learning. Participants will have time to build a quick prototype and practice getting feedback on it. We'll cover designing for accessibility and inclusion even at the prototype stage. You'll have the information you need to launch your ideas as early as possible to learn from the experience and improve more quickly. Presented at the Pittsburgh Inclusive Innovation Summit, March 30, 2019, held at Point Park University.

Sun, 31 Mar 2019 19:24:30 GMT
Navigating challenges in IA people management at IAC19 /carologic/navigating-challenges-in-ia-people-management-at-iac19
Whether you are building a team, managing experienced practitioners, or navigating career changes, managing a team of creative and analytical IA practitioners can be challenging. The welcome shift toward diverse and inclusive hiring practices can add even more challenges. Learn how an experienced manager navigated through painful challenges and wonderful successes while managing large and small design departments in organizations with employees around the world. Presented at the IA Conference 2019 in Orlando, Florida, by Carol Smith.

Sat, 16 Mar 2019 14:22:46 GMT
What can DesignOps do for you? by Carol Smith at TLMUX in Montreal /slideshow/what-can-designops-do-for-you-by-carol-smith-at-tlmux-in-montreal/134328791
You have probably seen the terms DesignOps and/or ResearchOps float by in your social media queue. These teams make designing (and researching) at scale beautifully efficient and successful. Carol steps through how these teams work, the types of activities they perform, the situations they are helpful for, and ways you can leverage these kinds of programs in your organization. She shares examples from her own experience and stories from other organizations that are using DesignOps to do effective design at scale. Presented at Tout le monde UX in Montreal, Quebec, Canada, on February 28, 2019. http://toutlemonde-ux.com/

Mon, 04 Mar 2019 00:22:39 GMT
Designing Trustable AI Experiences at IxDA Pittsburgh, Jan 2019 /slideshow/designing-trustable-ai-experiences-at-ixda-pittsburgh-jan-2019/129229522
How can we, as designers, create artificially intelligent systems that don't hurt humans? What should we think about to make these systems transparent? What information needs to be available to users to engender trust? This talk proposes a model for talking about the major decision points in building an AI. Carol will tackle the biggest challenges inherent in AI, including issues of ethics and the implications for your work. Wondering why you keep hearing about the Trolley Problem? Has someone claimed that your AI is nearly sentient? Bring your questions and curiosity for this engaging evening, and she'll warn you before spoilers of The Good Place (©2019 NBCUniversal Media, LLC).

Fri, 25 Jan 2019 14:39:09 GMT
Designing Trustable AI Experiences at World Usability Day in Cleveland /slideshow/designing-trustable-ai-experiences-at-world-usability-day-in-cleveland/123047302
How can designers improve trust in cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent in AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can distinguish between simply smart systems and AI. Presented at the World Usability Day 2018 celebration in Cleveland, Ohio.

Thu, 15 Nov 2018 02:45:49 GMT
Gearing up for Ethnography at Midwest UX 2018 /slideshow/gearing-up-for-ethnography-at-midwest-ux-2018/123047160
We are all low on time and resources, and our UX research must occur wherever and whenever possible. Carol will share her personal experiences in the field, both good and bad. She has learned the hard way, doing observations in moving vehicles, coal mines, hospitals, schools, homes, and offices. She will also share interesting anecdotes from colleagues and review both ethical and behavioral standards for researchers. The key is to prepare well, to be flexible, and to adapt to the situation. Presented at Midwest UX 2018, held in Chicago, IL.

Thu, 15 Nov 2018 02:43:43 GMT
Designing AI for Humanity at dmi:Design Leadership Conference in Boston /slideshow/designing-ai-for-humanity-at-dmidesign-leadership-conference-in-boston/123047052
As design leaders, we must equip our teams with the skills and knowledge to take on the new and exciting opportunities that building powerful AI systems brings. Dynamic systems require transparency regarding data provenance, bias, training methods, and more to gain users' trust. Carol will cover these topics and challenge us, as design leaders, to represent our fellow humans by provoking conversations about critical ethical and safety needs. Presented at the dmi:Design Leadership Conference in Boston in October 2018.

Thu, 15 Nov 2018 02:41:34 GMT
Product Design in Agile Environments: Making it Work at ProductCamp Pittsburgh /slideshow/product-design-in-agile-environments-making-it-work-at-productcamp-pittsburgh/123046915
Can product design work in Agile environments? Yes! Balancing people and process can be complicated, and in this talk Carol will provide guidance to make it work. You can inform good design with strong user experience (UX) research and support continuous releases in a fast-paced environment. We'll look at ways to achieve a flexible approach that meets the needs of these seemingly conflicting efforts. Participants will come away with the tools they need to successfully integrate design thinking methods in an Agile environment, one sprint at a time. Selected for presentation at ProductCamp Pittsburgh in September 2018 at Carnegie Mellon University (CMU).

Thu, 15 Nov 2018 02:38:55 GMT
Demystifying Artificial Intelligence: Solving Difficult Problems at ProductCamp Pittsburgh /slideshow/demystifying-artificial-intelligence-solving-difficult-problems-at-productcamp-pittsburgh/123046743
Artificially intelligent systems are becoming part of our everyday lives. This session will answer your questions about artificial intelligence, machine learning, and the ethical conflicts and implications inherent in these technologies. Topics covered will include: bias in data; how to focus on the user experience; what is necessary to build a good cognitive computing system; data needs; levels of accuracy; making AIs safe and secure; and ethics in AI and our role in leading those conversations. Carol will propose simple models for thinking about these systems and provide time for questions. You will walk away with an awareness of the weaknesses of AI and knowledge of how these systems work. Selected by the audience for presentation at ProductCamp Pittsburgh in September 2018.

Artificially intelligent systems are becoming part of our everyday lives. This session will answer your questions about artificial intelligence, machine learning, and the ethical conflicts and implications inherent in these technologies. Topics covered will include: discussions of bias in data; how to focus on the user experience; what is necessary to build a good cognitive computing system; data needs; levels of accuracy; making safe and secure AIs; and discussions on ethics in AI and our role in leading those conversations. Carol will propose simple models for thinking about these systems and provide time for questions. You will walk away with an awareness of the weaknesses of AI and the knowledge of how these systems work. Selected by the audience to be presented at ProductCamp Pittsburgh in September 2018]]>
Thu, 15 Nov 2018 02:35:23 GMT /slideshow/demystifying-artificial-intelligence-solving-difficult-problems-at-productcamp-pittsburgh/123046743 carologic@slideshare.net(carologic) Demystifying Artificial Intelligence: Solving Difficult Problems at ProductCamp Pittsburgh carologic Artificially intelligent systems are becoming part of our everyday lives. This session will answer your questions about artificial intelligence, machine learning, and the ethical conflicts and implications inherent in these technologies. Topics covered will include: discussions of bias in data; how to focus on the user experience; what is necessary to build a good cognitive computing system; data needs; levels of accuracy; making safe and secure AIs; and discussions on ethics in AI and our role in leading those conversations. Carol will propose simple models for thinking about these systems and provide time for questions. You will walk away with an awareness of the weaknesses of AI and the knowledge of how these systems work. Selected by the audience to be presented at ProductCamp Pittsburgh in September 2018 <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/aiexp-pghprodcamp-20180922-181115023523-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> Artificially intelligent systems are becoming part of our everyday lives. This session will answer your questions about artificial intelligence, machine learning, and the ethical conflicts and implications inherent in these technologies. Topics covered will include: discussions of bias in data; how to focus on the user experience; what is necessary to build a good cognitive computing system; data needs; levels of accuracy; making safe and secure AIs; and discussions on ethics in AI and our role in leading those conversations. Carol will propose simple models for thinking about these systems and provide time for questions. 
You will walk away with an awareness of the weaknesses of AI and the knowledge of how these systems work. Selected by the audience to be presented at ProductCamp Pittsburgh in September 2018
Demystifying Artificial Intelligence: Solving Difficult Problems at ProductCamp Pittsburgh from Carol Smith
]]>
1440 20 https://cdn.slidesharecdn.com/ss_thumbnails/aiexp-pghprodcamp-20180922-181115023523-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
UX in the Age of AI: Leading with Design UXPA2018 /slideshow/ux-in-the-age-of-ai-leading-with-design-uxpa2018/103162621 ux-ai-leadwdesign-uxpa18-v1-180626171938
How can designers improve trust in cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent in AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can strive to make great AI systems.]]>

How can designers improve trust in cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent in AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can strive to make great AI systems.]]>
Tue, 26 Jun 2018 17:19:38 GMT /slideshow/ux-in-the-age-of-ai-leading-with-design-uxpa2018/103162621 carologic@slideshare.net(carologic) UX in the Age of AI: Leading with Design UXPA2018 carologic How can designers improve trust in cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent in AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can strive to make great AI systems. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/ux-ai-leadwdesign-uxpa18-v1-180626171938-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> How can designers improve trust in cognitive systems? What can we do to make these systems transparent? What information needs to be transparent? The biggest challenges inherent in AI will be discussed, specifically the ethical conflicts and the implications for your work, along with the basics of these concepts so that you can strive to make great AI systems.
UX in the Age of AI: Leading with Design UXPA2018 from Carol Smith
]]>
8961 23 https://cdn.slidesharecdn.com/ss_thumbnails/ux-ai-leadwdesign-uxpa18-v1-180626171938-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
IA in the Age of AI: Embracing Abstraction and Change at IA Summit 2018 /carologic/ia-in-the-age-of-ai-embracing-abstraction-and-change-at-ia-summit-2018 iaai-embracingabstchange-ias18-final-180324192356
This session focuses on the questions we need to ask to create good, ethical experiences for our users. Information Architects must push to: keep people at the center of our work; lead with our users' goals; and prioritize ease of use, usability, findability, effectiveness, and efficiency. We must work to mature organizations' approaches: push back on technology-first ideas, and lead on ethics for our users and for humanity.]]>

This session focuses on the questions we need to ask to create good, ethical experiences for our users. Information Architects must push to: keep people at the center of our work; lead with our users' goals; and prioritize ease of use, usability, findability, effectiveness, and efficiency. We must work to mature organizations' approaches: push back on technology-first ideas, and lead on ethics for our users and for humanity.]]>
Sat, 24 Mar 2018 19:23:56 GMT /carologic/ia-in-the-age-of-ai-embracing-abstraction-and-change-at-ia-summit-2018 carologic@slideshare.net(carologic) IA in the Age of AI: Embracing Abstraction and Change at IA Summit 2018 carologic This session focuses on the questions we need to ask to create good, ethical experiences for our users. Information Architects must push to: keep people at the center of our work; lead with our users' goals; and prioritize ease of use, usability, findability, effectiveness, and efficiency. We must work to mature organizations' approaches: push back on technology-first ideas, and lead on ethics for our users and for humanity. <img style="border:1px solid #C3E6D8;float:right;" alt="" src="https://cdn.slidesharecdn.com/ss_thumbnails/iaai-embracingabstchange-ias18-final-180324192356-thumbnail.jpg?width=120&amp;height=120&amp;fit=bounds" /><br> This session focuses on the questions we need to ask to create good, ethical experiences for our users. Information Architects must push to: keep people at the center of our work; lead with our users' goals; and prioritize ease of use, usability, findability, effectiveness, and efficiency. We must work to mature organizations' approaches: push back on technology-first ideas, and lead on ethics for our users and for humanity.
IA in the Age of AI: Embracing Abstraction and Change at IA Summit 2018 from Carol Smith
]]>
7807 19 https://cdn.slidesharecdn.com/ss_thumbnails/iaai-embracingabstchange-ias18-final-180324192356-thumbnail.jpg?width=120&height=120&fit=bounds presentation Black http://activitystrea.ms/schema/1.0/post http://activitystrea.ms/schema/1.0/posted 0
Carol is recognized globally as a leader in UX and is a passionate evangelist who encourages teams to make data-informed decisions with information gathered directly from the users. She has presented over 130 talks and workshops around the world on a variety of UX-related topics. carologic.com