Sponsored Project
Summary
I facilitated meetings with Salesforce sponsors and led research activities on a semester-long student project, designing 3 features with the Slack design system to address data privacy and trust challenges in Slack AI integrations, particularly in the educational context.
Impact
Trust scores increased by 40%, measured by post-testing surveys.
My Contribution
Facilitated Meetings
Designed Hi-Fi Screens
Led Research
Prepared Documentation
Team
8 UX Designers
2 Senior UX Sponsors from Salesforce
Timeline
5 Months
Tools Used
Figma
Notion
Zoom
Google Docs
Our 3 solutions built trust in Slack AI for the education industry by addressing student concerns around data privacy and transparency.
Screenshot from Salesforce website
We were given a broad and ambiguous problem statement: "Build Trust With Data Privacy For Salesforce AI Tools".
We explored all of Salesforce's tools and realized we had to scope down.
All tools by Salesforce
To scope down, we did stakeholder analysis and decided to focus on the educational context, as Salesforce had not explored it much and we had access to students for primary research.
Conducted with graduate students to understand how they use Slack and their concerns about AI and data privacy.
Analyzed how different AI tools provide data privacy and build trust with users about their data. We learned that most companies follow GDPR and CCPA guidelines and provide documentation on how their AI is trained.
Explored ethical AI practices and performed affinity mapping for themes that emerged from literature review, interviews and competitor analysis.
Realized we did not have access to Slack AI, as it was not publicly available, so we decided to explore speculative design methods.
Brainstormed with a group of designers about the worst case and best case scenarios for the use of Slack AI.
Identified 3 scenarios we wanted to focus on: 2 negative cases and 1 positive case, and scoped our designs to those.
All 8 of us made sketches for 3 scenarios to ideate and come up with concepts.
Alerts the user while they are typing a message, making them aware before it gets flagged.
Explanations for AI flagging decisions where it gives users guidance as soon as a message is sent.
Provides transparency when content gets flagged by showing users how the AI is trained and where the flagged content was detected.
To test how users would behave, we ran Wizard-of-Oz sessions, acting as the AI ourselves, using paper prototypes and actual Slack channels.
Participants noted something interesting: different channels have different tones.
We built a strong collaboration with sponsors through weekly check-in calls, where they provided valuable critiques and insights on our designs.
We refined our designs using the Slack design system and addressed feedback on each concept to resolve the privacy concerns.
Users can enable/disable alerts for sensitive topics and control the topics and contexts they would want the alerts for.
AI explains flagged content and allows user appeals, giving users more control over their data and the AI.
Users set channel tone (e.g., professional, relaxed) for better AI context understanding.
We tested our final prototypes which resulted in a 40% improvement in user trust scores, as measured through post-testing surveys.
We presented our slide deck to the sponsors and prepared comprehensive documentation of all our research insights.
Planned future steps Salesforce might take to enhance this product.
Big thanks to all my teammates, our sponsors from Salesforce and our professor for supporting us through this project.