Collaborated with Salesforce sponsors on a semester-long student project to address data privacy and trust challenges in AI-powered Slack integrations, particularly in educational environments. The project involved researching user concerns, analyzing potential risks, and proposing ethical AI solutions to enhance transparency and trust in data privacy.
What did I do?
Led research activities such as the literature review and Wizard of Oz testing, and designed the interface using the Slack Design System.
Why was it done?
To explore data privacy and trust issues with AI in Slack, particularly in educational settings.
How did the solution help?
The solutions we developed gave users more transparency and control over their data when using Slack AI in an educational setting.
What did I learn?
I learned about project management, facilitating sponsor meetings, and navigating speculative design. I also improved my visual design skills by creating wireframes with the Slack Design System.
A socio-technological challenge
"How do we enable data privacy and trust for users while using Salesforce AI tools?"
We chose Slack because it was a tool we were familiar with and had access to. Slack primarily targets business operations and lacks a deep understanding of educational use cases and their issues, so we took this opportunity to explore the use of Slack in an educational setting.
Admins
Students
Teachers
Researchers
We identified four main stakeholders: students, teachers, researchers, and student administration. After researching the different activities each performs on Slack, we narrowed our scope to graduate students, as we had easy access to them for research and testing.
AI Flags Data
AI may flag and block data if it detects negative language.
Lack of Clarity and Guidance
Schools need to be clear about the types of data they collect, and why they collect it.
Incorrect Summaries
AI can summarize data incorrectly, and its labels can cause misunderstandings.
We examined how AI is implemented in platforms like Microsoft Teams, Google Chat, and Zoom, focusing on data protection practices.
All of these platforms follow GDPR and CCPA privacy regulations, and most provide documentation pages covering their data privacy guidelines and how users can control their data. We realized that in-app guidance helps users easily understand how an app works, and is also a smart way to educate them about data privacy policies and user data controls.
Despite our research, we still didn't know exactly how Slack's AI tool could pose risks to students directly, as it had yet to launch. This led us to speculate on potential problems so we could design solutions in advance, using a Black Mirror Brainstorming method to imagine worst-case scenarios.
We gathered a group of eight designers and held a discussion on some of the worst-case and best-case scenarios Slack AI could create for a student.
A Slack thread member makes a politically or racially improper comment; AI flags and reports them!
What could go wrong if AI is reading your conversation?
Tracking emotions through conversations and making deductions from that data.
Failing to understand the tone of a conversation and mislabeling it (e.g., as bullying)!
When slang or certain lingo is used in discussions, AI can misinterpret the meaning, producing improper outcomes when prompted.
An inline privacy alert that notifies students while they type a message.
Contextual training that teaches users how to use the AI, providing more control over and transparency into the system.
A detection bot that flags sensitive information once a message is sent (a minimal sketch follows below).
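To make the detection-bot idea concrete, here is a minimal sketch assuming a bot built with Slack's Bolt for Python SDK. The sensitive-data patterns, labels, and alert wording are hypothetical stand-ins for whatever categories a school would actually configure, not part of our delivered designs.

```python
import os
import re

from slack_bolt import App

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)

# Hypothetical examples of sensitive-data patterns a school might configure.
SENSITIVE_PATTERNS = {
    "an email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "a student ID": re.compile(r"\bstudent\s*id\W*\d+", re.IGNORECASE),
}

@app.event("message")
def detect_sensitive_info(event, client):
    # Ignore messages posted by bots (including this one) to avoid loops.
    if event.get("bot_id") or "text" not in event:
        return
    hits = [label for label, rx in SENSITIVE_PATTERNS.items() if rx.search(event["text"])]
    if hits:
        # An ephemeral reply is visible only to the author, keeping the nudge private.
        client.chat_postEphemeral(
            channel=event["channel"],
            user=event["user"],
            text=f"Heads up: your message may contain {' and '.join(hits)}, "
                 "which could be picked up by AI summaries.",
        )

if __name__ == "__main__":
    app.start(port=3000)
```

The ephemeral reply mirrors the concept's transparency goal: the sender is warned privately before anyone else, or the AI, acts on the data.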
In our project, we decided to explore three scenarios from the Black Mirror Brainstorming involving the use of AI in Slack (two negative and one positive) to balance perceptions of AI tools and explore user acceptance of AI intervention in different contexts. Using paper prototypes, we conducted Wizard of Oz testing with eight Slack users at our university to see how our prototypes fit these scenarios.
Taking the conversation literally and presenting some people in a negative light.
Reading the conversation as threatening, possibly tagging these people on the backend for future reference.
Helping international students who might not be able to read long messages in English.
Initially, our sponsors were fairly hands-off, and one of the things we struggled with was getting feedback on our work. Toward the final few weeks of the term, we built a relationship in which they were completely comfortable giving critical feedback on our designs and our rationale, and their insights helped us iterate on our designs.
We enhanced and finalized three concepts for increasing the AI's transparency and giving users more control over their data, so the AI can be more context-aware.
You can see how the AI works, and if you think it has misunderstood something, you can report it. The summary also links to the original conversations from which the AI picked up its context, and shows why content was flagged.
The tone can be set while creating a channel so the AI becomes more context-aware and makes fewer misinterpretations (e.g., taking jokes literally).
An alert is sent when a message matches filters set by the user, giving users control over their data privacy and the topics of discussion. A purely illustrative sketch of these channel-level settings follows.
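As an illustrative sketch only (not Slack's implementation; every name here is hypothetical), the tone and filter settings behind the last two concepts can be modeled as channel-level metadata that feeds both the AI's summarization context and the outgoing-message alert check.

```python
from dataclasses import dataclass, field

@dataclass
class ChannelSettings:
    """Hypothetical per-channel metadata captured when the channel is created."""
    name: str
    tone: str = "formal"  # e.g. "formal", "casual", "humorous"
    alert_topics: list[str] = field(default_factory=list)  # user-chosen filter topics

def build_summary_prompt(settings: ChannelSettings, messages: list[str]) -> str:
    """Prepend the declared tone so the model is less likely to take jokes literally."""
    header = (
        f"Summarize the following messages from #{settings.name}. "
        f"The channel's declared tone is '{settings.tone}'; "
        "interpret informal language accordingly.\n\n"
    )
    return header + "\n".join(messages)

def alerts_for(settings: ChannelSettings, draft: str) -> list[str]:
    """Return the user-set filter topics a draft message would trigger an alert for."""
    return [t for t in settings.alert_topics if t.lower() in draft.lower()]

# Example: a casual channel where grades and health are user-flagged topics.
cfg = ChannelSettings(name="grad-lounge", tone="casual", alert_topics=["grades", "health"])
print(build_summary_prompt(cfg, ["I'm dying over this assignment lol"]))
print(alerts_for(cfg, "Did everyone get their grades back?"))  # -> ['grades']
```

Keeping tone and filters as explicit, user-visible settings is the point of both concepts: the context the AI uses is declared by the users rather than silently inferred.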