AI Security and Privacy - Supermanagers TLDR
Tips to protect sensitive information and adopt smarter practices!
In 2024, adoption of generative AI increased across industries. In fact, the share of organizations that reported using Gen AI in the workplace nearly doubled in one year, according to the latest McKinsey Global Survey.
Tools like ChatGPT, Claude, and Fellow’s AI Meeting Assistant are helping teams work more efficiently. But with great power comes great responsibility, along with significant security considerations!
In this issue, we’ll explore how you can stay ahead of security risks while making the most of AI tools. From safeguarding sensitive data to adopting smarter practices, here’s what you need to know:
4 best practices for data privacy with Gen AI 🔐
In an article for TechInformed, tech news reporter Ricki Lee gathered expert insights on protecting data privacy when using generative AI tools. Here are some of our highlights:
1. Think before you input
Generative AI tools like ChatGPT can store and reuse data entered into them. Sebastian Gierlinger, VP of Engineering at Storyblok, highlights that human error — like sharing sensitive info in prompts — is a major risk. His advice? Educate your team on safe usage and consider tools that can scrub sensitive data from prompts automatically (a minimal sketch of what that scrubbing can look like follows this list).
2. Set AI ground rules
A clear company policy on AI use is key. Angus Allan, Senior Product Manager at CreateFuture, says tailored guidelines not only mitigate risks but also empower teams to use AI effectively while staying compliant with data regulations like GDPR. “It’s about clarity,” he says, “so teams can focus on solving the right problems.”
3. Consider deleting your history
Many tools let you disable data storage or delete chat histories. Patrick Spencer from Kiteworks suggests regularly reviewing these settings.
4. Ensure your data isn’t used to train models
Some AI tools use user data to improve their models, which could compromise your sensitive information. Patrick Spencer from Kiteworks advises organizations to carefully vet their AI vendors and select tools that explicitly guarantee user data won’t be used for training. Always check the fine print in vendor policies.
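To make the first tip concrete, here is a minimal sketch of what automated prompt scrubbing could look like: a short Python script that swaps common patterns of sensitive data (emails, phone numbers, card numbers) for placeholders before a prompt ever leaves your systems. The patterns and the `scrub_prompt` helper are illustrative assumptions, not features of any particular tool; a production scrubber would cover many more categories.

```python
import re

# Hypothetical patterns for a few common kinds of sensitive data.
# A real scrubbing tool would cover far more (names, API keys, customer IDs, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b(?:\+?\d{1,3}[ .-]?)?(?:\(\d{3}\)|\d{3})[ .-]?\d{3}[ .-]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub_prompt(text: str) -> str:
    """Replace sensitive substrings with placeholders before the prompt leaves your systems."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this email from jane.doe@acme.com, phone 555-123-4567."
    print(scrub_prompt(prompt))
    # -> Summarize this email from [EMAIL REDACTED], phone [PHONE REDACTED].
```

Running the example prints the redacted prompt, so nothing like an email address or phone number reaches the AI tool. The same idea scales up with dedicated PII-detection libraries or with scrubbing features built into some enterprise AI gateways.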
8 questions to ask before choosing an AI meeting assistant 🛡️
AI meeting assistants are great productivity tools, but they come with important security considerations. Before choosing one for your organization, ask these eight essential questions to ensure your sensitive data stays protected:
Is your data being used to train LLMs?
How long is your data being retained?
What compliance certifications has the vendor earned?
What are the security standards of the AI vendors?
Do participants get notified about the recording?
Can you pause recording during a call or redact sections after it’s over?
Do you have control over what is recorded?
Do you have control over who can see recordings?
Don’t forget to forward this to your IT and Operations colleagues. They’ll thank you for it.
Fellow tip of the week 💡
When using an AI meeting assistant, it’s important to prioritize privacy, security, and control. This video shows you how to stay on top of these considerations with Fellow:
Use workspace settings to ensure that only the right people have access to meeting agendas, notes, and recordings. This is especially helpful for teams like HR or finance, who might share a workspace but need distinct privacy levels.
Set up automation rules for recordings. For example, you can prevent sensitive meetings (like those involving legal discussions) from being recorded automatically.
Take advantage of pause and redact features for recordings. This way, you can pause the recording or delete sensitive portions of a meeting without worrying about what gets stored.
Using Fellow as your AI meeting assistant helps you make the most of AI while keeping your organization’s data secure and compliant with privacy standards.
We want your feedback for 2025 🚀
The Fellow team is planning some exciting updates for our podcast and newsletter – and we want your input.
What topics are top of mind for you in 2025? Whether it’s navigating hybrid work or adopting AI to increase team efficiency, we’d love to know what matters most to you.
Please hit reply and share your thoughts! Your feedback will help us curate content that’s relevant, practical, and inspiring for you next year.
Wishing you a happy holiday season ✨
As we step into 2025, adapting to AI-driven workflows will be more important than ever. With the right approach to security and privacy, we can unlock AI’s full potential while keeping our organizations safe and thriving.
If you enjoyed this issue, share it with a colleague or friend.
If this email was forwarded to you, you can subscribe here.
Thanks for being part of our community,
Manuela and the Fellow team