My organization collaborates with partners across several sectors that work with sensitive data, including medical, education, and environmental fields. Many of our projects run on the Nonprofit Google Workspace, which includes access to Gemini. Our team has been digging through Google’s official documentation on data privacy, human content review, model training, content ownership, and related topics—but as many of you probably know, it can get pretty convoluted.
I’ve been trying to find guidance or discussions beyond the official docs, ideally from sources that aren’t tied to vendors selling AI tools or consulting services. So far, I haven’t had much luck, and I’m hoping this community might have pointers.
Does anyone have resources or examples of best practices for equity-focused projects that use AI tools in general—or Gemini via Workspace specifically? We’re trying to understand how much trust to place in the documentation and to learn what best practices others have established when working with sensitive or mission-driven datasets in these tools.
Hi Rachel, I’ve been doing a lot of learning in this space and have started working to support stronger data governance within my organization around using Gen AI tools with associate data (I work in People Analytics, and AI is coming at HR hard right now), aiming to embed a data equity and ethics lens within our approach. First, I want to acknowledge how overwhelming things can be right now, with how quickly AI is being pushed out without the right precautions in place. It’s a lot! I’m going to do a bit of a resource dump here that might be more than you asked for.
Here are some recommendations:
Data literacy and AI literacy are critical to responsible and equitable AI. I recommend creating learning paths that can support those who are using Gen AI in general, but especially those who are using it with data that is explicitly about people at an individual level. You don’t have to build from scratch: your organization may already have something, and there are many courses and lots of content being developed now that you could source from. Make sure it addresses AI biases and harms.
Have it also reflect how your organization interacts with AI. Are y’all building your own AI tools and models? If so, you might need to provide more education on how to check for biases and harms during development, training, and after deployment. Are y’all primarily using third-party AI tools? Then you need to understand how to assess those tools properly and work with vendors to transparently share how they test for biases and harms, as well as how to use a tool appropriately once you’ve procured it.
Review your data governance framework and mechanisms within your organization and conduct a risk assessment to identify areas where you need to make changes as a result of emerging tech like AI.
If you don’t already have a data governance framework, start creating one, prioritizing the most critical pieces first.
Create policies and guidelines for AI usage within your organization:
- What are the approved use cases for AI?
- Which AI tools are approved, and for which use cases? Using approved, enterprise tools often means greater security, and data isn’t used to train models.
- When considering enterprise tools, assess them from an ethics, privacy, and equity stance.
- Are there specific guidelines for using AI with data that is explicitly about people?
  - Do’s and don’ts, risk-level assessments for use cases, human-in-the-loop applications, information on AI biases and harms, and a list of people-data types and their levels of sensitivity
  - What data shouldn’t be used with AI tools, and whether there are different risk assessments based on this
  - Whether individual or aggregate data can be used, and whether there are different risk assessments based on this
  - Whether certain analyses should be conducted by a particular team that has the skills and tools to do the work and ensure proper data privacy
- What governance steps do people need to go through before deploying AI (AIAs, works council agreements, legal and data privacy review, stakeholder engagement)?
- In what ways can data governance be automated through systems and tools?
Here are resources I’ve been engaging in to help my learning and application, many of which are from the perspective of HR because that’s the function I work within:
- Continuously engage in learning related to AI and responsible AI (All Tech is Human released 5 free responsible AI courses last month).
- Find sources, individuals, and organizations that you trust and that are relevant to you and your work, then sign up for their newsletters and follow them online. This helps reduce the cognitive load of keeping up in a rapidly changing space, and it also helps with finding new resources (articles, reports, and trainings).
- Use AI to help you understand these concepts and draft frameworks, documents, policies, etc. Make sure you’re validating the outputs alongside SMEs and ensuring they’re relevant to your space.
For example, I have a Gem I created in Gemini that helps me integrate concepts of data ethics, equity, and justice into my AI-related work. I’ve given it the instructions below, and I can add specific knowledge for it to reference when I’m using it. Often, when I’m prompting a tool, I give it examples of concepts or experts I trust to reference, then ask it to validate the insights it’s sharing and to cite sources so I can review them. This expedites my brainstorming, ideation, and draft-creation phases while making it easy for me to validate the content and ensure it makes sense and is accurate.
You’re a data equity, data ethics, responsible data, and data justice expert (this also applies to AI ethics, justice, and responsible usage), centering in the beliefs that all data is human and aiming for co-liberation and mutual aid. You’ve done the learning to understand power dynamics and how those have led to inequitable experiences and outcomes for marginalized and oppressed communities across the world and also understand how intersectionality plays into this.
Not only are you an expert in this space, you recognize those who’ve come before you and see the connections to other work like inequities in housing, healthcare, wealth generation, justice systems, employment, and education, partnering with the experts in those fields too.
You’re well practiced in applying this knowledge and expertise to the workplace including how talent is attracted, supported, and retained; making clear connections to organizational strategic workforce planning strategies and consulting with leaders and teams on how to integrate data & AI justice, equity, and ethics into these strategies to mitigate harm to associates, especially those who are part of marginalized communities.
You are also an excellent facilitator, coach, and teacher. You can clearly identify what people are interested in, how they learn best, and how to scaffold someone’s understanding of simple and complex topics, supporting them in gaining confidence and understanding in how to apply these new concepts.
You are thorough in your analysis and responses providing enough information for someone to understand a topic well and providing additional resources for them to dive more deeply. You use a casual, coaching tone that’s professional and approachable to connect with others. You also cite your sources and embed links to make it easy for others to follow along and validate your insights.
First, if you haven’t, you should ask your collaborators what their AI policies are for using various types of AI tools to process the level of data you have access to. You will need to document your plan and their consent within a data use agreement in case something goes awry.
My organization works with sensitive data, including medical data. Within our organization, we have decided to permit some users with access to less sensitive data to use MS Copilot (embedded in MS apps for approved users within our secure enterprise account). We require others to submit additional authorization requests detailing how they will use the tools before they’re granted access to anything that can read our organizational data. We can use MS Copilot Chat (which only sources information from the internet), but we never include individually identifying information or other data internal to our organization in Chat.
Within my division, no one has been approved to use AI for highly sensitive data and no one is approved to use free versions of AI. I’m not familiar with Google products. Does Google offer enterprise-level licenses?
Thank you so much @dshipley2, @bergman, and @rachellynn for this super practical and useful discussion. We All Count is preparing some case studies on how to use the Data Equity Framework to help align AI projects with your communities and values. It’s a bit of a struggle since, at its foundations, most AI is simply built on stolen data.
Danica’s list of topics is one of the best I’ve seen. One additional resource currently helping to inform my thinking is the folks over at DAIR.
@dshipley2 Wow! Thank you so much for the time you put into sharing your experience and resources. I appreciate it so much and will be sharing with my team and working my way through it all.
@bergman All great points - we’re in a unique position in that the majority of our partners are looking to us for guidance on how and when to use AI, both for their own internal purposes and for what’s appropriate for the data work we do with them. We’re looking to develop our own baseline AI practices and add language to our contract documents. It’s a good reminder to see if any new/upcoming partners already have guidelines established, just in case. To your question about Google products: with Gemini, there are more data protections when using it via Workspace and your Nonprofit license than if an individual just goes to gemini.google.com online. The documentation seems solid, but even they still recommend not inputting any sensitive data.
Good to know, Heather. I struggle with how wonderfully useful and impactful something like Gemini can be (when used appropriately for the right tasks), contrasted with the bad taste it leaves knowing the ethical and environmental impact of developing and actually powering these tools.