Content warning: This post covers sensitive topics including sexual violence (no graphic descriptions), racism and discussion of im/migration status. It is my hope that including content warnings can open up, rather than shut down, important discussions.
The last time I was in the Mile High City, I was “sprinting” with my friends from the Salesforce.org ecosystem. Today, I’m back (altitude sickness and all!) for a conference with Grantmaking Professionals. We had the privilege of hearing a keynote yesterday from a social entrepreneur who has created a platform for survivors to report data about sexual harassment, coercion, and violence in an effort to aggregate information and reveal repeat offenders. Learn more about the Callisto platform here (link to TED talk). (They are currently pivoting from focusing on college campuses to expanding among Silicon Valley startups, where there is rampant sexual misconduct and little-to-no accountability).
The aspect that resonated with me the most (beyond building infrastructure for collective action… can you say AWESOME?!) was their emphasis on privacy and security. Jessica explained (in lay-person terms) some of the complexity behind the platform’s encryption mechanisms, which protect survivors from hackers, legal woes, and re-traumatization. I was fascinated when she explained that no single server holds all of the website’s data, so even if one server were hacked or compromised, the others would remain intact. Additionally, she shared that they use a new programming language with sophisticated encryption, so even staff at Callisto can’t dis-aggregate the data and use it to contact survivors. This can make it difficult to track metrics about survivors who have reported incidents through the system, but the plus side is that Callisto is truly private and secure. Perhaps follow-up metrics are best tracked through other means. Or, in other words, let’s solve one problem at a time.
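To make the “no single server holds everything” idea concrete: Callisto’s actual architecture isn’t described in detail here, but one classic way to achieve this property is secret sharing, where data is split into pieces that are individually meaningless. Here’s a toy sketch (a simple XOR split, purely illustrative, not Callisto’s real design):

```python
import os

def split_secret(data: bytes) -> tuple[bytes, bytes]:
    """Split data into two shares; neither share alone reveals anything."""
    pad = os.urandom(len(data))                    # random one-time pad
    share = bytes(a ^ b for a, b in zip(data, pad))
    return pad, share                              # store each on a different server

def recombine(pad: bytes, share: bytes) -> bytes:
    """Reconstruct the original data only when BOTH shares are present."""
    return bytes(a ^ b for a, b in zip(pad, share))

report = b"confidential report"
server_a, server_b = split_secret(report)
assert recombine(server_a, server_b) == report
```

An attacker who compromises only `server_a` or only `server_b` learns nothing but random-looking bytes; the data is recoverable only by combining both. Real systems use more robust schemes (e.g. Shamir secret sharing), but the principle is the same.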
“All the rage”
Data privacy is all the rage these days… both in the “popular” sense of the term and the “aggravating/enraged” sense of the term. I think part of the reason why data privacy has become such a common topic of conversation is because there is actually shockingly LITTLE data privacy left anymore… which is… somewhat terrifying. Partially in response to this reality, you’ve probably seen pop-ups on common websites that ask for your permission to hold on to your data. This feature enables companies to comply with a set of regulations from the European Union called the GDPR (General Data Protection Regulation). (Want to learn more? I think this e-booklet is a great resource). Under this policy, when companies or organizations collect data on an individual, they are required to provide:
- The purpose of acquiring the data and how it will be used
- Whether the data will be transferred internationally
- How long the data will be stored
- Timely updates if there is a data privacy/integrity breach
- and more
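One practical way to think about those required disclosures is as fields you’d record alongside every consent you collect. Here’s a hypothetical sketch (the field names are my own, not from the GDPR text, and this is not legal advice):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """Hypothetical record mirroring the disclosures listed above."""
    purpose: str                  # why the data is collected and how it will be used
    international_transfer: bool  # will the data leave the subject's jurisdiction?
    retention_days: int           # how long the data will be stored
    granted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

# Example: a newsletter signup
consent = ConsentRecord(
    purpose="newsletter delivery",
    international_transfer=False,
    retention_days=365,
)
```

The point isn’t the code itself; it’s that each disclosure becomes an explicit, auditable piece of your data model rather than fine print nobody re-reads.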
The GDPR law took years to refine and represented something of a seismic shift for many companies when they were required to comply last year. And I think it’s really important, given the scale and frequency of major data breaches. Plus, there’s the on-going fight for a free and open internet. (Subject for a future blog post?)
Who collects what? and when?
I know that many of us in the TDAA community collect data through either Google Forms or through a database system (not so much through website views / Google Analytics). But how often do you ask yourself what actually happens when the data come in? And how you responsibly hold onto and maintain those data? These are great questions to consider in light of GDPR, even though GDPR itself isn’t so relevant for most of us.
Maybe when the data come in, you take action! That’s what I do when someone signs up for an Amplify study group. I look at their name and study group interest, and add them to a waiting list for the next course cycle.
Alternatively, you may simply accumulate data! In this scenario, you let the responses pile up, so that you can take action later, like sending an email invitation… delivering petition signatures… or recruiting from that list for meaningful volunteer roles.
More is more?
No matter what you do when data come in, you’re now responsible for the information people shared with you. Name, address, email, gender identity, race, phone number, you name it. I think it’s important to evaluate the risks and rewards of accumulating lots of data. You should be accountable to those people and not do things like sell their info to another group… or spam them… or print out a copy and leave it in a local cafe. All of these things would violate the privacy of the people who signed up, even if that privacy isn’t regulated by international law 😉
Let’s look at privacy from a different angle. Say you’re with an organization that serves communities of im/migrants and refugees. You may collect information about visas, citizenship, and legal residential status. You might even have a checkbox to indicate if a person is “undocumented” or not. And you might use this information to assess your programs, protect your community, and inform your staff and volunteers about meaningful and sensitive updates. These are all good reasons to have data in your system! But you should also keep track of who sees the data and make sure that it’s protected. Do new volunteers automatically have access? What about the fundraising department? How are you managing that risk?
At this conference, I attended many sessions about how to track metrics for Diversity, Equity and Inclusion in grantmaking. When it comes to the organizations we might work with as grantees, is it more important to track the racial composition of organization staff… or the racial composition of the communities they serve? Both? What about gender, sexual orientation, socio-economic status? Whose job is it to collect these statistics and crunch the numbers? And how often? Assuming we do collect the data, how do we make meaning out of it? What do we compare these data to? Other organizations? Internal goals/benchmarks? Local/national demographic trends? Change over time?
Some people might walk away from these sessions with a “more is more” attitude. From collecting little/no demographic data, it’s easy to let the pendulum swing in the other direction!
Maybe less is more?
On the other hand, we can’t track everything under the sun (for reasons of fairness to grantwriters, limited capacity for grantmakers, limited utility, and the fact that we aren’t in charge of the world!). In these cases, less is more!
I want to elaborate on this less is more idea in the context of capacity (it takes TONS of time and resources to collect all of this demographic data… couldn’t that time be better spent actually supporting target communities?) AND in the context of privacy/data security.
There are lots of situations where it’s actually important to make decisions in the absence of demographic indicators… for example, managing a job interview process. Study after study demonstrates that removing names, photos, and other racial indicators from resumes substantially mitigates bias in hiring. In this example, less is more. I think the ban the box movement (referring to the checkbox on many job applications that asks about criminal record) comes from a similar conclusion. If certain info makes you more likely to be influenced by bias, it’s better not to know.
There are (at least!) two sides to every data quandary. The flipside to this advice is… how can we change what we aren’t measuring? This points to a mandate to collect demographic info under the assumption that (a) this will lead to change and (b) we will be able to observe/monitor/measure the change. Are these assumptions true?
In my blogpost on the #WeWillNotBeErased movement, I started to explore these ideas and I made a recommendation about how (and when!) to structure fields to collect data on gender identity. I stand by these recommendations! But I also warn readers that there are good reasons to NOT collect this info, especially when we’re living under an increasingly totalitarian regime that could target members of the trans community. In that case, perhaps better to not have identifying information stored here, there, and everywhere!
I don’t want to be mistaken here… I’m generally PRO collecting meaningful, relevant data to support social justice initiatives and I *certainly* support efforts that allow people to self-identify, enabling us to address them how they’d like to be addressed.
I used the “less is more” approach in a recent recruitment form that I created for the Amplify community. In fact, I asked only 3 questions!
- What is your pronoun? (she/he/ze/they/prefer to self describe)
- Do you identify as an underrepresented voice in tech? (Yes/No/No but I identify as an Ally)
- Is there anything else you would like to share about your racial, ethnic or other intersecting identities? (free response)
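If you wanted to capture those three questions in a database rather than a spreadsheet, the model can stay just as minimal. Here’s a hypothetical sketch (field names and the “self-describe” handling are my own choices, not Amplify’s actual setup):

```python
from dataclasses import dataclass
from typing import Optional

# Closed choice lists mirroring the three form questions above
PRONOUN_CHOICES = {"she", "he", "ze", "they", "prefer to self-describe"}
URV_CHOICES = {"Yes", "No", "No, but I identify as an Ally"}

@dataclass
class RecruitmentResponse:
    pronoun: str
    underrepresented_voice: str
    identities_note: Optional[str] = None  # free response; stays unstructured on purpose

    def __post_init__(self):
        if self.pronoun not in PRONOUN_CHOICES:
            raise ValueError(f"unexpected pronoun option: {self.pronoun!r}")
        if self.underrepresented_voice not in URV_CHOICES:
            raise ValueError(f"unexpected answer: {self.underrepresented_voice!r}")

response = RecruitmentResponse(pronoun="they", underrepresented_voice="Yes")
```

Notice what’s deliberately missing: no name-to-demographic joins, no granular race/ethnicity picklist. The free-response field lets people self-identify on their own terms without forcing the data into categories we don’t yet have a strategy for.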
For our purposes, this was enough. It doesn’t allow us to dis-aggregate by race/ethnicity, but for now that’s okay. I don’t think it’s ethical to collect data that we might use on a rainy day but don’t currently have a strategy for.
I guess what I want to say here is… I think collecting some of these datapoints, without a rationale or a coherent strategy, is a dangerous red herring. You don’t necessarily need data to TELL you if you’re not reaching certain populations of people. Sometimes it’s pretty easy to tell… even right under your nose… and collecting the info will result in undue burdens without changing the underlying cause of the lack of equity (cultural insensitivity? lack of access? lack of authentic relationships? etc.) Guess what! Data can’t inherently solve those problems, y’all! It’s clear to me that cultural and behavioral work has to happen first. Data collection can be a meaningful accountability and monitoring tactic, but it’s not a standalone solution. Especially when collecting potentially sensitive datapoints (gender identity, criminal history, immigration status, dis/ability status, medical diagnoses, and more), if we collect them at all, we need to have a plan for how to handle them with safety and respect.
So, here are some key takeaways:
- Let’s design systems and practices that protect the privacy of our communities
- Let’s keep questioning how and when to collect demographic data
- There’s no silver bullet or one-size-fits-all approach