Hi all – My team has been using equity gap scores to good effect for a client since 2020. We (and the client) appreciate the simplicity of it, but we need help resolving a methodological question. To demonstrate the issue we are struggling with, I’ll use the made-up literacy data that Heather used in a post about the EGS (Supercharge your Averages with an Equity Gap Score » We All Count).

In various applications of the EGS demoed on the We All Count site, the data can be in either a positive or a negative orientation (e.g., literacy rate or illiteracy rate). Using the faux data, the EGS on the illiteracy rate is a whopping 15.3 (worst (46%) / best (3%)). Using the same data in a positive orientation – the literacy rate – the EGS drops to 1.8 (best (97%) / worst (54%)).

My question for the forum: which EGS is a closer reflection of the disparity? Does the negative-orientation EGS of 15.3 overstate the disparity, or does the positive-orientation EGS of 1.8 understate it? Recognizing that the EGS is an interpretive tool and not an absolute reflection of reality, how have forum users approached this?

Also, with apologies for tacking on a related but different EGS question: I would like to hear how forum users approach or explain EGS scores from metric to metric. For example, say we have calculated the EGS on 10 disparate metrics (e.g., literacy, educational attainment, income, home ownership, etc.). The illiteracy-rate EGS is 15.3, and say the income EGS is 2.3. It looks like we have a much bigger equity problem with literacy. But because the EGS can be calculated in either orientation (and because an income calculation is based on large figures, where differences have less relative impact on the EGS), if we instead used the literacy-rate EGS of 1.8, it would appear that we have a bigger income problem. How do forum practitioners approach this nuance with clients or the public at large while still benefitting from the analytical and communication simplicity of the EGS? Thanks!
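For anyone who wants to reproduce the asymmetry, here is a minimal sketch in Python. Two assumptions on my part: that the EGS is the simple highest-rate/lowest-rate ratio implied by the figures above, and a two-group stand-in rather than Heather’s full faux dataset (the group names are hypothetical).

```python
# Minimal sketch: the same underlying data yields very different EGS values
# depending on orientation. Assumes EGS = highest group rate / lowest group
# rate, per the figures above. Group names and rates are hypothetical stand-ins.

def egs(rates):
    """Ratio of the highest group rate to the lowest."""
    return max(rates.values()) / min(rates.values())

literacy = {"Group A": 0.97, "Group B": 0.54}         # positive orientation
illiteracy = {g: 1 - r for g, r in literacy.items()}  # negative orientation

print(f"Literacy EGS:   {egs(literacy):.1f}")    # 0.97 / 0.54 ≈ 1.8
print(f"Illiteracy EGS: {egs(illiteracy):.1f}")  # 0.46 / 0.03 ≈ 15.3
```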
Based on the way you explained this, it appears that there are two parts here.
Part 1: literacy is less affected by the variable you’re studying.
Part 2: illiteracy is more affected by the variable you’re studying.
The questions I would ask myself are: what is the hypothesis, what am I studying, and how does my organization want to view data? My department has taken an asset-based approach to data, so we would report the success metric. Yet people may try to increase success by looking at what works (do more of this) or at what doesn’t work (do less of this).
I know this isn’t an answer so much as food for thought, but it may help you explore the project you’re working on.
Hey there @kariparsons, this is SUCH an excellent question. I am so happy to hear that the EGS has been useful to you and your clients. And the question about the choice to calculate and display a positive or negative gap is so important. Somewhere in the archives we have an example about this using cancer scores – just to demonstrate that it really does make a big difference. Most people simply assume that the gap will be symmetric, and it’s definitely not. I cannot find it at the moment, but when I do I’ll link it here.
@Jadevieira_Bristol makes a good point about considering how to align the choice with the purpose and priorities of your project.
Neither one is “scientifically better,” and neither is a truer reflection of the real difference. They are reflecting slightly different questions.
The literacy question is “Who is attaining the positive outcome — and how evenly is that opportunity distributed?”
The illiteracy question is “Who is burdened by the problem — and how unevenly is that harm distributed?”
The first is often preferred if your goal is promoting success and evaluating gains; however, it can hide who’s being left behind. The second is often better if you’re targeting need or resourcing justice-focused interventions, but it can sound deficit-focused without context.
It is validating that I am not the only one who has struggled with how to orient the EGS in different contexts! Thank you both for these responses – they have expanded my thinking on the issue. My next little experiment (when time permits) will be to play around with normalizing the EGS across different metrics so that it would be possible, at least in theory, to measure relative burdens or positive outcomes across disparate measures. I’m curious about this because a challenge we have encountered is when clients want to know if the EGS for, say, arrest rates is better or worse than the EGS for, say, maternal mortality. It is not very satisfying to tell them “don’t compare.” Normalizing seems like a way to allow some level of comparison, but again, I have not thought deeply about this yet. If anyone has tried this, I’d love to hear!
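In case a concrete starting point helps anyone react to the idea, here is a first, very naive sketch of what I mean by normalizing (Python). To be clear, this is my own assumption, not anything from the We All Count methodology: log-transform each EGS (ratios behave more sensibly on a log scale), then min-max scale across the set of metrics. The metric names and EGS values are hypothetical, and this still inherits the orientation problem, since log(97/54) and log(46/3) are not symmetric.

```python
import math

# Naive sketch of normalizing EGS scores across disparate metrics:
# log-transform each ratio, then min-max scale to [0, 1] across the set.
# Metric names and EGS values below are hypothetical.
egs_by_metric = {"illiteracy": 15.3, "income": 2.3, "home ownership": 1.6}

logged = {m: math.log(v) for m, v in egs_by_metric.items()}
lo, hi = min(logged.values()), max(logged.values())

normalized = {m: (v - lo) / (hi - lo) for m, v in logged.items()}
for metric, score in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{metric}: {score:.2f}")  # 1 = largest relative gap in the set
```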
That’s a good idea, @kariparsons. If I have time, I’ll give that a try as well.