You cannot protect what you have not measured. And right now, most professionals have a feeling about their AI exposure rather than an actual number. That vague, untested feeling is much harder to act on than a clear score arrived at through a structured process.
This framework gives you a structured way to score your job’s exposure to AI. It is not a precision instrument. No self-assessment framework can be. But it is considerably more useful than trying to assess your risk based on a job title search or a LinkedIn hot take.
Why Scoring Matters More Than Just Knowing
There is a meaningful difference between knowing “my job has some AI exposure” and knowing “roughly 65 percent of my current tasks sit in a high-exposure category.” The first statement produces anxiety. The second produces a plan.
When you put a number to your exposure, even an approximate one, you give yourself something to work with. You can track it over time. You can have a real conversation with yourself about whether you are moving in the right direction. You can see, clearly, which tasks are pulling your score up and which are holding it down.
The full context for how this connects to real AI job risk is covered in Is Your Job Actually at Risk From AI? How to Tell. This framework is the practical scoring tool that goes alongside that broader understanding.
The Four Dimensions of AI Exposure
The framework scores each task in your role across four dimensions. Each dimension gets a score from 1 to 5. A higher score means higher exposure for that dimension.
Dimension 1: Repeatability
How much does this task follow the same pattern every time?
| Score | Description |
|---|---|
| 1 | Every instance is genuinely novel, no two are alike |
| 2 | Similar pattern, but significant variation each time |
| 3 | Clear pattern with moderate variation |
| 4 | Mostly repeatable with minor differences |
| 5 | Near-identical every time, clear rules to follow |
Dimension 2: Judgment Required
How much does doing this task well require genuine human judgment rather than following instructions?
| Score | Description |
|---|---|
| 1 | Deep expertise and contextual judgment are essential |
| 2 | Significant judgment required, rules are a starting point |
| 3 | Mix of rules-following and judgment calls |
| 4 | Mostly rule-based with occasional judgment |
| 5 | Almost entirely rule-following, minimal judgment needed |
Dimension 3: Human Relationship Dependence
How much does this task’s value depend on a specific human relationship or trust?
| Score | Description |
|---|---|
| 1 | The value is almost entirely in the relationship, not the output |
| 2 | The relationship significantly affects the value of the work |
| 3 | The relationship matters, but the output also has standalone value |
| 4 | The relationship is marginally relevant |
| 5 | The task is fully transactional, no relationship element |
Dimension 4: Output Evaluability
How easily can the quality of this task’s output be evaluated without deep human expertise?
| Score | Description |
|---|---|
| 1 | Only a genuine expert with deep context can evaluate quality |
| 2 | Requires significant expertise to assess properly |
| 3 | Mixed, some aspects are objectively assessable, others require judgment |
| 4 | Mostly objectively assessable by a reasonably informed person |
| 5 | Quality is easily measured, clear right and wrong answers |
How to Apply the Scoring
Start with the task list you built during your job audit. If you have not done that yet, How to Audit Your Own Job Before AI Does It for You walks through exactly how to build that list.
For each task, score it across all four dimensions (1 to 5 each) and then calculate the average. That average is the task’s composite AI exposure score.
An example: say you are a marketing analyst and one of your tasks is “compile weekly performance dashboard from three data sources.”
- Repeatability: 5 (same every week)
- Judgment Required: 2 (some interpretation but mostly pulling numbers)
- Human Relationship Dependence: 1 (no relationship element)
- Output Evaluability: 4 (clearly measurable metrics)
Average score: 3.0. That task sits in the moderate-to-high exposure range. Now compare that to “present findings to CMO and recommend Q4 budget reallocation.”
- Repeatability: 2 (varies significantly each quarter)
- Judgment Required: 2 (significant strategic judgment)
- Human Relationship Dependence: 2 (CMO trust in your read matters a lot)
- Output Evaluability: 2 (quality is hard to assess without deep context)
Average score: 2.0. Much lower exposure. The difference in those two tasks is not complexity in the abstract. It is the specific structural features that make work easier or harder for AI to replicate.
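If it helps to see the arithmetic spelled out, here is a minimal Python sketch of the composite calculation using the two example tasks above. The task names and structure are illustrative only, not part of any published tool.

```python
from statistics import mean

# Dimension scores per task, in the order:
# (repeatability, judgment, relationship, evaluability)
tasks = {
    "compile weekly performance dashboard": (5, 2, 1, 4),
    "present findings to CMO, recommend Q4 reallocation": (2, 2, 2, 2),
}

for name, scores in tasks.items():
    # Composite AI exposure score = simple average of the four dimensions
    composite = mean(scores)
    print(f"{name}: {composite:.1f}")

# Output:
# compile weekly performance dashboard: 3.0
# present findings to CMO, recommend Q4 reallocation: 2.0
```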
Reading Your Overall Exposure Score
Once you have scored all your significant tasks, calculate a weighted average based on how much time each task takes. A task you spend 30 percent of your week on should carry much more weight than one you do for an hour per month.
Your weighted average is your overall AI exposure score. Here is how to interpret it.
| Overall Score | What It Means | Recommended Response |
|---|---|---|
| 4.0 to 5.0 | High exposure. Most of your time on highly automatable work. | Act with urgency. Shift your task mix now. |
| 3.0 to 3.9 | Moderate-high exposure. Real pressure likely within 2-4 years. | Begin shifting deliberately. Use AI on Category A tasks. |
| 2.0 to 2.9 | Moderate exposure. A mix that requires active maintenance. | Monitor closely. Invest in protected skills. |
| 1.0 to 1.9 | Low exposure. Work is deeply human and contextual. | Stay current on AI tools. Lower immediate urgency. |
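If you want to see the weighted average and the bands side by side, here is a minimal sketch. The task mix, time shares, and function names are hypothetical; the bands simply mirror the table above.

```python
def overall_exposure(tasks):
    """Time-weighted average of per-task composite scores.

    tasks: list of (composite_score, time_share) pairs, where
    time shares are fractions of your working week.
    """
    total_weight = sum(share for _, share in tasks)
    return sum(score * share for score, share in tasks) / total_weight

def interpret(score):
    # Bands taken from the interpretation table above
    if score >= 4.0:
        return "High exposure: act with urgency"
    if score >= 3.0:
        return "Moderate-high: begin shifting deliberately"
    if score >= 2.0:
        return "Moderate: monitor closely, invest in protected skills"
    return "Low exposure: stay current on AI tools"

# Hypothetical task mix: (composite score, share of the week)
my_tasks = [(3.0, 0.30), (2.0, 0.10), (4.25, 0.40), (1.75, 0.20)]
score = overall_exposure(my_tasks)
print(f"{score:.2f} -> {interpret(score)}")
# 3.15 -> Moderate-high: begin shifting deliberately
```

Notice how the 40 percent of the week spent on the 4.25-scoring task dominates the result: the weighting is the whole point of the exercise.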
Most analytical and office-based professionals who do this exercise honestly land between 2.5 and 3.5. That range is not cause for alarm. It is cause for deliberate action.
Using Your Score to Make a Plan
Your score is most useful not as a single number, but as a map of where your exposure sits at the task level. Look at the five or six tasks with the highest individual scores. Those are the clearest near-term risks in your role. Now ask: is this task genuinely central to how my manager and organization value my contribution? Or is it peripheral work I have always done but nobody would really miss if it got automated?
If it is central and high-scoring, your priority should be to develop a plan for either using AI to do it faster yourself (so you stay the one who directs it) or shifting your role’s identity away from that task toward adjacent work with a lower exposure score.
If it is peripheral and high-scoring, that is actually an opportunity. Use AI tools to handle that task more efficiently and reclaim the hours for deeper, more protected work. You lose nothing strategically and gain time in the parts of your role that matter more.
Remember, too, that the score is a snapshot, not a permanent verdict. Rescore yourself every six to twelve months and watch how your overall number moves. Moving your score from 3.2 to 2.7 over a year is a real, meaningful improvement in your career position, even if nobody at your company ever knows you ran this exercise.
Not sure where your role actually stands with AI? I built MedscopeHub’s free AI Impact Assessment specifically for this. It gives you a personalized score, shows your exact risk and leverage areas, and builds you a custom action plan in minutes. Take it free at MedscopeHub.com.
Frequently Asked Questions
How accurate is a self-scored AI exposure assessment?
Accurate enough to be useful, which is the point. This framework is not a precision instrument, and it does not need to be. What matters is that it gives you a structured, honest way to think about your exposure rather than relying on a gut feeling. The biggest source of error is self-flattery, so push yourself to be honest, especially on the Repeatability and Output Evaluability dimensions where people often overestimate how unique their work is.
Should I share my score with my manager?
That depends entirely on your relationship with your manager and your company culture. For most professionals, this is a private planning tool rather than something to share formally. The goal is to help you make better decisions about your career, not to flag yourself as vulnerable in your organization. That said, if you have a manager who is genuinely forward-thinking about these questions, a conversation about evolving your role could be valuable.
What if I disagree with a score I have given myself?
That discomfort is worth sitting with. When your honest assessment conflicts with how you prefer to see your work, that tension usually points to something real. Try to articulate specifically why you disagree. Is it because the task is genuinely more nuanced than the score suggests? Or is it because acknowledging the exposure is uncomfortable? Both are valid, but they call for different responses.