Why Two People in the Same Job Can Face Very Different AI Risk

MedScopeHub Team
· Mar 11, 2026 · 8 min read

Two people. Same company. Same title on their LinkedIn. One of them is watching AI tools absorb a growing chunk of their daily work, quietly anxious about where that leaves them in eighteen months. The other is busier than ever, using those same tools to produce more than they could before, and becoming harder to replace by the month. Same job. Genuinely different situations.

This is one of the most misunderstood aspects of AI job risk, and it matters enormously for how you assess your own position. AI risk is not evenly distributed within a job title. Who holds the role, what they actually do all day, and how they have shaped that role over time can produce dramatically different risk profiles even when the org chart shows identical positions.


The Core Reason: Tasks Vary Enormously Within the Same Title

Job titles are organizational shorthand. They describe a general category of work, not a precise specification of what any individual spends their day doing. Two people with the title “Business Analyst” at the same company might share twenty percent of their actual daily tasks and diverge completely on the other eighty.

One analyst spends most of her time producing weekly reporting dashboards, formatting Excel files, and responding to data requests from other teams. The other spends most of his time in stakeholder meetings, framing business problems, writing strategic recommendations, and presenting findings to senior leadership. Same title. One role is significantly more exposed to AI at the task level. The other is significantly more protected.

This is not a hypothetical. It is the reality in almost every professional office environment. And the implication is important: you cannot know your AI risk from your job title alone. You have to look at what you actually do, which is exactly what the audit process in How to Audit Your Own Job Before AI Does It for You helps you work through properly.


The Five Differences That Create the Gap

When two people in the same job face different AI risk, it almost always comes down to one or more of these five differences.

1. Task Composition

The most obvious factor: what percentage of each person's working week involves tasks that AI tools can already perform adequately? One person's role may be heavily weighted toward production tasks: pulling data, generating outputs, formatting documents. Another's may be weighted toward judgment tasks: interpreting results, advising stakeholders, making recommendations. Same title, very different task compositions, very different exposure profiles.

2. Relationship Depth

How embedded is each person in the trust networks of their organization? One person may process work that flows through their function, competently and professionally but without building deep relational capital. Another may have become the person senior stakeholders call when they need to think something through, someone whose judgment is trusted specifically because of who they are and the history they have built. That relational depth is genuinely harder to automate away.

3. Visibility of Contribution

Is the value each person delivers visible to the people who make decisions about headcount? A professional whose contribution is clearly understood and valued by their manager and senior leadership has a different risk profile from one whose work is genuinely important but largely invisible at the level where automation decisions get made. This is not fair, but it is real.

4. Personal Initiative in Using AI Tools

This one is increasingly important. A person who is already using AI tools to do the routine parts of their job faster, and reinvesting that recovered time in higher-value work, is actively improving their risk profile every week. A person in the same role who is not using those tools, or who is aware of them but avoiding them, is sitting still while their exposure grows. Same job title. Diverging trajectories.

5. Organizational Context

Two people with the same title at different organizations, or even in different departments of the same organization, can face very different real-world timelines for disruption. A team that is actively piloting AI tools, led by a manager who is invested in automation, inside a company that is moving fast on this, will feel the pressure earlier than the same role inside a slower-moving division. The technical exposure may be identical. The practical timeline differs significantly.


A Scenario Worth Recognizing

Picture two marketing managers at the same mid-size company. Same salary band, same manager, hired within six months of each other.

Maria has spent the last two years building relationships with the agency partners, attending strategy sessions with the CMO, and developing a reputation as the person who understands what the brand actually stands for and what it should not do. A decent chunk of her week is still spent on coordination and reporting, but that is not where her value is perceived to live.

James has spent those same two years becoming extremely efficient at the production side: campaign trafficking, performance reporting, briefing documents, agency communications. He is well-liked, reliable, and very good at what he does. But almost everything he is known for doing well is something AI tools are now handling adequately.

Same title. Same team. The risk profile is genuinely different, and the gap has been building quietly over two years of compounding daily choices.

The professionals who are best positioned right now are not necessarily the most senior or the most technically skilled. They are the ones whose daily work skews toward the tasks that AI cannot easily replicate, and who have built relationships where their specific judgment is trusted and visible.


What You Can Actually Do With This Information

The purpose of understanding this is not to make you feel better or worse about your current position relative to colleagues. It is to give you a useful lens for your own deliberate choices going forward.

Start by being honest about which version of your role you have been building. Is your daily work skewing toward production and structure, or toward judgment and relationships? If the answer is mostly production and structure, that is not a verdict on your value or your talent. It is a signal about where to direct your energy from here.

Look for opportunities, even small ones, to shift your mix. Volunteer to present findings rather than just produce them. Get into rooms where decisions are being made and offer your interpretation, not just your data. Build genuine relationships with the stakeholders who matter in your organization, not as networking strategy but because those relationships are where a lot of real professional value now lives.

And use AI tools on the production work so you have the time to do this. That is the practical engine behind the shift: reclaim hours from the automatable work and spend them on the protected work. The scoring framework in A Simple Framework for Scoring Your Job’s Exposure to AI can help you put a more precise measure on where your task mix currently sits.
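To make the idea of "where your task mix sits" concrete, here is a minimal sketch of how a task-level exposure score could be computed as an hours-weighted average. This is an invented illustration, not the actual framework from the linked article: the task names, hours, and automatability scores are hypothetical examples you would replace with your own honest estimates.

```python
# Hypothetical sketch: estimate weekly AI exposure as an hours-weighted
# average of per-task automatability (0 = hard to automate, 1 = easy).
# All task names, hours, and scores below are made-up examples.

def exposure_score(tasks):
    """tasks: list of (hours_per_week, automatability in 0..1)."""
    total_hours = sum(hours for hours, _ in tasks)
    if total_hours == 0:
        return 0.0
    return sum(hours * score for hours, score in tasks) / total_hours

# A production-heavy week vs. a judgment-heavy week, same 30 hours.
production_heavy = [
    (15, 0.8),  # reporting dashboards
    (10, 0.7),  # formatting files, data requests
    (5,  0.2),  # stakeholder meetings
]
judgment_heavy = [
    (5,  0.8),  # reporting
    (15, 0.2),  # framing problems with stakeholders
    (10, 0.1),  # strategic recommendations and presentations
]

print(round(exposure_score(production_heavy), 2))  # ~0.67
print(round(exposure_score(judgment_heavy), 2))    # ~0.27
```

The point of the toy numbers is the gap: two people logging identical hours under the same title can land at very different scores once you weight each task by how automatable it is.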

The bigger picture on what AI can and cannot do today, which helps you understand where the task-level line currently sits, is covered in Which Parts of Your Job AI Can Do Today and Which It Still Cannot.


Not sure where your role actually stands with AI? I built MedScopeHub’s free AI Impact Assessment specifically for this. It gives you a personalized score, shows your exact risk and leverage areas, and builds you a custom action plan in minutes. Take it free at MedScopeHub.com.


Frequently Asked Questions

Does having the same job title as someone mean you face the same AI risk?

No. Job titles describe a general category of work, not the specific tasks any individual spends their day doing. Two people with the same title can have significantly different AI risk profiles based on what they actually do, how relationship-dependent their contribution is, how visible their value is to decision-makers, and whether they are actively using AI tools to evolve their own role. The title is a starting point for inquiry, not an answer.

Can two people doing the same job have different levels of AI exposure based on soft skills?

Absolutely. The depth of trust relationships someone has built, how visibly they contribute to judgment-heavy work, and how effectively they communicate and advise all substantially affect their practical AI risk, even when the formal job description is identical. Soft skills are not a soft protection against AI risk. They are one of the clearest and most durable sources of career resilience in this environment.

If I am the “James” in this situation, what should I do?

Start with an honest audit of what you actually spend your time on and score it against the task-level risk criteria. Then identify the highest-value, most protected work in your organization that you are not currently doing but could plausibly grow into. Use AI tools to recover hours from the production work and redirect them toward building the relationships and developing the judgment-heavy skills that are harder to automate. This is not a quick fix, but it is a clear and achievable direction from where you are now.


© 2026 MedScopeHub