Mathematical framework

METR benchmarks measure AI capability growth and serve as the baseline for the model's task horizon estimates. The calculator loads the latest METR dataset, identifies the current SOTA model (e.g., GPT-5.1-Codex-Max, November 2025), and derives doubling time via linear regression across SOTA points (typically ~6-7 months). Reliability penalties shrink the usable horizon: at 95% reliability, a ~162-minute median horizon becomes ~4.8 minutes (33x shorter), or ~235x shorter at 99% reliability. See the Guide for background.

Jobs are decomposed into five task duration buckets: [5, 30, 120, 360, 720] minutes. The hazard channel opens when AI can reliably complete enough of these categories based on your task weights.

METR data provides the starting task length, reliability penalty, and doubling time. Your questionnaire answers set domain friction and contribute to the industry slider, which together shrink the effective horizon that feeds the task gates. All coefficients can be tweaked in the Model Tuning section.

Survival relationship
$$P_{\text{loss}}(t) = 1 - \exp\left(-\int_0^t \lambda_{\text{total}}(s)\, ds\right)$$
Stacked hazard model
$$\lambda_{\text{total}}(s) = \lambda_{\text{AI}}(s) + \lambda_{\text{macro}}(s) + \lambda_{\text{firm}}(s) + \lambda_{\text{role}}(s) + \lambda_{\text{personal}}(s)$$
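Taken together, the survival relationship and the stacked hazard model can be sketched numerically. This is a minimal illustration, not the calculator's implementation; the constant channel rates below are placeholders, not model output.

```python
import math

def p_loss(hazards, t_years, dt=0.01):
    """P_loss(t) = 1 - exp(-integral of lambda_total(s) ds from 0 to t),
    approximated with a left Riemann sum. `hazards` is a list of
    callables lambda_k(s), each in units of 1/year."""
    steps = int(round(t_years / dt))
    integral = sum(sum(h(i * dt) for h in hazards) * dt for i in range(steps))
    return 1.0 - math.exp(-integral)

# Illustrative constant channels (placeholder rates):
channels = [
    lambda s: 0.10,  # lambda_AI
    lambda s: 0.02,  # lambda_macro
    lambda s: 0.01,  # lambda_firm
    lambda s: 0.01,  # lambda_role
    lambda s: 0.01,  # lambda_personal
]
print(round(p_loss(channels, 5.0), 3))  # constant 0.15/yr over 5 years
```

With a constant total hazard of 0.15/yr, five-year loss probability is $1 - e^{-0.75} \approx 0.53$; in the real model $\lambda_{\text{AI}}$ grows over time, so the integral accelerates.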

Model Flow: How Your Inputs Become Predictions

Your Inputs → Calculations → Results

Inputs:
  • Questionnaire: your answers to Q1–Q19 (AI capability, task characteristics, friction factors, and adaptability questions) shape all outputs.
  • Hierarchy level: levels 1–5, your position on the organizational ladder. Affects compression vulnerability and task distribution patterns.
  • Sliders: reliability and industry friction. These adjust how strict automation needs to be (reliability) and how much harder your industry is to automate (friction).

Calculations:
  • Task distribution: how your job splits across the five task duration buckets (short to long). Driven by hierarchy level, Q5 (decomposability), Q6 (standardization), and Q7 (context); can be overridden manually in the calculator.
  • Automation timing: when AI becomes technically capable of doing your job (the basis of the blue curve). Driven by the task buckets, the reliability slider, industry friction, and Q2/Q6–Q16.
  • Compression risk: job loss from task reallocation to AI-amplified seniors (a green-curve modifier). Driven by Q10 (reallocation ease), Q4–Q9 (AI learnability), and hierarchy vulnerability.

Results:
  • Blue curve: a timeline estimate of your role's technical automation feasibility, i.e., when AI becomes capable enough to perform your job, based on METR's task horizon data, your task distribution, and reliability/friction settings.
  • Green curve: a timeline estimate of your actual role elimination, i.e., when job loss happens in practice. Combines the blue curve, implementation delay, and compression hazard; can arrive earlier than the blue curve if compression is high.

Model Defaults

| Parameter | Value |
|---|---|
| $H_{50,0}$ | ~162 min (GPT-5.1-Codex-Max SOTA) |
| $D$ (doubling, months) | ~6–7 (computed from METR data) |
| $r$ (reliability) | 0.95 (default) |
| $L_i$ (gate thresholds) | 5, 30, 120, 360, 720 min |
| $w_i$ (weights) | 15%, 30%, 30%, 15%, 10% |
| $s$ (softness) | 0.35 (lower = sharper transition; range 0.20–0.55) |
| $h_{\text{AI}}$ (max hazard) | 0.45/yr |
| $\gamma$ (steepness) | 8.0 |
| $\theta$ (threshold) | 0.50 |
| userMult range | 0.33× to 3.0× |
| $t$ (timeline) | Years from now |
| $H_r(t)$ | Task horizon at reliability $r$ |
| $A_{\text{job}}$ | Readiness ∈ [0,1] |
| $f(r)$ | Reliability factor |
| Task buckets | <10 min, 10–45 min, 45 min–3 hr, 3–8 hr, >12 hr |
| Hazard floor | 0.03/yr minimum risk |
| Hazard cap | 0.95/yr ceiling after multipliers |
| Domain clamp | Penalty clamped to 0.8–1.6× |
| Compression: readiness mix | 70% immediate / 30% amplified |
| Compression: cap gain | 1.04 (scales with $A_{\text{job}}$) |
| Compression: floor | 0.15 (min $A_{\text{job}}$ to activate) |
| Compression: amp | 2.0 (max productivity boost) |
| Compression: gate | $\theta_c$=0.33, $\gamma_c$=6.0, $h_{\max,c}$=0.45/yr |

Friction Decay Parameter

The model tracks how quickly organizational barriers weaken over time using a decay rate (λ). This parameter is calculated from your questionnaire responses about company adoption readiness, infrastructure, labor market dynamics, and role characteristics:

Friction decay rate
$$\lambda = 0.02 + 0.01 \times (n_{12} - 0.5) + 0.01 \times (n_{15} - 0.5) + 0.005 \times (n_{13} - 0.5) + 0.005 \times (n_{14} - 0.5) - 0.01 \times (n_{11} - 0.5) - 0.01 \times (n_{10} - 0.5)$$

where $n_i$ is the normalized answer to question $i$ (Likert 1-5 scaled to 0-1), and $\lambda$ is clamped to $[0.005, 0.05]$

Q12 (Physical presence): +0.01 weight
Q15 (Labor market tightness): +0.01 weight
Q13 (Company AI adoption): +0.005 weight
Q14 (Labor cost pressure): +0.005 weight
Q11 (Human judgment): -0.01 weight
Q10 (Task reallocation): -0.01 weight
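The decay-rate formula transcribes directly to code, assuming the stated Likert normalization $n_i = (\text{answer} - 1)/4$:

```python
def norm(likert):
    """Map a Likert answer 1-5 onto [0, 1]."""
    return (likert - 1) / 4

def friction_decay_rate(q10, q11, q12, q13, q14, q15):
    """Friction decay rate lambda, clamped to [0.005, 0.05]."""
    lam = (0.02
           + 0.01  * (norm(q12) - 0.5)   # physical presence
           + 0.01  * (norm(q15) - 0.5)   # labor market tightness
           + 0.005 * (norm(q13) - 0.5)   # company AI adoption
           + 0.005 * (norm(q14) - 0.5)   # labor cost pressure
           - 0.01  * (norm(q11) - 0.5)   # human judgment
           - 0.01  * (norm(q10) - 0.5))  # task reallocation
    return min(0.05, max(0.005, lam))

print(friction_decay_rate(3, 3, 3, 3, 3, 3))  # all-neutral answers -> 0.02
```

All-neutral answers land exactly on the 0.02 baseline; fully adoption-friendly answers push the rate toward the 0.05 cap.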

This parameter controls how quickly implementation barriers shrink as AI capability grows. The model implements dynamic friction decay where implementation delay decreases exponentially: $\Delta(t) = \Delta_0 \cdot e^{-\lambda t}$. At time $t$, the green curve (actual job loss) equals technical feasibility at the earlier time $t - \Delta(t)$. Higher $\lambda$ means barriers collapse faster once AI crosses initial thresholds, while lower $\lambda$ means barriers persist even as capability advances. This captures how regulatory frameworks and organizational inertia may initially delay adoption by several years, though these delays shrink as AI improves and more companies adopt it.

The decay accelerates once $\lambda$ exceeds 0.025 (the knee point). Two additional parameters shape the dynamics: a saturation boost (0.6) that amplifies decay as capability matures, and a capability gain factor (0.8) that links delay compression to job readiness growth. Together these determine how quickly barriers collapse once AI capability and organizational readiness combine.

Role Presets

Different roles face different automation barriers. Software engineering maps most directly to AI's training data and requires less physical presence, so it gets lower friction (1.05x). Legal work and traditional engineering involve more tacit knowledge, regulatory constraints, and liability concerns, earning higher friction multipliers (1.45–1.50x). These multipliers reflect how much harder it is to automate each domain compared to software.

Two-layer friction system: Industry friction applies to all roles in a sector (e.g., finance ~1.35×), while your questionnaire adds role-specific friction from tacit knowledge, physical presence, and context. Both layers multiply together; friction presets widen the domain clamp, but the capability doubling rate itself still comes from METR data.

  • Domain friction (preset baseline): Role presets set an initial friction multiplier (software ~1.05x, admin/data ~1.10x, finance/consulting ~1.35x, legal ~1.45x, engineering ~1.50x) before individual job characteristics are considered.
  • Recommended reliability: Presets suggest a reliability starting point per role based on typical error tolerance and stakes (finance/legal 97–98%, software/data 92–93%, creative/customer service 88–90%). Seniority adjusts these baselines: entry-level roles tolerate slightly lower reliability (−0.05), while executive decisions demand higher confidence (+0.05). Higher reliability requirements significantly reduce effective task horizon. You can override both friction and reliability in the UI to test different scenarios.

Beyond the preset baseline, the calculator derives a role-specific domain alignment penalty from your answers. Digitized, decomposable, and standardized work (Q4/Q5/Q6 high) reduces the penalty, while context dependence, tacit knowledge, judgment/relationship load, and physical presence (Q7/Q9/Q11/Q12 high) increase it. The model forms a signed weighted sum and maps it to a penalty multiplier via an exponential, clamped to a reasonable range. This penalty multiplies with the sector slider, so domain misalignment adds to friction.

AI hazard core
$$\lambda_{\text{AI}}(s) = \frac{h_{\max}}{1 + \exp\!\big(-\gamma (A_{\text{job}}(s) - \theta)\big)} \cdot M_{\text{user}}(s)$$

The hazard activates once AI capability crosses a coverage threshold. Two gates control this: task-level gates determine which individual tasks AI can handle, while a job-level gate opens when enough of your job is automatable to justify replacement. Here's the sequence:

  1. Task-level gates open gradually: As AI capability $H_r(t)$ grows via METR's doubling trend, each task bucket's gate $G_i(H)$ opens. Short tasks automate first (gates open when $H$ exceeds their threshold $L_i$), then medium tasks, then long tasks. Your personalized weights $w_i$ determine how much each gate contributes to overall readiness.
  2. Job readiness accumulates: $A(t) = \sum w_i G_i(H)$ sums up all the open gates, weighted by your job profile. This gives a number between 0 and 1 representing what fraction of your job AI can perform.
  3. Job-level hazard gate activates: When $A(t)$ crosses the coverage threshold $\theta$ (typically 0.50), the hazard function $\lambda_{\text{AI}}(t)$ rapidly increases via the logistic gate. Before this point, hazard is near zero; after, it approaches the maximum $h_{\max} \times M_{\text{user}}$.
Task horizon
$$H_r(t) = \frac{H_{50,0} \cdot 2^{(t \cdot 12)/D}}{p_{\text{domain}} \cdot f_{\text{industry}} \cdot f(r)}$$

where $p_{\text{domain}}$ = role-specific penalty from questionnaire, $f_{\text{industry}}$ = industry friction slider

The numerator's exponential term $2^{(t \cdot 12)/D}$ doubles roughly every 6–7 months per METR data—this is the capability growth. Domain friction, industry friction, and the reliability penalty are constant divisors that scale down this baseline; they don't affect the growth rate, only the starting point.

Reliability penalty: $\;f(r)=e^{-\sigma \cdot \text{logit}(r)}$ with $\sigma = \ln(H_{80}/H_{50}) / \ln(4)$; since $H_{80} < H_{50}$, $\sigma$ is negative and $f(r) > 1$ for $r > 0.5$. At 95% reliability, $f(r)\approx 33.4$, so the median 162-minute horizon becomes ~4.8 minutes of production-ready work.
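The reliability penalty and effective horizon can be sketched as follows. The $H_{80}/H_{50}$ ratio of ~0.1916 is an assumption chosen to reproduce the ~33× penalty quoted above, and $D = 6.5$ months is a placeholder inside the stated 6–7 month range.

```python
import math

def logit(r):
    return math.log(r / (1 - r))

# sigma is negative (H80 < H50), so f(r) > 1 above 50% reliability.
# The 0.1916 ratio is assumed, tuned to match the ~33x figure in the text.
sigma = math.log(0.1916) / math.log(4)

def reliability_factor(r):
    return math.exp(-sigma * logit(r))

def horizon(t_years, H0=162.0, D=6.5, p_domain=1.0, f_industry=1.0, r=0.95):
    """H_r(t): capability doubles every D months, then constant divisors
    (domain penalty, industry friction, reliability) scale it down."""
    growth = 2 ** ((t_years * 12) / D)
    return H0 * growth / (p_domain * f_industry * reliability_factor(r))

f = reliability_factor(0.95)
print(round(f, 1), round(162 / f, 2))  # ~33x penalty, ~4.85 min usable horizon
```

Note that the divisors only shift the starting point; the doubling cadence in the numerator is untouched, which is why friction delays rather than prevents gate openings.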

Domain penalty: weighted Q4/5/6 (−) vs Q7/9/11/12 (+) → exponential penalty clamped (default 0.8–1.6×; widened by friction presets). Industry slider multiplies on top. Neutral answers include a modest baseline bias (~1.2×) before clamping to reflect average domain mismatch.

  • Personalized task weights $w_i$: Your answers to Q5 (task decomposability), Q6 (task standardization), Q7 (context dependency), and your hierarchy level determine how your job breaks down across the five buckets. Entry-level roles skew toward shorter tasks; executive roles skew toward longer tasks. Highly structured jobs concentrate weight in the short buckets; complex jobs shift weight to longer buckets. High-context work means more weight in the 1–3 hr and >12 hr buckets. The weights always sum to 1.0 and represent what fraction of your job falls into each duration category.
  • Gate softness parameter $s$: Controls how sharply tasks transition from "AI can't do this" to "AI can do this". Entry-level structured work has sharper transitions; senior complex work has smoother, more gradual automation curves. Base value is 0.35, adjusted by seniority (±0.03 per level) and task complexity (Q5, Q6). Range: [0.20, 0.55].
  • Readiness $A_{\text{job}}(t)$: $$A_{\text{job}}(t) = \sum_i w_i G_i(H_r(t))$$ This weighted sum tells us what fraction of your job AI can perform at time $t$. If you're an entry-level role with 40% of tasks under 5 minutes and AI's capability $H_r(t)$ has reached 10 minutes, the <5 min gate is fully open ($G_1 \approx 1.0$), contributing $0.40 \times 1.0 = 0.40$ to readiness. Jobs with more long-duration tasks need higher $H_r$ before their readiness $A(t)$ crosses the coverage threshold $\theta$, delaying automation.
  • Coverage bar $\theta$: Baseline is 0.50, adjusted by seniority (thetaLift from -0.015 to +0.040), domain alignment (coefficient 0.09, ~±0.09 at extremes), and role explicitness (coefficient 0.08, ~±0.08 at extremes). Data-rich jobs, standardized workflows, and fast feedback loops lower this threshold; tacit, high-context, and senior roles push it higher. Clamped to [0.50, 0.82]. This threshold controls how much of the job must be automatable before displacement risk activates.
  • User multipliers $M_{\text{user}}(s)$: Questionnaire responses exponentiate into amplifier and friction sums, then are capped between $0.33\times$ and $3\times$ for responsiveness.
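The gate-to-hazard pipeline above can be sketched under the default parameters. The log-logistic gate form is an assumption; the document specifies the thresholds $L_i$ and softness $s$ but not the exact gate function.

```python
import math

L = [5, 30, 120, 360, 720]          # gate thresholds (minutes)
w = [0.15, 0.30, 0.30, 0.15, 0.10]  # default task weights

def gate(H, L_i, s=0.35):
    """Assumed log-logistic gate: opens as capability H passes L_i."""
    return 1 / (1 + math.exp(-math.log(H / L_i) / s))

def readiness(H):
    """A_job = sum of w_i * G_i(H): fraction of the job AI can perform."""
    return sum(w_i * gate(H, L_i) for w_i, L_i in zip(w, L))

def ai_hazard(A, h_max=0.45, gamma=8.0, theta=0.50, m_user=1.0):
    """Logistic job-level hazard gate around coverage threshold theta."""
    return h_max / (1 + math.exp(-gamma * (A - theta))) * m_user

A = readiness(60.0)  # e.g. a 60-minute effective horizon
print(round(A, 2), round(ai_hazard(A), 3))
```

At a 60-minute effective horizon the two short buckets are largely open but readiness sits just below $\theta = 0.50$, so the hazard is still well under $h_{\max}$; a single further doubling changes that quickly.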

How your answers move the curve

Your answers (on a 1–5 Likert scale) are converted so that neutral (3) maps to 0, allowing symmetric effects above and below the midpoint. The model combines these into amplifier and friction scores, then converts them to multipliers ranging from 0.33× to 3.0×.

The prompts fall into four themes:

  • AI readiness (Q1-Q4): High scores strengthen the amplifier channel when you indicate strong capability for AI learning and completing your tasks. Additionally, Q1 (current AI performance), Q2 (example work availability), Q3 (benchmark clarity), and Q4 (work digitization) all contribute to role explicitness ($s_e$), which lowers the coverage threshold $\theta$, making automation viable at lower overall capability levels.
  • Task structure (Q5–Q9): These questions affect the blue curve (technical feasibility) through multiple mechanisms: (1) Q5, Q6 shift the task duration profile (structured → shorter buckets, complex → longer buckets), (2) Q2, Q5, Q6, Q8 contribute to $s_e$ which lowers $\theta$ (data-rich, decomposable, standardized, fast-feedback jobs become viable at lower capability), and (3) Q7, Q9 increase the domain penalty $p_{\text{domain}}$ (high context, tacit knowledge damp capability). Together, these determine when your specific job crosses the automation threshold.
  • Human moat (Q11-Q12): Relationship intensity and physical presence both load the friction side and stretch the METR baseline via the domain penalty, pushing the curve out.
  • Firm and personal context (Q13-Q19): Company levers shape implementation delay. Job performance (Q19) delays displacement for top performers while also helping re-employment. Adaptability (Q18) and transferability (Q17) drive re-employment probability.

How role clarity affects the threshold: Questions Q1, Q2, Q3, Q4, Q5, Q6, and Q8 (positive factors) are averaged together, while Q7 and Q9 (context dependency and tacit knowledge, the protective factors) are averaged separately. The formula $s_e = 0.65 \times \text{norm}(s_{\text{pos}}) - 0.35 \times \text{norm}(s_{\text{neg}}) + 0.10$ produces a score between 0 and 1. Higher scores (more explicit roles) reduce $\theta$ via the shift $\Delta_{s_e} = 0.10 \times (0.5 - s_e)$, meaning AI needs less total job coverage before automation becomes economically viable.
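The explicitness formula transcribes as follows, assuming the answers in each group are normalized to [0, 1] and averaged before the weighted combination:

```python
def norm(x):
    """Likert 1-5 -> [0, 1]."""
    return (x - 1) / 4

def explicitness(q1, q2, q3, q4, q5, q6, q8, q7, q9):
    """Role explicitness s_e and the resulting coverage-threshold shift."""
    s_pos = sum(map(norm, (q1, q2, q3, q4, q5, q6, q8))) / 7  # explicit factors
    s_neg = (norm(q7) + norm(q9)) / 2                          # protective factors
    s_e = 0.65 * s_pos - 0.35 * s_neg + 0.10
    return s_e, 0.10 * (0.5 - s_e)  # (score, delta_theta)

s_e, d_theta = explicitness(5, 5, 5, 5, 5, 5, 5, 1, 1)  # fully explicit role
print(round(s_e, 2), round(d_theta, 3))
```

A fully explicit role scores $s_e = 0.75$ and lowers $\theta$ by 0.025; a fully neutral questionnaire scores 0.25 and raises it by the same amount.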

The model also adjusts how sharply the hazard ramps up ($\gamma$) based on task characteristics. Structured work, fast feedback, and standardized tasks make the transition sudden; messy, collaborative, or custom work makes it gradual. This affects whether adoption happens quickly once the threshold is crossed or drags out over years.

The green curve: Implementation delay and workforce compression

The green curve (actual job loss) differs from the blue curve (technical feasibility) through two mechanisms: organizational adoption barriers that delay automation, and compression-driven workforce reductions that can cause job loss earlier.

How the green curve is calculated

The model integrates two separate hazard channels to create the green curve:

Green curve total hazard
$$\lambda_{\text{total}}(t) = \lambda_{\text{AI}}(t - \Delta(t)) + \lambda_{\text{compression}}(t)$$
Where implementation delay decreases over time
$$\Delta(t) = \Delta_0 \cdot e^{-\lambda t}$$

This structure captures two different ways jobs disappear:

  • Delayed automation hazard: The AI automation hazard $\lambda_{\text{AI}}$ is evaluated at an earlier time $(t - \Delta(t))$ due to organizational friction. Early in the timeline, implementation barriers create a significant delay ($\Delta_0$), but this delay shrinks exponentially as AI capability grows and adoption accelerates. The friction decay rate $\lambda$ determines how fast these barriers collapse.
  • Compression hazard: A separate hazard channel $\lambda_{\text{compression}}(t)$ that activates when AI makes senior workers productive enough to absorb junior work. This hazard is evaluated at the current time $t$ (no delay) and can cause job loss well before full automation becomes technically feasible. The two hazards are summed and capped at 0.60/year total.

Mechanism 1: Implementation delay (shifts automation hazard forward)

Organizational barriers slow AI automation adoption. The initial delay $\Delta_0$ derives from company context (Q13-Q16: adoption appetite, labor cost pressure, market tightness, infrastructure) and individual leverage (Q19: job performance protects top performers). This delay ranges from 0.3 to 4.0 years depending on your situation.

Initial delay calculation
$$\Delta_0 = \operatorname{clip}\big(1.75 - 1.25\, s_{\text{delay}} + \Delta^{\text{eff}}_{\text{seniority}},\; 0.3,\; 4.0\big)$$
$$\Delta^{\text{eff}}_{\text{seniority}} = \Delta_{\text{seniority}} \cdot \Big[1 + 0.5\, (\operatorname{norm}(Q19) - 0.5)\, \operatorname{sign}(\Delta_{\text{seniority}})\Big]$$

$s_{\text{delay}} \in [-2,2]$ from Q13-Q16; $\Delta_{\text{seniority}} \in \{-0.10,-0.03,+0.06,+0.10,+0.12\}$ years by level; result clamped to 0.3–4.0 years

Dynamic friction decay: The delay doesn't stay constant. As AI capability increases and adoption spreads, organizational barriers shrink over time: $\Delta(t) = \Delta_0 \cdot e^{-\lambda t}$. The decay rate $\lambda$ (typically 0.02–0.05/year) is derived from your questionnaire answers (see Friction Decay Parameter above).
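A sketch of the delay calculation and its decay. The $+0.10$ seniority shift used in the example call is taken from the listed set; which value maps to which level is an assumption beyond the stated endpoints (entry −0.10, Level 5 +0.12).

```python
import math

def initial_delay(s_delay, delta_seniority, q19):
    """Delta_0, clamped to [0.3, 4.0] years. Q19 (performance) scales the
    seniority shift in the direction of its sign."""
    norm_q19 = (q19 - 1) / 4
    eff = delta_seniority * (
        1 + 0.5 * (norm_q19 - 0.5) * math.copysign(1, delta_seniority))
    return min(4.0, max(0.3, 1.75 - 1.25 * s_delay + eff))

def delay_at(t, delta0, lam):
    """Dynamic friction decay: Delta(t) = Delta_0 * exp(-lambda * t)."""
    return delta0 * math.exp(-lam * t)

d0 = initial_delay(s_delay=0.0, delta_seniority=0.10, q19=3)  # neutral, senior-ish
print(round(d0, 2), round(delay_at(5.0, d0, 0.03), 2))
```

With neutral company answers the delay starts near the 1.75-year midpoint plus the seniority shift, and a $\lambda$ of 0.03/yr erodes roughly 14% of it over five years.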

Example delays: An early-adopting entry-level role starts with a 0.3-year delay. A defensive environment with senior leadership starts with a 4.0-year delay. Performance (Q19) adjusts these: high performers get extended delays; low performers face shortened timelines.

Mechanism 2: Workforce compression (earlier job loss via task reallocation)

Compression activates at 33% job coverage versus 50% for direct automation. For conceptual explanation, see the Guide. Technical implementation:

Compression hazard (base calculation)
$$\lambda_{\text{compression}}(t) = \frac{h_{\max,c}}{1 + \exp(-\gamma_c (R_c(t) - \theta_c))} \cdot V_{\text{hierarchy}} \cdot G_{\text{readiness}}(t)$$
Where compression readiness uses a hybrid formula with capability scaling:
$$R_c(t) = s_{\text{realloc}} \times \big(0.7 + 0.3(1 + B_{\text{amp}})\big) \times \big(1 + 1.04 \cdot A_{\text{job}}(t)\big)$$
And readiness gate prevents premature compression:
$$G_{\text{readiness}}(t) = \max\Big(0,\, \min\Big(1,\, \frac{A_{\text{job}}(t) - 0.15}{1 - 0.15}\Big)\Big)$$
Amplification boost is:
$$B_{\text{amp}} = f_{\text{effective}} \cdot A_{\text{job}}(t) \cdot \text{norm}(Q1) \cdot 2.0$$
Where effective digital fraction accounts for AI learnability:
$$f_{\text{effective}} = \text{norm}(Q4) \times \max\big(0.1,\, 1 - 0.35\,\text{norm}(Q7) - 0.30\,\text{norm}(Q9) - 0.20\,(1{-}\text{norm}(Q5)) - 0.15\,(1{-}\text{norm}(Q6))\big)$$

The hybrid formula means compression readiness has two components: 70% comes from reallocation feasibility alone (immediate effect), and 30% grows over time as AI amplifies senior productivity. This ensures Q10 has a noticeable immediate impact while preserving the time-dependent behavior where compression risk accelerates as AI becomes more capable.

The capability scaling factor $(1 + 1.04 \cdot A_{\text{job}}(t))$ further amplifies compression readiness as AI capability grows. When AI can perform none of your job ($A_{\text{job}} = 0$), the factor is 1.0 (no scaling). When AI can perform your entire job ($A_{\text{job}} = 1$), the factor reaches 2.04, meaning compression readiness roughly doubles at full capability. This captures how organizational willingness to restructure accelerates as AI proves itself capable.

The readiness gate $G_{\text{readiness}}(t)$ ensures compression hazard remains at zero until AI reaches at least 15% job readiness ($A_{\text{job}} \geq 0.15$), then ramps linearly to full strength by 100% readiness. This prevents spurious compression risk predictions when AI capability is still minimal.

  1. Reallocation feasibility ($s_{\text{realloc}}$): Combines Q10 (direct reallocation question: 50% weight), task structure (Q5: 18%, Q6: 12%, Q7: 8% inverted), tacit knowledge (Q9: 10% inverted), and physical presence (Q12: 2% inverted). Q10 is the dominant factor since it directly measures how easily your responsibilities could be redistributed to existing team members. Higher scores mean your work can be easily absorbed by others. Range: [0, 1].
  2. Senior productivity amplification ($B_{\text{amp}}$): AI only amplifies productivity for work that is both digital and AI-learnable. Raw digitization (Q4) is discounted by factors that make digital work harder for AI to learn from: context dependency (Q7: 35% weight), tacit knowledge (Q9: 30% weight), low decomposability (Q5: 20% weight), and low standardization (Q6: 15% weight). A senior consultant whose deliverables are 100% digital but require extensive context, relationships, and tacit judgment gets minimal amplification because while the work is digital, AI cannot learn to replicate it by observing outputs. Conversely, highly digital, standardized, decomposable work with low context enables strong amplification. The boost ranges from 0 to 2.0 (up to +200% output, or 3× productivity when used in $(1 + B_{\text{amp}})$), scaled by current AI performance (Q1) and job readiness $A_{\text{job}}(t)$. This factor grows over time as AI capability increases.
  3. Hierarchy vulnerability ($V_{\text{hierarchy}}$): Your position in the workflow determines exposure. Calculated as $(6 - \text{level}) / 5$. Level 1 (many layers above) = 100% vulnerable. Level 4 = 40% vulnerable. Level 5 (top of domain, no one above can do your work) = 20% vulnerable. A principal engineer who owns a system faces minimal compression risk despite being in a "senior" role title.
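The compression pieces above can be assembled into one sketch. Names are illustrative, and $s_{\text{realloc}}$ is passed in directly rather than derived from its weighted questions.

```python
import math

def norm(x):
    return (x - 1) / 4  # Likert 1-5 -> [0, 1]

def compression_hazard(A_job, q1, q4, q5, q6, q7, q9, s_realloc, level,
                       h_max=0.45, gamma=6.0, theta=0.33):
    # Effective digital fraction: digitization discounted by AI-learnability.
    f_eff = norm(q4) * max(0.1, 1 - 0.35 * norm(q7) - 0.30 * norm(q9)
                                - 0.20 * (1 - norm(q5)) - 0.15 * (1 - norm(q6)))
    b_amp = f_eff * A_job * norm(q1) * 2.0                # amplification boost
    r_c = s_realloc * (0.7 + 0.3 * (1 + b_amp)) * (1 + 1.04 * A_job)
    g = max(0.0, min(1.0, (A_job - 0.15) / 0.85))         # readiness gate
    v = (6 - level) / 5                                   # hierarchy vulnerability
    return h_max / (1 + math.exp(-gamma * (r_c - theta))) * v * g

hz = compression_hazard(0.5, q1=4, q4=5, q5=4, q6=3, q7=3, q9=3,
                        s_realloc=0.7, level=2)
print(round(hz, 3))
```

At 50% job readiness this Level-2 profile already carries a meaningful compression hazard, even though the direct-automation gate at $\theta = 0.50$ is only just opening.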

Parameters differ from full automation: The compression gate opens at $\theta_c = 0.33$ (vs. $\theta \approx 0.50$ for automation), has gentler slope ($\gamma_c = 6.0$ vs. $\gamma = 8.0$), and equal maximum hazard ($h_{\max,c} = 0.45$/year, matching automation's $h_{\max} = 0.45$/year). For vulnerable positions, AI-driven workforce compression poses significant risk alongside direct automation—earlier onset but more gradual ramp-up. Combined with automation hazard, the total is capped at 0.60/year to reflect real-world institutional friction that prevents instant mass layoffs.

Re-employment probability

Re-employment starts at a 60% baseline and blends five forces:

  • Adaptability core (Q17, Q18, Q19): Transferability, learning speed, and performance move odds up or down (0.2 per weighted point, clamped to avoid runaway extremes).
  • Task structure (Q5, Q6, Q8): Highly decomposable, standardized, and feedback-rich tasks are easier to automate and harder to pivot from, reducing re-employment odds by up to ~12 percentage points at the defaults. More tacit, varied work with slower feedback loops protects re-employment prospects.
  • Labor tightness (Q15): A modest multiplier (+/-5% at defaults) reflecting how forgiving the market is.
  • Global AI saturation penalty: As global AI capability matures (measured by when the blue technical feasibility curve hits 50%), the total number of available jobs shrinks. This penalty affects everyone equally based on the global state of AI: $\text{penalty}_{\text{global}} = p_{\max}\left(\dfrac{\max(0, p_{\text{blue,global}} - f_g)}{1 - f_g}\right)^{\alpha_g}$ with defaults $p_{\max}=0.35$, floor $f_g=0.20$, exponent $\alpha_g=1.3$.
  • Relative timing bonus: Being displaced later than the global median provides an advantage—you've watched peers navigate transitions and can learn from their paths. The bonus scales linearly with delay: $\text{bonus}_{\text{relative}} = b_{\max} \cdot \min\left(1, \dfrac{t_{\text{yours}} - t_{\text{global}}}{s_r}\right)$ where $b_{\max}=0.15$ and the scale $s_r=5.0$ years at defaults. Full bonus applies when displaced 5+ years after the global median.

Seniority applies a small boost, and results clamp between 10% and 85%.
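The two global re-employment adjustments have closed forms, sketched here with the stated defaults. The zero-clamp on the timing bonus for earlier-than-median displacement is an assumption.

```python
def saturation_penalty(p_blue_global, p_max=0.35, floor=0.20, alpha=1.3):
    """Global AI saturation penalty: grows as the global blue curve
    passes the floor f_g, reaching p_max at full saturation."""
    return p_max * (max(0.0, p_blue_global - floor) / (1 - floor)) ** alpha

def timing_bonus(t_yours, t_global, b_max=0.15, scale=5.0):
    """Relative timing bonus: scales linearly with displacement delay,
    saturating at b_max after `scale` years (clamped at 0 if earlier)."""
    return b_max * min(1.0, max(0.0, (t_yours - t_global) / scale))

print(round(saturation_penalty(0.60), 3), round(timing_bonus(8.0, 5.0), 3))
```

At 60% global feasibility the penalty is about 14 points, while being displaced three years after the median recovers 9 of the possible 15 bonus points.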

Hierarchy levels and seniority effects

Your position in the organizational hierarchy significantly affects both your automation timeline and compression vulnerability. The model defines five hierarchy levels, each with distinct parameter adjustments:

Level 1 (Many layers above): Entry-level; highest compression risk
Level 2 (Several layers above): Junior; moderate compression risk
Level 3 (Few above or peers): Mid-level; baseline parameters
Level 4 (Top of domain): Senior; reduced compression risk
Level 5 (Unique/irreplaceable): Executive/owner; minimal compression risk

Seniority profile adjustments

Each hierarchy level applies four adjustments to the base model parameters:

  • Theta lift ($\Delta\theta$): Adjusts the coverage threshold. Entry-level: -0.015 (automation triggers earlier); Level 5: +0.040 (requires higher AI capability before hazard activates). This reflects that senior roles typically require more comprehensive automation before replacement is viable.
  • Hazard shield: Percentage reduction to base hazard rate. Ranges from 0% (Level 1) to 5% (Level 5). Senior roles have more organizational protection and replacement friction.
  • Delay shift ($\Delta_{\text{seniority}}$): Adjusts implementation delay. Entry-level: -0.10 years (faster replacement); Level 5: +0.12 years (longer runway). Modified by job performance (Q19).
  • Re-employment boost: Multiplier on re-employment probability. Ranges from 1.00× (Level 1) to 1.09× (Level 5). Senior experience improves job market prospects.

Task distribution shifts by seniority

Hierarchy level also shifts your task duration profile. Entry-level roles skew toward shorter, more automatable tasks; senior roles skew toward longer, strategic tasks:

Seniority task weight shifts (added to base weights)
| Level | <10 min | 10-45 min | 45 min-3 hr | 3-8 hr | >12 hr |
|---|---|---|---|---|---|
| 1 (Entry) | +12% | +8% | -5% | -8% | -7% |
| 2 (Junior) | +5% | +3% | -2% | -3% | -3% |
| 3 (Mid) | 0% | 0% | 0% | 0% | 0% |
| 4 (Senior) | -6% | -4% | +3% | +4% | +3% |
| 5 (Exec) | -12% | -9% | +6% | +8% | +7% |

These shifts are applied to base weights before normalization. Entry-level concentrates work in short tasks; executive level shifts to longer strategic tasks.
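A sketch of how the shifts might be applied and renormalized; flooring at zero before renormalization is an assumption for base weights smaller than a negative shift.

```python
SHIFTS = {  # added to base weights, per hierarchy level
    1: [+0.12, +0.08, -0.05, -0.08, -0.07],
    2: [+0.05, +0.03, -0.02, -0.03, -0.03],
    3: [ 0.00,  0.00,  0.00,  0.00,  0.00],
    4: [-0.06, -0.04, +0.03, +0.04, +0.03],
    5: [-0.12, -0.09, +0.06, +0.08, +0.07],
}

def seniority_weights(base, level):
    """Apply the seniority shift, floor at zero, renormalize to sum to 1."""
    shifted = [max(0.0, b + d) for b, d in zip(base, SHIFTS[level])]
    total = sum(shifted)
    return [x / total for x in shifted]

base = [0.15, 0.30, 0.30, 0.15, 0.10]  # default weights from Model Defaults
print([round(x, 2) for x in seniority_weights(base, 1)])
```

For the default base weights an entry-level profile ends up with roughly 65% of its weight in the two short buckets, versus 45% at mid-level.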

The combined effect is substantial: an entry-level role faces earlier automation (lower theta), faster implementation (negative delay shift), higher compression vulnerability (100% hierarchy exposure), and a task profile weighted toward short, easily-automated tasks. A Level 5 executive faces the opposite on all dimensions.

Worked Example: Mid-Level Data Analyst

Taylor is a Level 2 data analyst at a 500-person SaaS company. Key questionnaire responses: Q1=4 (strong AI tools in domain), Q4=5 (fully digital), Q5=4 (decomposable), Q6=3 (mixed standardization), Q7=3 (moderate context), Q10=4 (easily reallocated).

Calculation summary
| Step | Result |
|---|---|
| Task distribution | ~60% in buckets 1-2 (short tasks), ~11% in buckets 4-5 (long tasks) |
| Domain friction | 0.80× (high digital alignment, modest context) |
| Effective starting capability | ~5.2 min (after 95% reliability penalty) |
| 50% job readiness reached | ~20 months (~1.7 years; automation hazard activates) |
| 33% job readiness reached | ~14 months (~1.2 years; compression hazard activates) |
| Hierarchy vulnerability | 80% (Level 2 → $(6-2)/5$) |
| Compression readiness | ~1.15 at 50% job readiness |
| Implementation delay | ~1.7 years initial, decaying at ~0.018/year |

Taylor's displacement forecast

Median timeline: ~3.1 years

Risk breakdown: ~21% by year 2, ~47% by year 3, ~75% by ~4.3 years, ~84% by year 5

Re-employment: Moderate (neutral adaptability with moderately decomposable work)

Guide to Model Tuning

The tuning panel exposes every parameter used in the hazard and compression calculations. Presets are grouped by the math they touch so you can mix capability, friction, compression, and rollout assumptions independently:

  • Model Capability: Conservative / Baseline / Fast-Takeoff adjust only capability growth (METR doubling time and gate softness).
  • Task Friction: Less / Baseline / More rescale domain and industry penalties that damp effective capability before it reaches the gates.
  • Workforce Compression: Less / Baseline / More move the green-curve mechanics (reallocation weights, amplification, gate threshold, readiness floor).
  • Adoption Guardrails: More drag / Baseline / Less drag tighten or loosen hazard caps/thresholds and implementation delay decay to explore deployment speed.

These presets only set parameter values; you can override any field afterward. They do not change the underlying equations, only the assumed coefficients for capability, friction, compression, and rollout speed.