The Professional Impact Dictionary
The Law: If your bullet has no verb, you did nothing. If it has no metric, you achieved nothing. If it has no context, nobody cares.
This is not a list of words. This is the formula for translating work into value.
Every bullet on your resume must follow the formula. No exceptions.
The Formula (Non-Negotiable)
Verb → Context → Metric → Outcome
- Verb: What you actually did (Led, Built, Reduced, Scaled)
- Context: What made it hard (System complexity, timeline pressure, scale, constraint)
- Metric: The quantified result ($, %, time saved, volume)
- Outcome: Why it mattered (revenue, cost, speed, risk, retention)
The Anti-Pattern (What 90% Write)
❌ "Responsible for sales" → No verb. No action. Delete.
❌ "Worked on infrastructure" → No metric. No proof. Delete.
❌ "Improved team performance" → No context. By how much? 2%? 200%? Delete.
❌ "Managed projects" → Generic. Every PM writes this. Delete.
The Pattern (What Gets Hired)
✅ "Drove [Verb] enterprise revenue in EMEA [Context] by 30% YoY [Metric] by launching 3 new partnerships, adding $4.5M in new revenue [Outcome]"
✅ "Reduced [Verb] cloud infrastructure costs across 50-service architecture [Context] by 35% [Metric] by consolidating compute and eliminating zombie resources, saving $400K annually [Outcome]"
✅ "Built [Verb] data pipeline under 3-week deadline [Context] processing 2M records/day with 99.9% uptime [Metric] enabling real-time analytics for sales team [Outcome]"
The difference: The first set is job duties. The second set is business value. Only the second gets hired.
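The Law above can even be turned into a rough self-check: a bullet with no digit, dollar sign, or percent fails the metric test, and a filler opener fails the verb test. A minimal heuristic sketch in Python (the word list and regex are illustrative, not a real ATS rule):

```python
import re

# Illustrative filler phrases; extend with your own weak openers.
WEAK_OPENERS = ("responsible for", "worked on", "helped", "managed")

def bullet_flags(bullet: str) -> list[str]:
    """Return reasons a resume bullet fails the Verb/Metric test (heuristic)."""
    flags = []
    # No number, dollar sign, or percent anywhere -> no metric, no proof.
    if not re.search(r"[\d$%]", bullet):
        flags.append("no metric")
    # Opens with a filler phrase instead of a strong verb.
    if bullet.strip().lower().startswith(WEAK_OPENERS):
        flags.append("weak opener")
    return flags

print(bullet_flags("Responsible for sales"))  # ['no metric', 'weak opener']
print(bullet_flags("Closed $480K ARR across 6 accounts, exceeding quota by 22%"))  # []
```

Anything this filter flags, a recruiter flags faster.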
📖 Dictionary by Category
Leadership & Management
The Problem: Most resumes say "Managed team" or "Led department". This is meaningless without scope and outcome.
The Fix: Add team size, constraints, and measurable outcomes.
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Managed | Directed / Orchestrated | Orchestrated 3-department collaboration (Eng, Sales, Ops) to launch product 2 months ahead of schedule. |
| Led | Spearheaded / Drove | Spearheaded company-wide migration to Agile, reducing sprint cycle time by 40%. |
| Helped | Enabled / Facilitated | Enabled 12-person team to achieve 95% on-time delivery through daily standups and blocker resolution. |
| Responsible for | Oversaw / Commanded | Oversaw $5M annual budget allocation across 4 product lines with 98% forecast accuracy. |
Real Bullets (Gold Standard):
- Directed cross-functional team of 15 engineers and designers to ship MVPs for 3 SaaS products in 6 months, driving $2M ARR.
- Orchestrated quarterly all-hands planning (100+ attendees), aligning OKRs across 5 departments and reducing conflicting priorities by 60%.
- Spearheaded talent acquisition strategy, hiring 20 mid-to-senior engineers in Q2/2025, reducing avg. time-to-hire from 90 to 45 days.
Stakeholder Coordination: Proving Communication Through Alignment
Most resumes claim "excellent communication" without proof. Stakeholder management is measurable through stakeholder count + meeting cadence + alignment outcomes. The formula: who you coordinated, how often, what changed. This proves coordination ability, not just activity. For complete stakeholder management proof with role-specific examples, see our Stakeholder Management Metrics guide.
Stakeholder Management Verbs:
- Coordinated / Aligned / Facilitated / Resolved
Proof Pattern: [Action] + [Stakeholder count + departments] + [Cadence] + [Outcome: conflict reduced, decision velocity, time saved]
Examples:
- Coordinated 15 cross-functional stakeholders (Engineering, Product, Sales, Legal) through weekly alignment meetings, reducing conflicting priorities by 40% and accelerating feature launch by 3 weeks.
- Facilitated bi-weekly reviews with 8 department heads, cutting roadmap approval time from 6 weeks to 2 weeks and eliminating 12 cross-team blockers.
- Resolved roadmap conflicts between Engineering and Sales through structured sync process, reducing feature scope changes by 60%.
IC Leadership: Influence Without Authority
Individual contributors without direct reports can prove leadership through initiative ownership, RFC adoption, mentorship outcomes, and cross-team influence. The formula: what you started + who adopted it + what improved. This is leadership through technical decision influence, not title-based authority. For comprehensive IC leadership metrics including RFC adoption and de-risking impact, see our Leadership Without Title guide.
IC Leadership Verbs:
- Authored / Proposed / Initiated / Mentored / Identified
Proof Pattern: [Initiative] + [Adoption count: teams/people] + [Outcome: time saved, risk prevented, quality improved]
Examples:
- Authored RFC for microservices migration strategy adopted by 8 engineering teams, reducing deployment time from 2 hours to 15 minutes.
- Mentored 3 junior engineers through structured 1-on-1s and code reviews, all promoted to mid-level within 12 months.
- Built CLI tool for database migrations adopted by 10 engineers, cutting manual setup time from 2 hours to 5 minutes.
- Identified security vulnerability in auth flow, proposed fix adopted across 10 services, preventing potential breach affecting 2M users.
Engineering & Tech
The Problem: Engineering resumes drift into tool lists. Recruiters do not hire “React”. They hire “shipped X that moved Y metric”.
The Fix: Anchor every technical bullet to an outcome: latency, cost, reliability, conversion, developer velocity, security risk, incident rate.
Strong Verbs (Engineering):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Worked on | Built / Implemented | Implemented event-driven pipeline for 12 services, reducing job latency by 45%. |
| Fixed | Resolved / Eliminated | Eliminated P1 outage class by adding circuit breakers, cutting incidents by 60%. |
| Improved | Optimized / Accelerated | Optimized Postgres indexes, cutting p95 query time from 900ms to 120ms. |
| Helped | Enabled / Unblocked | Enabled 25 engineers to ship weekly by automating CI checks, saving 30 hrs/week. |
| Maintained | Hardened / Stabilized | Hardened auth flow by rolling out MFA to 92% adoption, cutting account takeover risk. |
Real Bullets (Gold Standard):
- Built feature flag system for 8 microservices, enabling safe rollouts and reducing rollback time from 30 minutes to 3 minutes.
- Implemented caching + pagination on search endpoints, increasing throughput by 2.7× and reducing infra cost by 18%.
- Stabilized CI by eliminating flaky tests (top 20), reducing pipeline failures by 55% and improving deploy frequency.
Engineering Impact: Four Performance Dimensions
Engineering value extends beyond "shipped features." Technical impact is measured across performance (latency/throughput), reliability (uptime/error rates), scalability (load capacity), and technical debt reduction (maintainability improvements). Each dimension proves a different type of business value: speed improvements drive conversion, reliability prevents revenue loss, scalability enables growth, and debt reduction accelerates future development velocity. For comprehensive guidance on quantifying each dimension with calculation methodologies and role-specific examples, see our Engineering Metrics guide.
Performance & Reliability Examples:
- Reduced API response time from 850ms to 120ms (86% improvement), improving checkout conversion by 4%.
- Improved system uptime from 99.2% to 99.8% by implementing retry logic and circuit breakers (60% fewer outages).
- Increased throughput from 500 req/s to 1,200 req/s by implementing connection pooling and caching layer.
- Decreased MTTR from 45 minutes to 12 minutes by building automated rollback and health-check systems.
Scalability & Debt Reduction Examples:
- Redesigned data pipeline to handle 10x traffic growth (from 5k to 50k requests/minute) without infrastructure cost increase.
- Refactored legacy authentication system, cutting average bug-fix time from 8 hours to 3 hours (over 60% reduction).
- Increased test coverage from 45% to 85%, reducing production incidents by 40%.
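Percent claims like "850ms to 120ms (86% improvement)" are easy to sanity-check before they go on a resume. A small helper, assuming the common convention that improvement is measured relative to the baseline:

```python
def pct_improvement(baseline: float, result: float, lower_is_better: bool = True) -> float:
    """Percent improvement relative to the baseline value."""
    delta = (baseline - result) if lower_is_better else (result - baseline)
    return 100.0 * delta / baseline

# Latency: 850ms -> 120ms is an ~86% reduction.
print(round(pct_improvement(850, 120)))  # 86
# Throughput: 500 -> 1,200 req/s is a 140% increase (a 2.4x multiplier).
print(round(pct_improvement(500, 1200, lower_is_better=False)))  # 140
```

Run your own numbers through this before a recruiter does the math for you.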
Mobile Engineering: Platform-Specific Impact Metrics
Mobile engineers face a unique translation challenge: their impact is measured through platform-specific metrics that don't exist in web engineering. App store ratings (4.2→4.7 stars), crash-free rates (98%→99.8%), app binary size reduction, cold launch time optimization, and download-to-retention funnels are the proof categories that separate mobile resumes from generic software engineer resumes. The formula stays identical—Verb + Context + Metric—but the metrics live in a different ecosystem (App Store Connect, Google Play Console, Firebase Crashlytics). For the complete mobile-specific translation framework covering iOS, Android, and cross-platform roles, see our Mobile Developer Resume Guide.
Database administration introduces a proof category where the metric vocabulary centers on availability and recovery: uptime percentages (99.99% = roughly 52 minutes of downtime/year), Recovery Time Objective (RTO) vs. actual recovery time, query performance optimization (p95 latency reduction), storage efficiency gains, and backup/restore validation frequency. The complete DBA-specific translation framework is in our Database Administrator Resume Guide, with ATS-optimized keyword lists covering SQL, NoSQL, cloud databases, and performance tuning in our DBA Resume Keywords guide.
Within the iOS ecosystem, the proof categories narrow to platform-specific signals that generic mobile metrics miss: Swift/SwiftUI proficiency proven through published App Store apps, crash-free rates measured via Xcode Organizer where 99.5%+ is table stakes, and Human Interface Guidelines compliance as an implicit quality signal that separates platform-native developers from cross-platform transplants—see the complete iOS Developer Resume Guide for the iOS-specific translation framework.
The Android translation runs parallel but uses a distinct metric vocabulary: Play Console's Android Vitals where ANR rate below 0.47% and crash rate below 1.09% are Google's own thresholds, App Bundle size optimization as a deliverable metric, and Jetpack Compose migration progress as the clearest signal of modern Android fluency—the complete framework is in our Android Developer Resume Guide.
Site Reliability Engineering introduces a proof category where the metric vocabulary shifts from features shipped to reliability maintained: SLO attainment percentages (99.95% availability = 21.9 minutes of downtime/month), error budget burn rate, Mean Time to Recovery (MTTR) measured in minutes, not hours, and incident response automation coverage. The complete SRE-specific translation framework, including toil reduction quantification and on-call optimization, is in our Site Reliability Engineer Resume Guide, with the full ATS keyword taxonomy covering observability stacks, chaos engineering, and SLI/SLO/SLA terminology in our SRE Resume Keywords guide.
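The downtime figures quoted above follow directly from the SLO percentage. A quick way to verify an availability claim before putting it on a resume (assuming a 365.25-day year and an average-length month):

```python
MINUTES_PER_YEAR = 365.25 * 24 * 60        # 525,960 minutes
MINUTES_PER_MONTH = MINUTES_PER_YEAR / 12  # ~43,830 minutes

def downtime_budget(slo_pct: float, period_minutes: float) -> float:
    """Allowed downtime in minutes for a given availability SLO over a period."""
    return period_minutes * (1.0 - slo_pct / 100.0)

print(round(downtime_budget(99.95, MINUTES_PER_MONTH), 1))  # ~21.9 min/month
print(round(downtime_budget(99.99, MINUTES_PER_YEAR), 1))   # ~52.6 min/year
```

If the downtime you actually logged exceeds this budget, the SLO number does not belong on your resume.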
Systems administration translates through a proof vocabulary centered on infrastructure scale and uptime: fleet size managed (500+ servers), patch compliance rates (98%+ within SLA), ticket resolution time reduction, and Active Directory/Group Policy scope (10,000+ user objects)—the ATS-optimized keyword framework covering Linux, Windows Server, cloud hybrid environments, and emerging infrastructure-as-code tooling is in our Systems Administrator Resume Keywords guide.
Data engineering introduces a proof category where the metric vocabulary centers on scale and reliability of data movement: daily data volume processed (TB/day), pipeline SLA achievement rates, cost optimization figures, and data quality scores—the complete data engineering translation framework covering ETL/ELT positioning, career-level metric scaling, and pipeline impact quantification is in our Data Engineer Resume Guide, with the full ATS keyword taxonomy covering Spark, Airflow, dbt, Snowflake, and the modern data stack in our Data Engineer Resume Keywords guide.
Network engineering translates through a proof vocabulary built on vendor-specific precision and infrastructure availability: uptime percentages across fleet sizes, latency reduction metrics, MPLS/SD-WAN migration scope, and automation coverage (switches configured via Ansible/Terraform)—the ATS-optimized keyword framework covering Cisco, Juniper, cloud networking, and protocol-level terminology organized by specialization is in our Network Engineer Resume Keywords guide.
Platform engineering introduces a proof category where the metric vocabulary centers on developer productivity as a product outcome: deployment frequency improvements (weekly → 50+/day), self-service adoption rates across engineering organizations, golden path completion percentages, and developer satisfaction scores (NPS)—this shifts the translation challenge from "infrastructure I maintained" to "developer product I built," where adoption rate is the ultimate proof that your platform delivers value rather than adding complexity. The complete platform-specific translation framework covering IDP metrics, Backstage adoption, and platform-as-product positioning is in our Platform Engineer Resume Guide, with the full ATS keyword taxonomy covering Kubernetes, Terraform, ArgoCD, GitOps, internal developer platforms, and developer experience terminology organized by platform capability in our Platform Engineer Resume Keywords guide.
Solutions architecture translates through a proof vocabulary that uniquely spans technical depth and business strategy simultaneously: architecture decisions documented (ADRs adopted across 200+ engineer organizations), cost optimization achieved ($4M+ annually), system scale supported (50M daily transactions), and stakeholder influence demonstrated (C-suite presentations securing $10M investments)—this dual-proof requirement makes SA resumes distinct from senior engineering resumes, where technical competence alone is sufficient. The complete framework covering cloud SA, enterprise architect, and technical architect positioning is in our Solutions Architect Resume Guide, with the ATS keyword taxonomy spanning cloud platforms (AWS, Azure, GCP), architecture patterns, enterprise frameworks (TOGAF, Well-Architected), and the business communication vocabulary that differentiates architects from senior engineers in our Solutions Architect Resume Keywords guide.
Engineering management introduces a proof category where the metric vocabulary shifts from individual technical output to team-level leverage: team size and growth trajectory (built from 4 to 18 engineers), retention rates vs. department benchmarks (94% vs. 78% average), delivery velocity improvements (weekly to daily deployments), and business impact of shipped features ($120M revenue pipeline supported)—this transition from IC to multiplier metrics is the core translation challenge for EM resumes, where proving you make engineers more productive matters more than proving you can still write code. The complete engineering management translation framework covering people metrics, delivery outcomes, and technical strategy positioning is in our Engineering Manager Resume Guide, with the full ATS keyword taxonomy covering leadership, Agile delivery, stakeholder management, and organizational design terminology organized by seniority level in our Engineering Manager Resume Keywords guide.
Tech lead introduces a distinct proof category that sits between IC depth and management leverage: the architecture multiplier, where impact is measured through the radius of technical decisions rather than personal code output. The unique translation challenge is demonstrating that your architecture choices, RFC authorship, and mentoring produced team-level outcomes—system-wide latency reduction, cross-team standard adoption rates, and engineers mentored to promotion—without claiming people management authority you don't hold. The formula shifts from "I built X" to "I designed the approach that enabled 6 engineers to build X 40% faster," where the multiplier effect is the proof, not the individual contribution. The complete tech lead translation framework covering architecture ownership, influence scope, and career path positioning (senior IC → tech lead → staff engineer) is in our Tech Lead Resume Guide, with the full ATS keyword taxonomy spanning system design, technical leadership, delivery management, and cross-functional influence terminology organized by seniority progression in our Tech Lead Resume Keywords guide.
Technical program management introduces a proof category where the metric vocabulary centers on complexity navigation at organizational scale: program scope (teams coordinated, engineers supported, budget managed), delivery track record (on-time milestone percentage across concurrent programs), risk management outcomes (blockers resolved, dependencies untangled before impact), and stakeholder communication reach (VP/C-level engagement cadence). This makes TPM resumes fundamentally different from both project manager and engineering manager resumes: the proof must simultaneously demonstrate technical depth (you understand the architecture you're coordinating) and execution breadth (you drive programs no single team could deliver alone). The complete TPM translation framework covering program-based resume structuring, technical context embedding, and risk management proof patterns is in our Technical Program Manager Resume Guide, with the full ATS keyword taxonomy covering program delivery, dependency management, cross-functional coordination, methodology, and industry-specific TPM terminology organized by seniority in our Technical Program Manager Resume Keywords guide.
Sales & Revenue
The Problem: Sales bullets often sound like “did activity” (calls, meetings). Activity is not value. Value is pipeline, win rate, ARR, renewal, retention.
The Fix: Tie activity to money outcomes and deal context: segment, cycle length, ACV, objections, stakeholders.
Strong Verbs (Sales):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Called | Prospected / Qualified | Qualified 120 inbound leads/month, improving SQL rate from 18% to 31%. |
| Sold | Closed / Converted | Closed $480K ARR across 6 enterprise accounts in Q4, exceeding quota by 22%. |
| Negotiated | Secured / Won | Secured 2-year renewal at 15% uplift, preventing churn of $250K ARR. |
| Presented | Pitched / Influenced | Influenced 5-stakeholder buying group, shortening cycle from 90 to 62 days. |
Real Bullets (Gold Standard):
- Closed $1.2M ARR across mid-market accounts by rebuilding discovery workflow and improving close rate from 14% to 21%.
- Secured renewal of top 3 accounts (total $900K ARR) by redesigning adoption plan, increasing product usage by 35%.
Sales Efficiency: Beyond Quota Attainment
Quota attainment is the baseline—the strategic signal comes from how you achieved it. Sales excellence is measured across three efficiency dimensions: pipeline velocity (how fast deals move through stages), win rate (conversion percentage vs. company benchmark), and deal size progression (average contract value growth over time). The formula: quota percentage + ranking + efficiency metric (win rate, cycle time, or deal size growth). This proves systematic execution, not just hustle or luck. For comprehensive sales metrics including pipeline coverage, CAC payback, and multi-stakeholder engagement patterns, see our Sales Metrics guide.
Operations & Efficiency
The Problem: Ops resumes bury the lede. Operations is about throughput, waste, cycle time, quality, and reliability.
The Fix: Use verbs that imply systems thinking and measurable throughput improvements.
Strong Verbs (Ops):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Handled | Streamlined / Standardized | Standardized SOPs across 4 sites, reducing rework by 28% and improving QA pass rate. |
| Managed | Coordinated / Orchestrated | Orchestrated vendor transition under 2-week deadline, avoiding stockout risk. |
| Improved | Reduced / Eliminated | Reduced order-to-ship time from 4.2 days to 2.6 days by redesigning pick/pack flow. |
| Tracked | Instrumented / Monitored | Instrumented KPI dashboard, enabling weekly root-cause reviews and 12% cost reduction. |
Real Bullets (Gold Standard):
- Reduced monthly operating cost by $65K by renegotiating vendor contracts and consolidating tooling across 3 teams.
- Standardized onboarding workflow, cutting time-to-productivity from 14 days to 5 days for 40 new hires.
Operations Excellence: SLA Compliance & Process Efficiency
The strongest operations resumes prove reliability at scale. Operational value is measured across three dimensions: SLA compliance (service reliability and target achievement), cycle time & throughput (speed and capacity), and defect reduction (quality under pressure). The formula: maintain quality + improve speed + increase capacity = operational leverage. Operations metrics prove execution reliability—they show you can balance speed and quality without breaking things. For comprehensive methodologies on SLA metrics, error rate reduction, cost per unit optimization, and capacity planning with role-specific examples, see our Operations Metrics guide.
SLA & Quality Examples:
- Maintained 99.8% SLA compliance across 12,000 monthly service requests while reducing resolution time by 35%.
- Reduced order error rate from 2.3% to 0.4% by implementing automated validation checks.
- Improved platform uptime from 98.5% to 99.7% by implementing proactive monitoring and automated failover.
Throughput & Efficiency Examples:
- Increased daily processing capacity from 500 to 1,200 orders without additional headcount through workflow automation.
- Reduced cost per transaction from $4.20 to $2.80 through process redesign and automation.
- Cut average ticket resolution time from 48 hours to 12 hours while maintaining 99.5% SLA compliance.
Partnerships & Business Development
The Problem: BizDev resumes confuse deal-signing with deal-making. "Established 12 strategic partnerships" proves you can negotiate contracts, not that you built revenue engines.
The Fix: Partnerships are measured by revenue leverage: partner-sourced pipeline, co-sell wins, integration adoption, channel ROI, and activation rate.
Strong Verbs (Partnerships/BizDev):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Partnered | Activated / Launched | Launched AWS co-sell partnership generating $6.3M in partner-sourced pipeline. |
| Established | Built / Scaled | Built channel partner network driving $3.9M in partner-sourced revenue within 9 months. |
| Managed | Enabled / Accelerated | Enabled 14 of 16 partners to generate pipeline (88% activation vs. 35% industry avg). |
| Negotiated | Secured / Closed | Secured Salesforce integration adopted by 41% of enterprise customers. |
Real Bullets (Gold Standard):
- Built AWS co-sell partnership generating $6.3M in partner-sourced pipeline (31% of enterprise pipeline) with 54% win rate, 22 points higher than direct sales.
- Reduced partner time-to-first-deal from 13 months to 5.2 months (60% improvement) through structured enablement program, accelerating channel ROI by $2.1M in Year 1.
Partnerships as Leverage: Proving Multiplier Effect
The strongest partnerships resumes prove revenue multipliers, not relationship counts. Partnership value is measured across three layers: revenue attribution (partner-sourced vs. partner-influenced pipeline and closed deals), channel efficiency (partner activation rate, time-to-first-deal, co-sell win rate), and ecosystem scale (integration adoption, marketplace performance, developer engagement). The formula: signed partnerships + activation rate + revenue contribution + efficiency metric (cycle time, win rate, or ROI). This proves systematic channel building, not just handshake collection. For comprehensive partnership metrics including integration adoption patterns and channel ROI calculation methodologies, see our Partnerships & BizDev Metrics guide.
🛠️ The "Weak vs Strong" Translator
| Weak Word | Strong Replacement | Context Example |
|---|---|---|
| Helped | Facilitated | Facilitated cross-departmental workshops... |
| Worked on | Executed | Executed migration of 5TB database... |
| Managed | Orchestrated | Orchestrated launch of 3 products... |
How to Use This Dictionary (Fast)
If you want this to actually improve your resume (not just look smart), use it like a filter:
- Pick one bullet from your resume that feels “fine” but not impressive.
- Identify the verb you used. If it is weak (“Helped”, “Worked on”, “Responsible for”), replace it.
- Add context: scope, constraints, stakeholders, system size, budget, region, timeline.
- Add a metric: dollars, %, hours, users, volume, cycle time, error rate, quality score, tickets.
- Add business impact: revenue, cost, risk, speed, retention, reliability, customer outcomes.
If you can do steps 2–5, you are writing “Gold”. If you cannot, the bullet is fluff and should be removed or rewritten.
Building Your Metrics Inventory (Data Mining)
The #1 objection to quantified bullets? "I don't have metrics." But the data exists—in your calendar, tickets, dashboards, retros, and feedback. The metrics aren't missing; you haven't extracted them yet.
The 8 hidden data sources for every role:
- Calendar → stakeholder count, meeting frequency, cross-functional breadth
- Ticket systems → throughput, resolution time, backlog reduction
- Dashboards → user growth, conversion rates, performance trends
- Git/code → commits, PRs merged, code reviews completed
- Performance reviews → manager-cited achievements, goal attainment
- Project docs → budget size, team size, timeline delivery
- Retrospectives → efficiency gains, velocity increases, time saved
- Email/Slack praise → client outcomes, peer recognition with specific numbers
Build a quarterly metrics inventory from these sources—you'll never scramble for proof again. For complete step-by-step data mining methodology across all 8 sources, see our metrics inventory system.
Metric Bank (Steal These Categories)
If you “don’t have metrics”, you usually have the wrong mental model: you may not have revenue figures, but you almost always have scope and constraints (team size, budget, volume, timeline, error rates).
High-Impact Bullet Templates (Fill the Blanks)
If you want a bullet that reads like a recruiter magnet, use templates. Do not freestyle.
Template 1: Speed
Accelerated [process/system] for [scope], cutting [metric] from [baseline] to [result], enabling [business outcome].
Template 2: Cost
Reduced [cost line item] by [percent/$] by [lever], saving [amount] annually while maintaining [quality/reliability].
Template 3: Quality
Improved [quality metric] from [baseline] to [result] by [action], reducing [risk/defects/rework] by [metric].
Template 4: Scale
Scaled [system/program] to [new volume/users/regions] without adding headcount by [automation/standardization], improving [metric] by [result].
Template 5: Risk
Mitigated [risk] across [scope] by [control/process], reducing [incidents/findings/churn] by [metric].
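Because the templates are pure slot-filling, they can be expressed as format strings for batch-drafting bullets before hand-editing. A toy sketch of Template 2 (Cost); every slot value below is a hypothetical placeholder, not a real achievement:

```python
# Template 2 (Cost) expressed as a Python format string.
COST_TEMPLATE = (
    "Reduced {item} by {amount} by {lever}, "
    "saving {savings} annually while maintaining {quality}."
)

# Hypothetical placeholder values for illustration only.
bullet = COST_TEMPLATE.format(
    item="cloud infrastructure spend",
    amount="35%",
    lever="consolidating compute and eliminating zombie resources",
    savings="$400K",
    quality="99.9% uptime",
)
print(bullet)
```

The point is not to automate your resume; it is that every slot must be fillable. An empty slot means the bullet is not ready.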
Common Resume Verb Traps (Avoid These)
Some verbs are not “wrong”, but they are expensive because they force the recruiter to guess what you did. Replace them unless you immediately follow with proof.
Customer Success & Support (Proof Over Politeness)
The Problem: Support resumes list “handled tickets” and “helped customers”. That reads like basic competence.
The Fix: Show outcomes: CSAT, first response time, resolution time, deflection, churn risk reduced, escalations prevented.
Strong Verbs (CS/Support):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Helped | Resolved / Unblocked | Resolved 180 tickets/month at 96% CSAT, reducing escalations by 35%. |
| Answered | Diagnosed / Triaged | Triaged P1 issues and drove handoffs, cutting time-to-resolution by 28%. |
| Explained | Educated / Enabled | Enabled customers to adopt feature, increasing activation by 14%. |
| Handled | De-escalated / Retained | Retained at-risk accounts by rebuilding onboarding, protecting $220K ARR. |
Real Bullets (Gold Standard):
- Reduced average first response time from 6h to 45m by redesigning triage rules and staffing schedule.
- Built 25-article help center and macros, deflecting 18% of inbound tickets and freeing 12 hrs/week.
Customer Success: Retention, Expansion & Net Revenue
The strongest Customer Success resumes don't just show "customer satisfaction"—they prove revenue retention and account growth. CS value is measured across three core dimensions: retention (churn reduction, renewal rate), expansion (upsell rate, expansion MRR, cross-sell adoption), and Net Retention Rate (the ultimate CS metric showing existing customer revenue growth). The formula: portfolio size + retention metric + expansion outcome = CS revenue impact. NRR above 100% proves you're not just preventing churn—you're growing accounts without new customer acquisition. For comprehensive methodologies on calculating NRR, reducing churn, measuring health score improvement, and segment-specific CS metrics (SMB vs. Enterprise vs. PLG), see our Customer Success Metrics guide.
Retention & NRR Examples:
- Maintained 121% NRR across $8M portfolio by identifying expansion opportunities in 40% of accounts.
- Reduced monthly gross churn from 6.2% to 2.8% by launching proactive health monitoring across 200+ accounts.
- Achieved 118% NRR across enterprise segment ($250K+ ARR), driven by 40% upsell rate and 12% cross-sell adoption.
Expansion & Account Growth Examples:
- Generated $360K in annual expansion MRR by identifying upsell triggers and delivering targeted QBRs to 50 enterprise accounts.
- Improved avg. customer health score from 62 to 78 (scale 0-100) by launching automated engagement campaigns and usage analytics.
- Reduced enterprise onboarding time from 45 to 21 days, increasing 90-day retention from 82% to 94%.
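NRR itself is simple arithmetic once the four inputs are tracked. A sketch using the standard definition (starting recurring revenue plus expansion, minus churn and contraction, divided by starting revenue); the figures are hypothetical:

```python
def net_retention_rate(start_mrr: float, expansion: float,
                       churned: float, contraction: float) -> float:
    """Net Retention Rate (%) for an existing-customer cohort over a period."""
    return 100.0 * (start_mrr + expansion - churned - contraction) / start_mrr

# Hypothetical portfolio: $100K starting MRR, $30K expansion,
# $6K churned, $3K contraction -> 121% NRR (growth without new logos).
print(round(net_retention_rate(100_000, 30_000, 6_000, 3_000)))  # 121
```

Anything above 100% is the proof point: the book of business grew with zero new customer acquisition.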
Product Management (Outcomes Over Outputs)
The Problem: PM resumes list features shipped ("Launched 10 features in Q2") without proving those features drove business value. Shipping volume is an output metric—it proves activity, not impact.
The Fix: Every PM bullet must connect features to measurable outcomes: adoption rate (proof users wanted it), engagement lift (proof it stuck), revenue impact (proof it drove business value), or efficiency gains (proof it improved the product development process itself). The formula: feature shipped + adoption/engagement metric + business outcome.
Strong Verbs (Product Management):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Managed | Led / Prioritized | Led roadmap prioritization for 3 products, shipping features that increased DAU by 15%. |
| Shipped | Launched / Delivered | Launched onboarding redesign with 67% adoption rate, improving activation by 18 pp. |
| Worked on | Defined / Drove | Defined success metrics for checkout flow, driving 23% conversion improvement. |
| Coordinated | Aligned / Enabled | Enabled 8-person team to ship search improvements, increasing CTR by 27%. |
Real Bullets (Gold Standard):
- Launched collaborative workspace feature, increasing DAU by 12% (from 45K to 50.4K users) and weekly session frequency by 18%.
- Shipped premium tier features, driving $1.8M in incremental ARR within first 6 months (22% of total new revenue).
- Defined product analytics framework adopted by 5 product teams, improving feature instrumentation coverage from 34% to 89%.
Product Management: The Four Outcome Categories
Product managers prove value through adoption (did users use it?), engagement (did it stick?), revenue (did it drive business value?), and efficiency (did you ship faster or smarter?). The strongest PM bullets pair feature context (what you shipped and why) with outcome metrics (what changed because you shipped). Adoption metrics prove product-market fit at the feature level. Engagement metrics prove lasting value, not novelty spikes. Revenue metrics tie product work to company financial goals. Efficiency metrics show you improved how the team works, not just what they built. For comprehensive guidance on quantifying PM impact across all four dimensions with calculation methodologies and senior vs. junior PM differentiation, see our Product Manager Resume Metrics guide.
Adoption & Engagement Examples:
- Launched in-app messaging with 67% adoption rate among active users within 30 days, exceeding 50% target.
- Redesigned onboarding flow, increasing activation rate from 34% to 52% (18 pp lift) and reducing time-to-first-value from 5 days to 1.5 days.
- Shipped personalized content feed, improving Day-30 retention from 28% to 41% (13 pp lift) across 120K user cohort.
Revenue & Efficiency Examples:
- Rebuilt checkout flow, increasing trial-to-paid conversion from 14% to 19% (5 pp lift), adding $420K ARR.
- Streamlined feature flagging process, reducing average time-to-production from 4 weeks to 1.5 weeks across engineering team.
- Launched self-serve admin tools, reducing internal support tickets by 38% (2,400 fewer tickets/month).
Data & Analytics (Make the Insight Measurable)
The Problem: Analytics bullets often read like "made dashboards". Dashboards are output. Business decisions are value.
The Fix: Tie analysis to decisions and outcomes: revenue protected, costs reduced, forecast accuracy improved, experiments shipped.
Strong Verbs (Data):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Created | Instrumented / Operationalized | Operationalized KPI dashboard for 15 stakeholders, moving decision cadence from monthly to weekly. |
| Analyzed | Identified / Diagnosed | Identified churn driver and launched fix, reducing churn risk by 9%. |
| Reported | Forecasted / Modeled | Modeled demand and improved forecast accuracy from 72% to 88%. |
Real Bullets (Gold Standard):
- Identified 3 conversion drop-offs via funnel analysis, driving fixes that increased signup-to-activation by 6%.
- Automated weekly reporting pipeline, saving 10 hours/week and enabling faster exec reviews.
Data Science: Model Performance to Business Value
The strongest data science resumes don't list model accuracy in isolation—they prove deployed models drove measurable business outcomes. DS value is measured across four layers: model performance (accuracy, precision, recall with baseline comparison), deployment scale (predictions per day, users served, production uptime), business impact (revenue driven, costs reduced, decision quality improved), and infrastructure efficiency (training time reduced, serving costs optimized). The formula: model quality + production scale + business outcome = DS credibility. Model performance proves technical competence—but only when paired with deployment context. Production deployment at scale proves engineering rigor, not just research skill. Business impact metrics separate senior DS (models that drive decisions) from junior DS (models that sit in notebooks). For comprehensive guidance on quantifying DS work including calculation methodologies for deployment metrics, A/B test lift analysis, and distinguishing research contributions from production impact, see our Data Science Resume Metrics guide.
Model Performance & Deployment Examples:
- Built churn prediction model with 0.87 AUC-ROC (vs. 0.71 heuristic baseline) and 82% precision at 65% recall, deployed to 340K user base.
- Deployed recommendation engine serving 2.3M predictions/day to 450K active users with 99.6% uptime and <50ms p95 latency.
- Improved demand forecasting accuracy from 76% to 89% (13 pp lift), reducing RMSE from 12.4 to 7.8 units.
Business Impact & Efficiency Examples:
- Deployed dynamic pricing model increasing revenue by $3.2M annually (8% lift) across 45K transactions/month.
- Built fraud detection model reducing losses by $1.8M/year while decreasing false positive review queue by 42%.
- Rebuilt feature engineering pipeline reducing model training time from 6 hours to 22 minutes, enabling daily retraining vs. weekly.
Marketing & Growth (Prove the ROI)
The Problem: Marketing resumes claim "creative campaigns" and "brand awareness" without tying them to pipeline, revenue, or conversion.
The Fix: Marketing is a revenue function. Every bullet should connect activity to business metrics: CAC, LTV, conversion rate, pipeline, MQLs, retention.
Strong Verbs (Marketing):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Ran | Launched / Executed | Launched email nurture campaign, increasing MQL→SQL conversion by 18%. |
| Created | Designed / Built | Built landing page A/B test framework, improving conversion from 2.1% to 3.4%. |
| Managed | Optimized / Scaled | Scaled paid search spend from $50K to $200K/month while reducing CAC by 22%. |
| Promoted | Drove / Generated | Drove 2,400 event registrations via multi-channel campaign, converting 320 SQLs. |
Real Bullets (Gold Standard):
- Reduced customer acquisition cost (CAC) from $180 to $125 by optimizing ad targeting and landing page conversion, saving $220K annually.
- Generated $1.8M pipeline via content marketing program (SEO + lead magnets), increasing organic traffic by 140% YoY.
Marketing Attribution: Connecting Spend to Revenue
The strongest marketing resumes don't just show CAC reduction—they prove attribution clarity. Marketing value is measured through three layers: revenue linkage (CAC, LTV:CAC ratio, ROAS), conversion efficiency (funnel stage conversion rates, cost per conversion, pipeline contribution), and audience quality (qualified traffic growth, engagement tied to conversions). The formula: specify your attribution model (first-touch, multi-touch, last-touch) + show the economic impact + prove funnel optimization. For complete methodologies on calculating marketing ROI, attribution models, and efficiency metrics across performance and brand marketing, see our Marketing Metrics guide.
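The three economic ratios named above (CAC, LTV:CAC, ROAS) reduce to simple divisions. A minimal Python sketch, with entirely hypothetical spend and customer figures, shows how to compute a defensible number before it goes in a bullet:

```python
# Hypothetical marketing-efficiency calculations; swap in your own figures.

def cac(total_spend, customers_acquired):
    """Customer Acquisition Cost: acquisition spend per new customer."""
    return total_spend / customers_acquired

def ltv_to_cac(lifetime_value, acquisition_cost):
    """LTV:CAC ratio; 3:1 or better is a commonly cited healthy benchmark."""
    return lifetime_value / acquisition_cost

def roas(attributed_revenue, ad_spend):
    """Return on ad spend: campaign-attributed revenue per dollar spent."""
    return attributed_revenue / ad_spend

print(cac(500_000, 4_000))        # 125.0  -> "$125 CAC"
print(ltv_to_cac(1_500, 125))     # 12.0   -> "12:1 LTV:CAC"
print(roas(1_800_000, 450_000))   # 4.0    -> "4x ROAS"
```

Whatever attribution model you use (first-touch, multi-touch, last-touch), state it: the same revenue divided by the same spend can yield very different "attributed" numbers depending on the model.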
Content & Community: Engagement Over Vanity
The strongest content and community resumes don't list follower counts—they prove engagement quality and conversion outcomes. Content/community value is measured across three layers: engagement stickiness (DAU/MAU ratio, return visitor rate), interaction depth (engagement rate vs. platform benchmarks, comments-to-views ratio, time on page), and business conversion (content-attributed leads, community-to-customer conversion). The formula: audience size + engagement metric (DAU/MAU or engagement rate) + conversion outcome = content ROI proof. DAU/MAU above 20% proves users return frequently, not just sign up and ghost. Engagement rate above platform benchmarks proves content resonates. For comprehensive methodologies on calculating DAU/MAU, platform-specific engagement benchmarks, content attribution, and role-specific metrics for Content Marketing, Social Media, and Community Management, see our Content & Community Metrics guide.
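DAU/MAU and engagement rate are simple ratios; a hedged sketch with made-up community numbers (not drawn from any real account) makes the arithmetic explicit:

```python
# Hypothetical engagement-stickiness calculations for content/community bullets.

def dau_mau_ratio(daily_active, monthly_active):
    """Stickiness: share of monthly users active on a typical day."""
    return daily_active / monthly_active

def engagement_rate(interactions, impressions):
    """Interactions (likes, comments, shares) per impression."""
    return interactions / impressions

members = 15_000  # illustrative monthly active community size
print(f"{dau_mau_ratio(4_650, members):.0%}")   # 31%, above the ~20% bar
print(f"{engagement_rate(620, 10_000):.1%}")    # 6.2%
```

If you can't state the denominator (monthly actives, impressions) in an interview, the ratio isn't defensible on the resume.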
Engagement & Retention Examples:
- Grew community DAU/MAU from 12% to 31% by implementing daily discussion prompts and gamification system across 15K members.
- Maintained 6.2% avg. engagement rate across LinkedIn (4x industry avg.) by shifting from promotional to education-first content strategy.
- Increased return visitor rate from 22% to 58% by launching weekly newsletter series and topic-based content hubs.
Content Conversion & Attribution Examples:
- Achieved 8.3% blog-to-signup conversion rate by creating bottom-of-funnel guides and optimizing CTAs, driving 2,400 qualified leads in 6 months.
- Generated $4.2M in content-influenced pipeline by creating targeted guides for each buying stage, tracked via HubSpot attribution.
- Grew UGC from 15 posts/month to 320 posts/month by launching creator incentive program and featured contributor series.
Finance & Accounting (Show the Controls)
The Problem: Finance bullets say "prepared reports" or "managed accounts". Reports are not value. Accurate forecasts, risk mitigation, and cost savings are value.
The Fix: Tie finance work to accuracy, controls, audit findings, cost reduction, variance, compliance, and stakeholder decision-making.
Strong Verbs (Finance):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Prepared | Delivered / Produced | Delivered monthly variance analysis, enabling CFO to reallocate $2M budget faster. |
| Reconciled | Validated / Audited | Audited GL transactions, identifying $340K discrepancy and implementing SOX controls. |
| Tracked | Monitored / Forecasted | Forecasted quarterly revenue with 98% accuracy, supporting board decision-making. |
| Managed | Controlled / Optimized | Optimized AP workflow, reducing Days Payable Outstanding (DPO) from 52 to 38 days. |
Real Bullets (Gold Standard):
- Reduced close cycle from 10 days to 6 days by automating journal entries and implementing new reconciliation process across 4 entities.
- Identified $1.2M in cost-saving opportunities via spend analysis, driving vendor consolidation and contract renegotiation.
Finance & Analytics: Model Accuracy & Decision Impact
The strongest finance and analytics resumes prove analytical work drives decisions. Financial value is measured across three layers: model accuracy (forecast variance, prediction precision), cost identification (opportunities surfaced, risks flagged), and decision enablement (adoption rate, actions taken, business outcomes). The formula: accurate models + identified opportunities + decisions informed = analytical credibility. Finance metrics prove you turn data into action—they show your analysis doesn't sit in decks, it drives resource allocation and strategic choices. For comprehensive methodologies on forecast accuracy, attribution analysis, scenario planning, and decision impact measurement with calculation frameworks, see our Finance & Analytics Metrics guide.
Model Accuracy & Forecasting Examples:
- Maintained forecast accuracy within 5% variance across 12 consecutive quarters, enabling confident capital planning.
- Built revenue prediction model with 92% accuracy that reduced inventory carrying costs by $400K.
- Improved budget variance from 18% to 7% through enhanced baseline modeling and assumption tracking.
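Forecast accuracy and variance are one formula viewed two ways. A small sketch, using hypothetical forecast and actual figures, shows how the "92% accuracy" style of metric is derived:

```python
# Hypothetical forecast-accuracy calculation: error as a share of actuals.

def variance_pct(forecast, actual):
    """Absolute forecast error relative to the actual result."""
    return abs(forecast - actual) / actual

def forecast_accuracy(forecast, actual):
    """Complement of variance: the 'forecast accuracy' headline number."""
    return 1 - variance_pct(forecast, actual)

# Illustrative quarter: forecast $10.7M, actuals came in at $10.0M
print(f"{variance_pct(10_700_000, 10_000_000):.0%}")       # 7%
print(f"{forecast_accuracy(10_700_000, 10_000_000):.0%}")  # 93%
```

State the period and the denominator convention (variance vs. actuals, not vs. forecast) so the number survives scrutiny.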
Decision Impact & Cost Identification Examples:
- Financial model informed $15M acquisition decision (board-approved based on NPV and payback analysis).
- Identified $1.2M in vendor consolidation opportunities through spend analysis across 200 contracts.
- Built attribution model that identified $2M in undervalued marketing channels, informing budget reallocation.
Revenue Impact for Non-Sales Roles (Cost, Efficiency, Speed)
The Problem: Most non-sales professionals believe revenue metrics only apply to sales teams. But revenue isn't just generated at the point of sale—it's affected by cost structure (savings = profit), operational efficiency (capacity = revenue potential), and time-to-market (speed = earlier monetization).
The Fix: Prove revenue contribution through three impact levers: cost savings (direct profit improvement), efficiency gains (increased output without proportional cost), and time-to-market acceleration (capturing revenue earlier). Every function affects at least one of these dimensions.
Revenue Measurement Categories:
| Impact Type | What It Measures | Proof Pattern |
|---|---|---|
| Cost Savings | Expense reduction, waste avoided | (Baseline Cost - New Cost) × Volume × Timeframe |
| Efficiency | Output per input, capacity gains | Time Saved × Hourly Cost × Frequency × Annual Multiplier |
| Time-to-Market | Revenue acceleration | (Revenue Per Month × Months Saved) + Market Share Protected |
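The proof patterns in the table are plain arithmetic. A minimal Python sketch (all inputs are hypothetical placeholders, not data from any real resume) shows how to produce a defensible figure before it goes in a bullet:

```python
# Illustrative calculators for the three revenue-impact proof patterns.
# Every input below is a made-up example; plug in your own baselines.

def cost_savings(baseline_cost, new_cost, volume, periods_per_year=1):
    """(Baseline Cost - New Cost) x Volume x Timeframe."""
    return (baseline_cost - new_cost) * volume * periods_per_year

def efficiency_gain(hours_saved, hourly_cost, frequency_per_year):
    """Time Saved x Hourly Cost x Frequency (annualized)."""
    return hours_saved * hourly_cost * frequency_per_year

def time_to_market(revenue_per_month, months_saved, market_share_protected=0):
    """(Revenue Per Month x Months Saved) + Market Share Protected."""
    return revenue_per_month * months_saved + market_share_protected

# CAC cut from $180 to $125 across a hypothetical 4,000 customers/year
print(cost_savings(180, 125, 4_000))              # 220000
# Automation saving 25 hours/week at a hypothetical $50/hr loaded rate
print(efficiency_gain(25, 50, 52))                # 65000
# Launch accelerated 1.5 months at a hypothetical $200K revenue/month
print(time_to_market(200_000, 1.5))               # 300000.0
```

If any input is a guess, round down and be ready to explain the baseline; a conservative, explainable number beats an impressive one you can't defend.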
Cost Savings Examples:
- Renegotiated vendor contracts, reducing annual software licensing costs by $120K (18% savings).
- Automated invoice processing, eliminating 25 hours/week of manual work ($65K annual labor savings).
- Consolidated 3 legacy systems into unified platform, reducing operational costs by $200K/year.
Efficiency & Speed Examples:
- Reduced customer onboarding time from 6 weeks to 10 days, enabling 4x faster revenue recognition.
- Improved deployment pipeline speed by 40%, allowing team to ship 2 additional features per quarter.
- Accelerated product launch by 6 weeks, capturing $300K in early-quarter revenue.
For complete methodologies on calculating and defending non-sales revenue impact, including role-specific translation examples for Engineering, Product, Operations, Marketing, and Design, see our Revenue Metrics for Non-Sales Roles guide.
Healthcare & Clinical (Patient Outcomes + Compliance)
The Problem: Healthcare resumes list duties ("Administered medications", "Documented care") without showing patient outcomes, safety improvements, or compliance.
The Fix: Healthcare is measured by outcomes, safety, compliance, throughput, and patient satisfaction. Every bullet should tie clinical work to measurable improvement.
Strong Verbs (Healthcare):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Provided | Delivered / Coordinated | Coordinated care for 18-bed unit, achieving 94% patient satisfaction (HCAHPS). |
| Administered | Managed / Optimized | Managed medication administration protocol, reducing errors by 35% in 6 months. |
| Assisted | Supported / Enabled | Enabled discharge readiness for 120 patients/month, cutting readmission by 12%. |
| Documented | Maintained / Ensured | Ensured 100% EHR compliance during Joint Commission audit with zero findings. |
Real Bullets (Gold Standard):
- Reduced central line infection rate from 2.1 to 0.4 per 1,000 line-days by leading bundle compliance training for 45-nurse unit.
- Improved patient throughput by 18% (ED to inpatient bed) by redesigning triage workflow and staffing model.
Education & Training (Learning Outcomes, Not Just Teaching)
The Problem: Education resumes say "Taught classes" or "Developed curriculum". Teaching is the activity. Learning is the value.
The Fix: Show learning outcomes: test scores, pass rates, engagement, skill acquisition, behavior change, retention, graduation rates.
Strong Verbs (Education):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Taught | Delivered / Facilitated | Delivered AP Biology to 90 students, achieving 85% pass rate (vs. 68% district). |
| Developed | Designed / Built | Built project-based curriculum, increasing student engagement scores by 24%. |
| Managed | Led / Coordinated | Led IEP team for 12 students, improving goal attainment from 62% to 89%. |
| Tutored | Coached / Mentored | Mentored 15 at-risk students, improving semester GPA from 2.1 to 2.9 average. |
Real Bullets (Gold Standard):
- Increased AP Calculus pass rate from 71% to 88% by redesigning problem sets and implementing weekly review sessions.
- Reduced absenteeism by 22% through student engagement program, improving overall course completion rate by 14%.
Legal & Compliance (Risk Mitigation + Audit Results)
The Problem: Legal resumes say "Reviewed contracts" or "Ensured compliance". Reviewing is activity. Preventing risk, passing audits, and closing deals are outcomes.
The Fix: Show risk prevented, contracts closed, audit findings, dispute resolution speed, compliance rate, regulatory approvals.
Strong Verbs (Legal):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Reviewed | Negotiated / Finalized | Negotiated 28 vendor contracts, reducing liability exposure and saving $180K/year. |
| Drafted | Prepared / Delivered | Prepared merger documentation for $50M acquisition, closing deal 3 weeks early. |
| Advised | Guided / Protected | Guided product team through GDPR compliance, avoiding regulatory penalties. |
| Managed | Resolved / Mitigated | Resolved 14 disputes via mediation, avoiding $2.3M litigation cost. |
Real Bullets (Gold Standard):
- Achieved zero audit findings during SOX 404 compliance review by implementing new internal controls across 6 subsidiaries.
- Reduced contract cycle time from 45 days to 18 days by standardizing templates and redline process.
Legal & Compliance: Risk Reduction as Measurable Value
Legal and compliance professionals face a unique challenge: proving you prevented disasters is harder than proving you generated revenue. But prevention is measurable when you establish baselines. The strongest legal resumes quantify contract cycle time (enablement speed), audit findings (control effectiveness), regulatory compliance rates (risk mitigation), and incident response time (operational reliability). The formula: baseline metric (before your intervention) + improved metric (after) + business impact (cost avoided, deals accelerated, or penalties prevented). This proves you don't just review contracts—you accelerate business velocity while reducing exposure. For comprehensive methodologies on quantifying legal impact including contract turnaround metrics, audit finding reduction, and regulatory penalty avoidance calculations, see our Legal & Compliance Resume Metrics guide.
Risk Reduction Examples:
- Reduced contract review cycle from 14 days to 7 days by creating standardized playbooks, enabling $5M in deals to close 50% faster.
- Maintained zero regulatory penalties for 36 consecutive months after implementing compliance program (previous 3 years: $500K in fines).
- Reduced audit findings from 22 to 3 (86% reduction) over 18 months by implementing automated compliance monitoring.
Human Resources & Talent (Hiring Speed + Retention)
The Problem: HR resumes say "Recruited candidates" or "Managed onboarding". Recruiting is not the win. Time-to-fill, quality of hire, retention, and offer acceptance are wins.
The Fix: HR is measured by hiring speed, retention, cost-per-hire, diversity, performance ratings of new hires, and employee satisfaction.
Strong Verbs (HR):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Recruited | Sourced / Hired | Hired 32 engineers in Q2, reducing time-to-fill from 67 to 42 days. |
| Managed | Streamlined / Optimized | Optimized onboarding program, improving 90-day retention from 82% to 91%. |
| Conducted | Led / Facilitated | Led diversity hiring initiative, increasing underrepresented hires by 35%. |
| Handled | Resolved / Mediated | Resolved 18 employee relations cases, maintaining zero legal escalations. |
Real Bullets (Gold Standard):
- Reduced cost-per-hire from $8,200 to $5,900 by building employee referral program that sourced 40% of new hires.
- Improved new hire performance ratings (first-year average) from 3.2 to 3.8/5.0 by redesigning interview rubrics and training.
QA & Testing (Bug Prevention Impact)
The Problem: QA resumes say "Executed test cases" or "Found bugs". Test execution is activity. Quality improvement—fewer production defects, faster releases, higher test coverage—is value.
The Fix: Show defect density reduction, test coverage growth, automation ROI, release quality (zero-defect releases), and testing efficiency improvements.
Strong Verbs (QA/Testing):
| Weak Verb | Strong Replacement | Impact Example |
|---|---|---|
| Tested | Validated / Verified | Validated 200+ test cases per sprint with 95% bug detection rate. |
| Found | Detected / Identified | Detected 92% of critical bugs in QA (23 of 25), reducing defect escape rate to 8%. |
| Automated | Built / Implemented | Built automated test suite covering 70% of regression tests, reducing test time by 65%. |
| Ran | Executed / Optimized | Optimized test suite runtime from 90 minutes to 15 minutes via parallel execution. |
Real Bullets (Gold Standard):
- Reduced production defects from 50 per release to 15 (70% reduction) over 12 months through expanded regression coverage and exploratory testing.
- Increased automated test coverage from 55% to 82% over 18 months, reducing regression testing time by 65% (from 40 hours to 14 hours per release).
QA & Testing: Prevention Metrics That Prove Quality
The strongest QA resumes don't count bugs found—they prove bug prevention impact. QA value is measured across four dimensions: defect density (bugs per release trending downward), defect escape rate (percentage of bugs caught in QA vs. production), test coverage (code/feature coverage percentage), and automation ROI (time saved or velocity gained through automated testing). The formula: baseline defect count + improved defect count + test efficiency improvement = measurable quality gains. Defect escape rate below 10% proves your testing process is catching problems before customers see them. Test automation that reduces regression time from days to hours proves you're enabling faster releases. For comprehensive methodologies on quantifying QA impact including defect detection efficiency, flaky test reduction, and release quality metrics, see our QA & Testing Resume Metrics guide.
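Defect escape rate and its complement, detection rate, are straightforward to compute; a sketch with hypothetical defect counts shows the arithmetic:

```python
# Hypothetical QA quality metrics; defect counts are illustrative only.

def defect_escape_rate(prod_defects, qa_defects):
    """Share of all defects that escaped QA and reached production."""
    return prod_defects / (prod_defects + qa_defects)

def detection_rate(qa_defects, prod_defects):
    """Complement: share of defects caught before release."""
    return qa_defects / (qa_defects + prod_defects)

# Example: 23 of 25 critical bugs caught in QA, 2 escaped to production
print(f"{defect_escape_rate(2, 23):.0%}")   # 8%
print(f"{detection_rate(23, 2):.0%}")       # 92%
```

The denominator matters: escape rate is measured against all defects found in the period, so state how long you track production before closing the count.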
Defect Prevention & Coverage Examples:
- Reduced defect escape rate from 25% to 8% by implementing risk-based testing and edge case scenario coverage.
- Achieved 95% code coverage across critical services, enabling continuous deployment with <15min test suite execution.
- Maintained defect escape rate below 10% for 8 consecutive releases (industry average: 20-30%).
Automation & Efficiency Examples:
- Automated 400 regression test cases, reducing full regression cycle from 5 days to 8 hours and enabling weekly releases (previously monthly).
- Reduced flaky test rate from 15% to 3% by refactoring timing-dependent waits, improving CI pipeline reliability from 70% to 95%.
- Built CI/CD-integrated test suite executing in <10 minutes, enabling 50+ daily deployments vs. previous 2-3 weekly releases.
QA roles are heavily tool-specific in ATS filtering—listing "Selenium" versus "Selenium WebDriver" or "Cypress" versus "Cypress.io" can determine pass/fail at the keyword matching stage. The keyword landscape spans test automation frameworks (Selenium, Cypress, Playwright, Appium), programming languages used for test scripts (Python, Java, JavaScript), CI/CD integration tools, and methodology terms (TDD, BDD, shift-left testing) that signal modern QA thinking. For the complete ATS-optimized keyword taxonomy organized by testing type and automation tool, see our QA Engineer Resume Keywords guide.
The Verb Selection Framework: Function-Specific Taxonomy
Generic verbs kill resumes before recruiters finish reading the first bullet. "Managed," "worked on," "helped with"—these apply to every role and signal nothing about your domain expertise or seniority level.
The verb you choose is the first signal of your functional expertise. It tells recruiters:
- What you do (your function: engineering, sales, operations)
- How senior you are (your decision-making level)
- How technical you are (your depth of expertise)
Why Function-Specific Verbs Matter
Every function has its own action vocabulary:
- Engineers architect, optimize, and scale systems
- Sales professionals close, negotiate, and convert deals
- Operations managers orchestrate, streamline, and coordinate workflows
- Product managers launch, prioritize, and iterate on features
- Marketers drive, optimize, and generate measurable outcomes
Using the wrong verb category immediately flags you as either:
- Junior/inexperienced (using support verbs like "helped" or "assisted")
- Outside your claimed function (an engineer saying "managed stakeholders" instead of "architected systems")
- Generic/commodity ("managed projects" could be anyone)
The Core Verb Categories by Function
Engineering: Build, Optimize, Scale
Core verbs: Architected, Engineered, Deployed, Optimized, Refactored, Migrated, Automated, Debugged, Scaled, Instrumented
Avoid: Worked on, Fixed, Improved (too generic)
Why these work: They signal technical execution and architectural thinking, not just code writing
Sales: Acquire, Negotiate, Retain
Core verbs: Closed, Negotiated, Sourced, Converted, Landed, Cultivated, Prospected, Qualified, Upsold, Retained
Avoid: Called, Sold, Presented (activity, not outcomes)
Why these work: They prove revenue generation and deal execution, not just activity logging
Operations: Coordinate, Streamline, Optimize
Core verbs: Orchestrated, Streamlined, Coordinated, Standardized, Optimized, Implemented, Automated, Monitored, Resolved, Scaled
Avoid: Handled, Managed, Tracked (administrative, not strategic)
Why these work: They signal process management and efficiency improvement, not just task completion
Product Management: Launch, Prioritize, Deliver
Core verbs: Launched, Prioritized, Validated, Defined, Delivered, Partnered, Analyzed, Iterated
Avoid: Managed, Worked on, Helped (no ownership signal)
Why these work: They show product ownership and strategic decision-making
The Ownership Hierarchy: Senior vs Junior Verbs
Verbs signal your level in the org chart:
Junior/Subordinate Verbs:
- Helped, Assisted, Supported, Participated in, Contributed to, Worked on
Senior/Owner Verbs:
- Led, Architected, Drove, Orchestrated, Launched, Negotiated, Built, Scaled
If you're applying for a senior role but your resume uses subordinate verbs, you've positioned yourself as support staff before the recruiter reads your metrics.
How to Choose the Right Verb
Step 1: Identify your core function (Engineering? Sales? Ops? Product? Marketing?)
Step 2: Match your bullet to the type of work within that function:
- Building something new? → Built, Engineered, Developed, Launched
- Improving existing systems? → Optimized, Streamlined, Refactored, Enhanced
- Managing scale or complexity? → Scaled, Orchestrated, Coordinated, Directed
- Acquiring or converting? → Closed, Converted, Sourced, Landed
Step 3: Check ownership level:
- Did you lead this initiative? → Use ownership verbs (Led, Drove, Architected)
- Did you support someone else? → Use enabling verbs (Enabled, Unblocked, Facilitated)
Critical Rule: Never use subordinate verbs ("helped," "assisted") unless you're genuinely describing support work. If you drove the outcome, use an ownership verb.
Common Verb Mismatches (And How to Fix Them)
Mismatch #1: Technical Role Using Non-Technical Verbs
❌ "Managed database performance"
✅ "Optimized PostgreSQL queries, reducing DB load by 40%"
Why: Engineers optimize and architect—"managed" is a project management verb
Mismatch #2: Sales Role Using Activity Verbs
❌ "Called 100 prospects weekly"
✅ "Prospected 100 leads weekly, converting 15% to qualified opportunities"
Why: Prospecting signals sales methodology, "called" signals telemarketing activity
Mismatch #3: Operations Role Using Passive Ownership
❌ "Responsible for logistics coordination"
✅ "Orchestrated logistics across 4 warehouses, improving on-time delivery to 98%"
Why: "Orchestrated" signals complex multi-stakeholder coordination—"responsible for" signals job description, not achievement
For a complete breakdown of function-specific verb taxonomies with before/after examples across Engineering, Sales, Operations, Product, Marketing, Finance, HR, and Design roles, see our comprehensive Action Verbs by Function guide.
How to Use This Dictionary
Bookmark the Professional Impact Dictionary and return whenever you're writing bullets.
The Dictionary Usage Framework: The dictionary contains 500+ formulas, but formulas are not sentences to copy—they're structures to adapt. Navigate by function first (Engineering, Sales, Marketing, Operations), then by seniority level (IC, Manager, Director). Select 3-5 formulas where you have real data to insert. Replace every placeholder with your specific numbers, methods, and outcomes. The copy-paste trap: borrowed language without personal data creates generic resumes that fail both ATS and recruiter scanning. The adaptation test: after filling a formula, can you explain every number in an interview? If any answer is no, the adaptation is incomplete. For the complete step-by-step process on navigating, selecting, and adapting formulas without falling into copy-paste traps, see our Professional Impact Dictionary usage guide.
The Process:
1. Open your current resume
2. Pick one bullet that feels weak or generic
3. Identify the verb. If it's "Helped" or "Responsible for" → replace it using the tables above
4. Add context: What was the constraint? Team size? Budget? Timeline? System complexity?
5. Add a metric: $, %, hours, volume, error rate, adoption, retention
6. Ask: "So what?" Connect the metric to a business outcome (revenue, cost, speed, quality, risk)
If you can't do steps 4–6, the bullet is fluff. Delete it or dig deeper until you find the real story.
Why Most Resumes Fail the "Gold Filter"
Here's the uncomfortable truth: 95% of resumes are full of air.
They list duties, not results. They say "Managed projects" without saying what the projects achieved. They claim "Strong communication skills" without proving it through outcomes.
The Gold Filter is simple:
Can this sentence be turned into a quantifiable bullet point?
- YES → Keep it. Polish it. Make it dense.
- NO → Delete it immediately.
Example of Fluff:
"Passionate marketing professional with strong analytical skills and a proven track record of success."
What's wrong? Zero facts. Zero proof. Zero weight.
Example of Gold:
"Marketing Manager with 7 years leading $1M+ ad budgets. Scaled user base from 10K to 100K in 12 months via SEO + PPC. Reduced CAC by 30%."
What's right? Three facts. Three metrics. Three proof points.
The difference between these two candidates is not skill—it's translation. The first one didn't bother to translate their work into the language employers buy.
How to Apply the Formula (Role-Specific Proof)
The formula works for every role. The difference is in the metrics and constraints you choose.
Example: Software Engineer
Bad: "Built feature X"
Formula Applied: "Built feature X that reduced latency by 40%, enabling 2M more daily transactions and $500K additional revenue"
Translation:
- Verb: Built
- Context: Feature X (latency problem)
- Metric: 40% reduction, 2M transactions
- Outcome: $500K revenue
See Software Engineer Resume Guide for technical-to-business translation.
Example: Product Manager
Bad: "Managed product roadmap"
Formula Applied: "Led roadmap for 3 products, shipping 12 features in 6 months that increased user retention by 18% and added $2M ARR"
Translation:
- Verb: Led
- Context: 3 products, 12 features, 6 months
- Metric: 18% retention, $2M ARR
- Outcome: Revenue growth
See Product Manager, Project Manager, Business Analyst, Scrum Master, Construction PM. Role-specific keyword and translation guides:
- Product Manager: for PM-specific ATS keywords covering roadmapping, stakeholder management, and Agile methodologies, see our Product Manager keywords guide.
- Scrum Master: the keyword vocabulary shifts to facilitation, servant leadership, and Agile ceremony terms that distinguish coaching-oriented roles from management-oriented roles. See our Scrum Master Resume Keywords guide.
- IT Project Manager: the metric vocabulary centers on project portfolio value and delivery discipline: total budget managed ($30M+), on-time delivery rate (93%+), cost savings against estimates, and methodology versatility (Agile/waterfall/hybrid). The complete IT PM translation framework (project-based resume structuring, budget evidence formatting, certification positioning) is in our IT Project Manager Resume Guide; the full ATS keyword taxonomy (delivery methodology, budget management, stakeholder communication, IT domain terminology, tools organized by seniority) is in our IT Project Manager Resume Keywords guide.
- Non-IT Project Manager: the keyword vocabulary shifts from Agile/Jira terminology to business outcomes and scope management. See our Project Manager Resume Keywords for the complete cross-industry PM keyword taxonomy.
- Business Analyst: differentiating technical BA from process BA roles requires a distinct keyword set covering requirements methods, SDLC stages, and modeling tools. Our Business Analyst Resume Keywords guide organizes these by BA specialization and seniority level.
- Construction PM: a multi-domain proof problem. ATS systems at GCs and owners filter simultaneously on scheduling credentials, safety record, and contract administration; a resume that excels in one domain but omits the others fails all three filters at once. See Construction PM Resume Keywords for the domain-by-phase keyword taxonomy covering Procore, Primavera P6, OSHA compliance, and contract administration vocabulary.
Example: Marketing Manager
Bad: "Ran marketing campaigns"
Formula Applied: "Launched 5 campaigns across SEO + PPC that reduced CAC from $180 to $125 and generated $1.8M pipeline"
Translation:
- Verb: Launched
- Context: 5 campaigns, SEO + PPC
- Metric: CAC -30%, $1.8M pipeline
- Outcome: Cost efficiency + revenue
See Marketing Manager, Marketing ROI, Social Media Manager, SEO Specialist, Content Manager, Copywriter, Brand Manager, PR Specialist. Product marketing introduces a distinct proof layer where the formula metric shifts from campaign engagement to pipeline influence and win rate improvement—see our Product Marketing Manager Resume Guide for the GTM-to-revenue translation framework. For ATS keyword optimization by marketing function: Marketing Manager keywords, Product Marketing Manager keywords, Social Media Manager keywords, SEO Specialist keywords, Content Manager keywords, Copywriter keywords. Brand management introduces a dual-vocabulary ATS challenge: CPG and tech employers filter simultaneously for creative strategy terms (brand positioning, consumer segmentation, brand health tracking) and P&L management language (margin management, revenue forecasting, trade spend optimization)—a resume strong in one domain but weak in the other fails both filters, which is why Brand Manager keywords organizes the complete taxonomy by strategy, analytics, go-to-market, and financial accountability. PR specialist resumes compound this with crisis communications keywords that carry outsized ATS weight—"crisis communications plan," "media holding statements," and "rapid response" signal senior readiness even at mid-level, and media relations vocabulary alone won't differentiate when 80%+ of 2026 postings include digital PR requirements like SEO-driven PR and influencer relations. See PR Specialist keywords for the media-to-digital keyword framework.
Example: Sales Representative
Bad: "Exceeded quota"
Formula Applied: "Closed $1.2M ARR across 18 enterprise accounts, achieving 135% of quota and improving win rate from 14% to 21%"
Translation:
- Verb: Closed
- Context: 18 enterprise accounts
- Metric: $1.2M ARR, 135% quota, 21% win rate
- Outcome: Revenue + efficiency
See Sales Representative, Recruiter, Account Executive, Business Development Manager, Customer Success Manager. The formula for customer success shifts from "closed deals" to "expanded and retained revenue"—NRR, GRR, and expansion ARR replace pipeline and quota as the primary metrics. For ATS keyword optimization by revenue function: Account Executive keywords, Sales Representative keywords, Business Development Manager keywords, Customer Success Manager keywords, Recruiter keywords.
The Pattern Across All Roles
Every role has different metrics, but the formula stays the same:
- Engineering: Latency, uptime, cost, velocity → Full Stack, Backend, Frontend, DevOps, Cloud Architect, Cybersecurity, QA, Systems Admin, Network Engineer, IT Support, Prompt Engineer, AI Engineer, ML Engineer, Data Scientist. For ATS-optimized keyword lists by specialty: Software Engineer keywords, Frontend keywords, Backend keywords, Full Stack keywords, Mobile keywords, iOS keywords, Android keywords, DevOps keywords, Cloud Architect keywords, ML Engineer keywords, Cybersecurity keywords, Prompt Engineer keywords, AI Engineer keywords, IT Support Specialist keywords. Prompt engineering lacks standardized job titles—"prompt engineer," "LLM specialist," and "AI interaction designer" appear interchangeably across postings, making keyword selection the primary differentiator where "chain-of-thought prompting," "few-shot learning," and "retrieval-augmented generation" are the technical signals that separate prompt engineers from general AI users in ATS scoring. AI engineering keyword profiles must simultaneously signal ML research depth (fine-tuning, RLHF, model evaluation) and production deployment capability (inference optimization, model serving, MLOps pipelines)—missing either dimension means failing the ATS filter at companies that need engineers who can both train and ship models. IT support filtering is the most certification-driven in all of tech—CompTIA A+, Network+, and ITIL Foundation function as hard ATS gates that reject applications before technical troubleshooting skills are even evaluated, making certification keywords the single highest-leverage element on an IT support resume.
- Finance & BI: Close cycle, variance, forecast accuracy, dashboard impact → Accountant, Financial Analyst, Data Analyst, BI Analyst. Business intelligence analysis introduces a proof category where impact is measured through decisions influenced rather than dashboards built—the formula shifts from "created dashboard" to "built dashboard that identified $3M in at-risk revenue, enabling recovery actions." For keyword optimization: Data Analyst keywords, Data Scientist keywords, BI Analyst keywords, Accountant keywords, Financial Analyst keywords. Investment banking requires a deal-first proof model where transaction type, deal size, and model depth replace conventional output metrics—see our Investment Banking Analyst Resume Guide and Investment Banking keywords. Management consulting shifts the formula entirely: client context replaces company name, engagement type replaces job title scope, and savings delivered replaces activity count—see our Management Consultant Resume Guide and Management Consultant keywords for the engagement-based impact framework.
- Healthcare: Patient outcomes, safety, compliance → Registered Nurse, Nursing Clinical, Medical Assistant, Pharmacy Technician. For ATS keyword optimization: Registered Nurse keywords. Medical assistant ATS filtering splits between clinical competencies (phlebotomy, vitals, injections) and administrative workflow (EHR documentation, insurance verification, ICD-10 coding)—postings at healthcare networks require both vocabularies simultaneously, and the Medical Assistant keywords guide organizes the complete taxonomy by clinical, administrative, and specialty-specific categories. Pharmacy technician ATS filtering is certification-gated: CPhT (PTCB) and pharmacy system names (Epic Willow, QS/1, Pyxis MedStation) function as hard keyword requirements that reject applications automatically before compounding skills or dispensing accuracy are evaluated—see Pharmacy Technician keywords for the setting-specific keyword framework covering hospital, retail, and specialty pharmacy.
- Creative: Campaign performance, conversion, engagement → Graphic Designer, Portfolio Guide, UX/UI Designer, Design Metrics, Technical Writer. For design keyword optimization: UX Designer keywords, Graphic Designer keywords, UI Designer keywords. Technical writing requires proving both content creation depth and toolchain mastery—DITA, MadCap Flare, and docs-as-code terminology (Markdown, Git, static site generators) separate technical writers from general content writers in ATS scoring, where listing "wrote documentation" matches zero filters but "DITA-structured API documentation" matches three. See Technical Writer keywords for the authoring-to-toolchain keyword taxonomy.
- Admin/Support: Time saved, efficiency, CSAT → Administrative Assistant, Executive Assistant, Customer Service, Retail Store Manager. For ATS keyword optimization: Administrative Assistant keywords, Executive Assistant keywords, Customer Service keywords, Retail Store Manager keywords.
- Education/Research: Learning outcomes, test scores → Teacher, Research Assistant, Teacher keywords. Research assistant keyword vocabulary must bridge academic methodology and professional ATS language—"literature review" and "statistical analysis" are universal terms, but the specific software stack (SPSS, R, NVivo, MATLAB) and compliance terminology (IRB protocols, IACUC, informed consent) function as the hard ATS filters at universities and research institutions, where missing a single methodology-specific term can reject otherwise qualified applicants. See Research Assistant keywords for the methodology-to-tools keyword taxonomy.
- HR & People Operations: Hiring speed, retention → HR & People Metrics, HR Manager, HR Manager keywords.
- Supply Chain & Logistics: Flow efficiency, cost management → Supply Chain & Logistics Metrics. Supply chain is a function-vocabulary problem: procurement, demand planning, and logistics each use entirely separate keyword sets—a resume written in generic "supply chain operations" language fails all three filters simultaneously regardless of actual scope. See Supply Chain Manager keywords for the function-by-function taxonomy covering S&OP, freight management, ERP platforms, and APICS certifications.
- Legal: Risk mitigation → Paralegal, Paralegal keywords.
- Trades: Safety, quality, project completion → Electrician, Mechanical Engineer, Real Estate Agent. Engineering resumes require named tools, not capability claims: "SolidWorks, ANSYS, GD&T" matches three ATS filters; "strong CAD skills" matches zero—see Mechanical Engineer keywords for the CAD-to-analysis-to-manufacturing taxonomy. Electrical engineering compounds this further: a power systems EE and a PCB designer hold the same degree but use entirely different keyword sets with minimal overlap, making domain targeting the single highest-leverage decision on an EE resume—see Electrical Engineer keywords for the domain-segmented taxonomy. Civil engineering applies the same domain-segmented approach across structural design, site development, and permitting—see Civil Engineer keywords. For real estate agent ATS optimization: Real Estate Agent keywords. Electrician ATS filtering is license-driven: state journeyman/master electrician licenses and NEC code compliance are non-negotiable keyword gates, while specialty system terminology (PLC programming, VFD installation, fire alarm systems) determines which specific postings match—see Electrician keywords for the license-to-specialty keyword taxonomy. The impact formula for electrical engineers follows domain-segmented translation: power systems EEs prove value through load analysis and grid reliability metrics, while PCB designers prove it through design-for-manufacturability yields and signal integrity specifications—the same degree, entirely different proof vocabularies. See Electrical Engineer Resume Guide for the domain-specific translation framework. Civil engineering applies this same segmentation across structural design (STAAD.Pro, ETABS), site development (AutoCAD Civil 3D, grading calculations), and water resources (HEC-RAS, stormwater management)—each domain uses distinct metric categories for the impact formula, and a structural engineer's resume reads nothing like a transportation engineer's despite sharing the PE license. See Civil Engineer Resume Guide for the cross-domain translation framework.
- Operations: Throughput, cost efficiency → Operations Manager, Supply Chain Manager, Logistics, Operations Manager keywords.
- Career Entry: Transferable skills, academic projects, internship scope → Entry-Level Resume Keywords Guide, Internship keywords. Entry-level candidates apply the same formula but substitute project scope, academic context, and internship volume for professional metrics—the structure stays identical, only the data source changes.
- Industry-Specific: Consulting, Education, Energy, Finance, Government, Healthcare, Hospitality, Insurance, Legal, Manufacturing, Non-Profit, Real Estate, Retail, Event Planning, Remote Work, Remote Work keywords, Tech Startup. Event planning keyword profiles must demonstrate both logistics precision (vendor management, budget tracking, timeline coordination) and creative execution (theme development, experiential design)—ATS systems at agencies and venues filter for operational and creative vocabulary simultaneously, and candidates who lead with only one dimension fail the other filter entirely. See Event Planner keywords for the logistics-to-creative keyword taxonomy. Healthcare is the most keyword-dependent industry for ATS filtering: certifications (RN, BLS, ACLS) are scanned before any other criteria, EHR system experience must specify exact modules (Epic Cadence, Cerner PowerChart), and compliance terminology (HIPAA, Joint Commission, CMS) is non-negotiable across every role from bedside nurse to hospital administrator—see Healthcare industry keywords for the clinical-to-administrative keyword taxonomy. Finance demands sub-sector vocabulary precision: "M&A advisory" and "deal execution" signal sell-side investment banking while "portfolio construction" and "alpha generation" signal buy-side asset management, and using the wrong lexicon tells human reviewers you do not understand the sector before they finish your first bullet—see Finance industry keywords for the banking-to-investment keyword taxonomy. Consulting resumes fail when candidates use generic business language instead of the precise vocabulary that defines how consulting firms describe their own work—"helped a company improve" is invisible while "led workstream delivering operating model redesign" speaks the language that MBB, Big Four, and boutique firm ATS systems all scan for, with critical vocabulary differences between strategy firms (corporate strategy, CEO advisory, hypothesis-driven) and implementation firms (digital transformation, system integration, change management)—see Consulting industry keywords for the strategy-to-implementation keyword taxonomy. Manufacturing is terminology-dependent at every level because every process, methodology, and compliance standard has an exact name: "quality control" is invisible when the ATS scans for "SPC," "CAPA," and "ISO 9001," and sub-sector vocabulary diverges sharply between automotive (IATF 16949, APQP, PPAP), pharmaceutical (GMP, FDA, validation protocols), and aerospace (AS9100, NDT, configuration management)—see Manufacturing industry keywords for the production-to-compliance keyword taxonomy.
The formula doesn't change. Only the metrics do.
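If you think in code, the formula's two hard gates can even be sketched as a sanity check. This is an illustrative toy, not a real ATS: the function name, the weak-opener list, and the metric pattern are assumptions made for the sketch, and real screening is far more involved.

```python
import re

# Hypothetical helper: a naive check that a bullet clears the two
# non-negotiable gates of the formula -- an action verb up front
# and at least one quantified result somewhere in the line.
WEAK_OPENERS = {"responsible", "worked", "helped", "managed"}

def passes_formula(bullet: str) -> bool:
    words = bullet.strip().split()
    if not words:
        return False
    # Verb gate: the bullet must open with an action verb, not filler.
    if words[0].lower() in WEAK_OPENERS:
        return False
    # Metric gate: require a $, %, or number as proof of impact.
    return re.search(r"[$%]|\d", bullet) is not None

print(passes_formula("Responsible for sales"))                           # False
print(passes_formula("Improved team performance"))                       # False
print(passes_formula("Closed $1.2M ARR across 18 enterprise accounts"))  # True
```

Note what the check cannot see: context and outcome. A bullet can pass both gates and still say nothing about why the work was hard or why the number mattered, which is exactly the part only you can supply.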
For forbidden words and power verbs: 5 Words Killing Your Resume, Resume Action Verbs & Power Words, Prompt Engineer Frameworks, AI Engineer Deployment Spec.
Final Thoughts
This dictionary will be updated continuously. New roles, new industries, new metrics. Bookmark it. Return when you need to upgrade weak bullets into gold.
Your resume is not a biography. It is a sales document. Every line either sells you or fails you. There is no middle ground.
Use this dictionary to filter out fluff and keep only the gold that gets you hired.