AI Strategist & Design Leader

Hi, I'm Omid Jesmi.

With nearly two decades of experience, I bridge strategy and creativity to drive digital transformation. I lead design teams in crafting AI-driven solutions that optimize IT operations and enhance user experiences for the enterprise.

AIX Adoption Guide

The "Single Source of Truth" governance framework for AI UI patterns across the ecosystem.

Agentic AI HR

Orchestrating multi-agent workflows to automate complex HR life-events like parental leave.

AI Data Explorer

Replacing SQL with Natural Language to democratize data analytics for business users.

About Me

With nearly two decades of experience, I am an AI strategist and design leader specializing in digital transformation and creative visual communication. I currently spearhead an AI initiative that harnesses advanced data analytics and generative tools to enhance user experience and service design across upstream IT operations.

My background is deeply rooted in solving complex, enterprise-scale problems. At Enbridge, I worked on the "Emma" project, optimizing help desk workflows and network infrastructure for the oil and gas sector. By merging deep upstream domain knowledge with modern IT operations, I bridge the gap between high-level strategy and pixel-perfect creativity.

I am proficient in the Adobe Suite, Figma, and data analytics, using these tools not just to design screens, but to craft impactful, innovative solutions that drive business success. I am committed to thoughtful, AI-driven design and effective leadership that empowers my team to do their best work.

Design System · Enterprise UX

AIX Adoption Guide

A governance framework for scaling standardized AI experiences across the enterprise.

Role

Lead Product Designer

Timeline

4 Months (Q2-Q3)

Team

Design Systems (2 PD, 4 Eng)


01. Discover: The Fragmentation Crisis

As ServiceNow rapidly integrated Generative AI into its vast ecosystem, individual product teams were building their own "chat windows" in silos. This resulted in a disjointed experience where an AI assistant in the HR portal behaved completely differently from one in the IT agent workspace.

Our audit revealed that 15+ variations of the same "Chat" component existed in the codebase. Some opened as sidebars, others as floating modals. This lack of consistency created significant cognitive load for our users.

Before: Inconsistent side panels and chat layouts confusing users.

UI Debt

Engineers were maintaining 12+ variations of a simple chat input component.

Vocabulary Gap

Design called it "Modal," Dev called it "Popover," PM called it "Assistant."

Unscalable

Legacy patterns couldn't support new multi-turn agentic workflows.

02. Define: Anatomy vs. Capabilities

"How might we create a rigid container that ensures consistency, while allowing flexible content that enables product-specific innovation?"

I defined a strategy based on decoupling Anatomy from Capabilities.

1. Anatomy (Rigid): The "Shell." Window controls, history retention, input mechanisms. These are immutable to ensure the user always knows how to operate the AI.
2. Capabilities (Flexible): The "Skill." Conversational catalogs, data visualization cards, and interactive forms.

Input Anatomy

Standardizing the input field structure for consistency.

Contextual Attachments

Defining how files and context are added to the conversation.

03. Design: Flexible Modes

We needed the AI to adapt to the user's context. I designed a responsive system that supports three distinct modes: Float (for quick Q&A), Pinned (for side-by-side work), and Modal (for immersive, complex tasks).

Defining the 3 core interaction modes: Float, Pinned, and Modal.

Interactive Capabilities

Beyond simple text, we introduced "Interactive Views." This allows the AI to render a mini-application (like a Setup Wizard or a Data Dashboard) directly within the chat stream, keeping the user in flow.

Agentic Workflows

AI guiding the user through a complex integration setup.

Interactive Views

Rendering full dashboards within the conversational modal.

We also standardized advanced inputs like Voice, ensuring accessibility and ease of use for mobile workers.

04. Deliver: Impact

The AIX Adoption Guide launched in Q3 and became the mandatory standard for all new AI features. By providing pre-built, accessibility-tested components, we reduced the design-to-dev handoff time by approximately 40%.

100%
Adoption Rate in Q3
-40%
Design Cycle Time
Unified
Experience Across Clouds
Innovation · Visual Design

AI Design Spike

Rapidly defining the visual soul of "Now Assist" to beat the clock.

Role

Design Lead

Timeline

2 Weeks (Sprint)

Team

"Tiger Team" (3 Designers)


01. Discover: The Identity Crisis

With a major conference ("Knowledge") deadline looming, ServiceNow's AI features risked looking like a "Frankenstein" product. Multiple teams were building AI features in parallel with zero visual alignment. One team was using green sparkles, another blue bolts. This lack of cohesion threatened to dilute the brand impact of the launch and confuse users about which features were actually AI-powered.

We audited the current state and found that AI inputs were indistinguishable from standard search bars. Users didn't know they could "talk" to the system. We needed a way to signal "magic" without breaking the utilitarian aesthetic of the enterprise platform.

02. Define: The "Tiger Team" Approach

"How might we use color and motion to signal 'Intelligence' without overwhelming the utilitarian enterprise UI?"

I proposed a "Design Spike"—a concentrated, time-boxed effort to solve this singular problem. We formed a small squad of 3 senior designers and gave ourselves 2 weeks to define the visual language.

Our criteria for success were:
1. Distinctiveness: It must not look like a success message (Green) or a link (Blue).
2. Accessibility: It must pass contrast ratios in both light and dark modes.
3. Scalability: It must work as a 16px icon and as a full-page hero background.

03. Design: The Coral Solution

We explored over 20 colorways and motion studies. We landed on "Coral"—a vibrant orange-pink gradient. It was warm, human, and fundamentally different from the cool, clinical blues of the main UI.

We built a "Kit of Parts" including:
The Sparkle Icon: The universal signifier for AI.
Shimmer Loaders: A gradient animation that implies "thinking" rather than just "loading."
Response Containers: A subtle border treatment to differentiate AI-generated content from human-written notes.

Coral Component Library & Motion Study

04. Deliver: Unblocking Execution

The impact was immediate. We distributed the library via Figma, and within 48 hours, 5 different engineering teams had updated their UIs. The consistent branding was a key talking point in the keynote presentation, creating a unified "Now Assist" identity that looked intentional and polished.

5
Teams Unblocked
100%
Deadline Met
High
Brand Recognition
Product Vision · No-Code

Admin Studio: AI Agents

Democratizing AI creation with a No-Code "Wizard" for System Administrators.

Role

Lead Product Designer

Timeline

3 Months

Team

Admin Experience Unit


01. Discover: The Technical Barrier

ServiceNow is a powerful platform, but it has a notoriously steep learning curve. Historically, creating a "Virtual Agent" required knowing proprietary scripting languages and navigating a complex flow chart interface.

Our research identified a key persona: the "Accidental Admin"—often an IT generalist who needs to set up a simple Q&A bot (e.g., "What is the Wi-Fi password?") but gets blocked by the technical complexity of our legacy tools. They were spending weeks configuring basic logic that should have taken minutes.

02. Define: The No-Code Vision

"How might we hide the complexity of the backend and allow admins to define agents using plain English, effectively 'programming' via conversation?"

We defined the vision for "Admin Studio"—a centralized hub for AI creation. The core philosophy was "Progressive Disclosure." We would show the simple, happy-path options first, and tuck the advanced configuration (temperature settings, API hooks) behind an "Advanced" toggle.

03. Design: Guided Setup

I designed a "Guided Setup" wizard. Instead of dropping the user onto a blank canvas (which causes "blank page paralysis"), the system asks 4 simple questions:
1. Who is this agent for? (e.g., Employees in London)
2. What triggers it? (e.g., Questions about VPN)
3. What should it do? (e.g., Search the Knowledge Base)
4. Where should it live? (e.g., Slack and Web Portal)

Based on these inputs, the system auto-generates the underlying flow logic. I built high-fidelity prototypes demonstrating this "text-to-app" flow, which clarified the requirements for the engineering team and proved that a no-code solution was feasible.
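
The mapping from the wizard's four answers to generated flow logic can be sketched in a few lines. This is a hypothetical illustration only: the function name, field names, and flow schema below are assumptions for demonstration and do not reflect the actual ServiceNow data model.

```python
# Hypothetical sketch: the Guided Setup wizard's four plain-language answers
# are mapped to a minimal generated flow definition. All names are illustrative.

def generate_agent_flow(audience: str, trigger: str,
                        action: str, channels: list[str]) -> dict:
    """Turn the wizard's four answers into a minimal flow definition."""
    return {
        "scope": {"audience": audience},                       # 1. Who is this agent for?
        "trigger": {"type": "topic_match", "topic": trigger},  # 2. What triggers it?
        "steps": [{"action": action}],                         # 3. What should it do?
        "deployments": channels,                               # 4. Where should it live?
    }

flow = generate_agent_flow(
    audience="Employees in London",
    trigger="Questions about VPN",
    action="search_knowledge_base",
    channels=["Slack", "Web Portal"],
)
```

The point of the sketch is the design principle: the admin never sees the flow schema on the right-hand side, only the four questions on the left.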

Guided Setup Wireframes

04. Deliver: Strategic Clarity

These prototypes were used to secure executive buy-in for the Q4 roadmap. We shifted the engineering strategy from "building more flow features" to "building a simplified layer on top of flow." This pivot is expected to reduce the Time-to-Deployment for a standard agent from 2 weeks to 30 minutes.

Q4
Roadmap Approved
30m
Target Deployment Time
High
Stakeholder Alignment
Work in Progress · Analytics · Natural Language

AI Data Explorer | Coming Soon

Replacing SQL queries with Natural Language to democratize data analytics.

Role

Lead Designer

Timeline

TBD

Team

Analytics Unit


01. Discover: The Data Barrier

ServiceNow holds vast amounts of enterprise data, but accessing it has traditionally required complex "Report Builders" that demand knowledge of database schemas, table names, and filtering logic.

Our research showed that Business Analysts often knew what they wanted (e.g., "Show me P1 incidents in London last week") but failed to build the report because they couldn't find the right table or filter operator. They were dependent on data scientists, creating a bottleneck that delayed decision-making by days or weeks.

02. Define: Conversational Analytics

"How might we allow business users to 'converse' with their data as if they were talking to a data scientist, removing the syntax barrier entirely?"

We focused on "Natural Language Querying" (NLQ). The key challenge wasn't just translating text to SQL; it was handling ambiguity. If a user says "Show me high priority tickets," do they mean Priority 1, or Priority 1 and 2? We defined a "Refinement Loop" strategy where the AI asks clarifying questions rather than guessing.

Agentic AI · Automation

Agentic AI in Employee Hub

Transforming fragmented HR processes into a seamless, orchestrated conversation.

Role

Lead Product Designer

Timeline

6 Months (End-to-End)

Team

HRSD Unit (3 Devs, 1 PM)


01. Discover: The Fragmentation of Work

In the modern enterprise, simple life events trigger complex administrative burdens. Our research revealed that an employee going on parental leave touches an average of 4 distinct systems: the Time-Off portal to log dates, the Benefits portal to add dependents, the Payroll system to adjust tax withholding, and often a separate IT portal to return equipment.

This fragmentation creates high cognitive load during an already stressful life transition. Employees were "swivel-chairing" between tabs, often missing critical steps, which resulted in pay discrepancies or lapsed insurance coverage. HR Service Centers were overwhelmed with Tier-1 tickets simply guiding users on where to click.

Context Switching

Users lose context when jumping between disparate legacy systems.

Process Errors

Manual coordination leads to missed steps and compliance risks.

Resolution Latency

Simple requests took days due to back-and-forth ticketing.

The Current State: A density of information that requires the user to hunt for the right module.

02. Define: The Orchestrator Model

"How might we invert the model—bringing the services to the user in a single conversation—rather than forcing the user to visit multiple services?"

We moved away from the concept of a "Chatbot" (which typically just retrieves articles) to an "Agentic Orchestrator." The core strategy was to build a "General Contractor" agent that sits at the top level. It doesn't know the specifics of tax law, but it knows which sub-agent does.

When a user expresses a complex intent like "I'm having a baby," the Orchestrator breaks this down into a dependency tree and spins up the specialized sub-agents (Leave Agent, Benefits Agent, IT Agent) to execute the tasks in the correct order, shielding the user from the backend complexity.
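
The dependency-tree idea can be sketched with a topological sort: the Orchestrator resolves which sub-agent tasks depend on which, then executes them in a safe order. The agent names and graph below are hypothetical, chosen to mirror the parental-leave example.

```python
# Minimal sketch of the "General Contractor" model: one intent maps to a
# dependency graph of sub-agent tasks, resolved into an execution order.
# Task names and dependencies are illustrative only.
from graphlib import TopologicalSorter

# task -> set of tasks that must complete first
PARENTAL_LEAVE_PLAN = {
    "submit_leave_request": set(),                          # Leave Agent
    "update_benefits": {"submit_leave_request"},            # Benefits Agent
    "adjust_payroll": {"update_benefits"},                  # Payroll Agent
    "schedule_equipment_return": {"submit_leave_request"},  # IT Agent
}

def build_execution_order(plan: dict) -> list[str]:
    """Resolve the dependency tree into a safe execution order."""
    return list(TopologicalSorter(plan).static_order())

order = build_execution_order(PARENTAL_LEAVE_PLAN)
```

The user only says "I'm having a baby"; the ordering above is exactly the backend complexity the Orchestrator shields them from.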

03. Design: Seamless Handoffs

The primary design challenge was Trust & Transparency. If an AI is acting on your behalf (changing payroll), you need to feel in absolute control. I developed a "Plan & Execute" UI pattern.

Instead of the agent simply saying "Done," it presents a Summary Card: "I understand you want to go on leave. To do this, I will: 1. Submit request for May 1st, 2. Update benefits, 3. Set OOO reply. Do you want me to proceed?" This "Human-in-the-loop" confirmation step was critical for adoption.
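
The "Plan & Execute" gate described above can be sketched in two small functions: one renders the Summary Card, the other refuses to act without explicit consent. The wording and function names are illustrative, not the shipped implementation.

```python
# Sketch of the "Plan & Execute" pattern: the agent presents its planned steps
# and acts only after explicit confirmation. Step wording is illustrative.

def summary_card(intent: str, steps: list[str]) -> str:
    """Render the human-readable plan shown before anything runs."""
    lines = [f"I understand you want to {intent}. To do this, I will:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, 1)]
    lines.append("Do you want me to proceed?")
    return "\n".join(lines)

def execute(steps: list[str], confirmed: bool) -> list[str]:
    """Human-in-the-loop gate: no side effects without an explicit 'yes'."""
    if not confirmed:
        return []
    return [f"done: {step}" for step in steps]

card = summary_card("go on leave", [
    "Submit request for May 1st",
    "Update benefits",
    "Set OOO reply",
])
```

The design choice worth noting is that confirmation is a hard precondition in the execution path, not a dismissible UI nicety.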

1. Natural Language Trigger

User speaks naturally; Orchestrator parses intent.

2. Contextual Action

Sidebar agent handles approvals without leaving the hub.

Proactive Intelligence

We went a step further with "Smart Operations." The system detects patterns (e.g., a sudden spike in checkout wait times or inventory depletion) and proactively prompts the manager with a solution, shifting the paradigm from "Request/Response" to "Sense/Respond."

04. Deliver: Impact

The Agentic HR Assistant has fundamentally changed how employees interact with enterprise systems. We effectively collapsed 4 different portals into 1 chat window.

60%
Reduction in Time-to-Resolution
High
Deflection of Tier-1 Tickets
4.8/5
Employee CSAT Score
Design System · Guidelines

AI UX Patterns

Establishing trust through Explainability, Progressive Disclosure, and Ethical Design.

Role

Principal Designer

Timeline

Ongoing

Team

Design Systems

Deliverable

Pattern Readout Deck


01. Discover: The "Black Box" Problem

Our internal research on "Ethical AI" uncovered a disturbing trend: users were rejecting valid AI suggestions because they didn't understand the provenance of the data. In user testing, participants referred to the AI as a "black box" and hesitated to delegate high-stakes tasks (like approving a budget) without oversight.

Furthermore, we found significant Discoverability Issues. Fulfillers struggled to recognize AI-enabled features because every product team used a different icon—some used a robot, some a lightning bolt, some a brain. This cognitive friction meant valuable productivity tools were being ignored simply because users didn't know they existed.

Opacity

"I don't know where this data came from."

Invisibility

Users missed AI features due to poor iconography.

Ethical Risk

Lack of "human-in-the-loop" controls.

02. Define: Ethical Principles

"How might we design interaction patterns that prioritize Transparency, Accountability, and Human Control, converting skepticism into trust?"

We aligned our design strategy with the core principles of Ethical AI: Fairness, Explainability, and Oversight. We determined that every AI interaction must answer three questions for the user:
1. Why are you showing me this?
2. Where did you get this info?
3. How can I undo this?

03. Design: The Trust Library

I architected a library of 3 core patterns to solve these specific friction points:

1. Inline Assist ("Ghost Text")

The Problem: Popups break flow.
The Solution: Borrowing from coding IDEs, we implemented "Ghost Text" inside form fields. This is low-friction and non-blocking. Crucially, we standardized the interaction: Tab to accept, Keep typing to ignore. This puts the user in the driver's seat, framing the AI as a copilot, not a commander.

2. Explainer Modals ("Why this?")

The Problem: Hallucination fear.
The Solution: Every high-stakes prediction (e.g., Risk Score) now includes a standardized "Why this?" link. Clicking it opens a non-intrusive modal that cites the source data (e.g., "Based on 5 similar incidents resolved by Agent Smith"). This citation view is the cornerstone of our Explainability strategy.

Explainability & Trust Patterns

04. Deliver: Governance

To ensure long-term compliance, we established an AI Governance Board. I created role-based documentation for "AI Stewards"—a new persona we identified in our research—who are responsible for auditing AI performance. The patterns I designed are now baked into the "Polaris" design system, ensuring that any future AI feature automatically inherits these ethical safeguards.

Key
Standard for Trust
High
User Confidence
Safe
Compliance Ready
