AI Stock Screener

Describe your investable idea in plain English

Role: Lead UI/UX Designer at CommSec
Timeline: July 2025
Team: Individual
Tools: Figma, FigJam, Figma Make, ChatGPT, Bard, Grok, Claude
Type: Strategic Innovation
Scope: Validated prototype; documented for delivery (not yet shipped).
Recognition: AI Innovation Award (July 2025).

About the Project

This was an individual innovation project that stemmed from observing investors' pain points, behaviours, latent needs, and market gaps. CommSec, the brokerage arm of the Commonwealth Bank of Australia, is the leader in the Australian online trading market, serving over 2.6 million clients. Its web and mobile platforms offer various investment options, including Australian and international stocks.

In my role as Lead UI/UX Designer, I handled end-to-end research, conversation design (flows, microcopy, repair), LLM policy prompt authorship (do/don't/how), pattern library creation (reusable conversation components), and measurement leadership (metrics, dashboards, test cadence).

The AI Stock Screener is a natural-language interface that lets people find investable ideas by describing them in plain English—no financial jargon or complex filters. Queries like “a company in remote healthcare diagnostics” or “a sustainable beauty brand under $25” are interpreted into intents and entities, returning explainable shortlists of stocks, ETFs, or derivatives.

Problem & Challenges

Finding investments today is harder than it should be. The stakes are high; poor recommendations can bankrupt beginners. Non-transparent systems and social manipulation skew outcomes, leaving genuine investors, especially newcomers and values-driven users, at a disadvantage. Most people never get close to opportunities that ought to be broadly accessible.

Even the stock screeners typically recommended for finding stocks are not particularly helpful: they are ratio-driven tools that require investors to pre-select metrics (e.g., P/E, D/E, sector) and hard thresholds. This filter-first model works well for financially fluent users chasing risk-adjusted returns, but it breaks down for those seeking values-led discovery. When a user types “ethical Australian energy stock with dividends,” there is no obvious, standard mapping to filters, and inconsistent data makes “ethical” a moving target. In Australia’s retail market, where many newer investors aren’t fluent in finance, this model effectively hides opportunities from new investors and anyone who starts with beliefs, not ratios.

We needed to close this gap with a safer, more explainable path to discovery—one that meets people where they are and still respects regulation and trust.

A Modern Stock Screener

Problem Statement:

How might we open the screener to investors who think in ideas, not ratios?

The opportunity?

Reposition discovery from “advanced tool” to “plain-English entry point,” growing penetration beyond power users.

Research & Discovery

To ground the project in real user needs, I started with thorough research. I analysed 150+ interviews from CommSec's existing customers, revealing that 70% of investment discovery queries involved plain-language descriptions rather than technical filters.

I conducted 10 generative interviews with novice and intermediate investors, including newcomers (e.g., millennials entering the market) and values-driven users (e.g., those prioritising ESG factors). Key insights: users felt overwhelmed by jargon, with one interviewee saying, "I know I want 'green energy stocks,' but I don't know what P/E means or where to start." Pain points included lack of transparency ("Why did this stock show up?") and fear of making uninformed decisions in a regulated space.

Competitive analysis of tools like Yahoo Finance, Seeking Alpha, TradingView, and MooMoo, plus emerging AI assistants (e.g., ChatGPT for finance queries), showed gaps in explainability and compliance.

From my technology research, the clearest finding was that AI uniquely bridges the gap between values-led intent and investable outputs. Modern NLU reliably maps messy, plain-English queries to entities and constraints; retrieval + normalisation layers reconcile inconsistent ESG signals; and policy-guided generation can produce explainable shortlists with a concise Reason and Caution per result while refusing advice. In controlled tests, this stack reduced reformulations and time-to-first-useful-answer versus filter-first baselines, confirming that AI isn’t just “nice to have” here—it’s the only practical way to make values-aligned discovery usable at scale without demanding financial fluency.
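
To make the interpretation step concrete, the sketch below shows the kind of structured output the NLU layer is expected to hand to retrieval. It is an illustrative toy, not the production pipeline: the keyword rules, field names, and confidence value are all assumed for demonstration.

```python
# Minimal sketch (hypothetical, not the production NLU): what the interpretation
# layer should produce for a plain-English query. The real mapping is handled by
# an LLM/NLU service; this only illustrates the target query -> intent/entities shape.
from dataclasses import dataclass, field

@dataclass
class ParsedQuery:
    intent: str                      # e.g. "values_ethical", "thematic_discovery"
    entities: dict = field(default_factory=dict)
    confidence: float = 0.0          # drives the one-question repair strategy

def interpret(query: str) -> ParsedQuery:
    """Toy keyword mapping to show the query -> intent/entities contract."""
    q = query.lower()
    entities = {}
    if "ethical" in q or "sustainable" in q:
        intent = "values_ethical"
        entities["values"] = ["ESG"]
    elif "dividend" in q or "income" in q:
        intent = "income_focus"
    else:
        intent = "thematic_discovery"
    if "australian" in q or "asx" in q:
        entities["region"] = "AU"
    if "dividend" in q:
        entities["income"] = "dividends"
    if "energy" in q:
        entities["theme"] = "energy"
    return ParsedQuery(intent=intent, entities=entities, confidence=0.72)

print(interpret("ethical Australian energy stock with dividends"))
# ParsedQuery(intent='values_ethical',
#             entities={'values': ['ESG'], 'region': 'AU', 'income': 'dividends', 'theme': 'energy'},
#             confidence=0.72)
```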

Ideation & Design Process

Building on research insights, I brainstormed alternatives like enhanced filter UIs, voice interfaces, and full conversational AI, ultimately prioritising a natural-language screener for its accessibility. The process addressed key requirements for the product, encompassing both technological and user needs.

Key requirements

  • Translate free-text intent into clear, investable universes (stocks, ETFs, derivatives) without jargon.
  • Respond safely and consistently, avoiding advice and predictions while disclosing limits and risks.
  • Explain why each result appears, in plain English, so users can judge relevance and make informed next steps.
  • Keep users self-serving with obvious refinements and actions to reduce effort and hand-offs.
  • Be instrumented for learning from day one, so we can measure, test, and improve continuously.

Trade-offs throughout included balancing explainability (adding Reason/Caution lines) with brevity to prevent cognitive overload.

AI Agent Design, Tests, Iteration

For LLM integration, I iterated prompts through testing with AI agents: starting with basic system roles, I refined them based on observed failures (e.g., adding fairness rules after bias tests surfaced a single major bank in isolation). Early prototypes tested plain-English refinements and repair strategies using AI simulations. The process validated components through AI-agent prototypes, showing reduced confusion (e.g., "Why is this here?" queries dropped 50% in tests).
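
The guardrail checks ran as simple regression cases against each prompt revision. The sketch below illustrates one such check for the fairness rule (no single major bank surfaced in isolation); the ticker set and pass/fail cases are illustrative assumptions, not the production test suite.

```python
# Hedged sketch of an automated check used during prompt iteration. The fairness
# rule (no single major bank shown in isolation) comes from this case study; the
# ticker list and cases below are illustrative only.
MAJOR_BANKS = {"CBA", "WBC", "NAB", "ANZ"}  # assumed set of "major bank" symbols

def violates_fairness_rule(shortlist: list[str]) -> bool:
    """Flag a response that shows exactly one major bank with no comparable peer."""
    banks_shown = [s for s in shortlist if s in MAJOR_BANKS]
    return len(banks_shown) == 1

# Example regression cases run after each prompt revision
assert violates_fairness_rule(["CBA", "CSL", "WES"]) is True    # isolated bank -> fail
assert violates_fairness_rule(["CBA", "NAB", "CSL"]) is False   # paired with a peer -> pass
assert violates_fairness_rule(["CSL", "WES", "TLS"]) is False   # no banks -> pass
```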

UI Design and Test

I began with low-fidelity sketches to outline chat flows for the AI stock screener, prioritizing batch responses to address latency issues. These captured essentials like user inputs, summaries, result cards, and repair paths.

Next, I developed high-fidelity prototypes in Figma, creating interactive journeys with clickable refinements, CTAs (e.g., "Add to watchlist"), and evolved pattern library components into polished, reusable UI elements.

I refined the design through three rounds of guerrilla usability testing with 15 users: Round 1 focused on Main Screen content; Round 2 on the Result Card page; Round 3 confirmed the overall flow, cutting confusion queries like "Why is this here?" by 50%.

Stock Screener Investment Preferences
Stock Screener Results Display

Main Page Wireframe 

Result Card Page Wireframe

Solution & Implementation

I engineered a policy-prompted conversational agent plus a reusable pattern set and analytics loop. Highlights:

1) Conversation design (flows, microcopy, repair)

  • End-to-end flow (batch level):
    1. Summary (what I understood + what I’ll do)
    2. Ten Result Cards (each explainable)
    3. Filter Strip (plain-English refinements)
    4. CTAs (“Add to watchlist” | “Find more like this”)
    5. Disclaimer/Handover.

  • Intent model: Thematic discovery, Values/Ethical, Income focus, Momentum/Technicals, Compare/Shortlist.

  • Examples of user turns → system replies:
    • User: “fast-growing remote diagnostics under $30.”
    • Assistant (summary): “You’re exploring remote healthcare diagnostics under $30. I’ll list 10 related stocks and explain each choice. Note: I don’t predict performance; results are AI-generated and require your validation.”
    • Repair (low confidence): “Do you mean telehealth platforms or device makers? I’ll tailor results after you choose.”
Canonical flow

End-to-end flow (batch level)
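
As a concrete reference for the flow above, the sketch below models the batch-level response contract: one Summary, explainable Result Cards, a Filter Strip, CTAs, and a disclaimer. The data shapes and the example card are illustrative assumptions, not production code.

```python
# Minimal sketch (assumed shapes) of the batch-level response the conversation
# design describes. The example company and values are fictional placeholders.
from dataclasses import dataclass

@dataclass
class ResultCard:
    name: str
    symbol: str
    reason: str    # one sentence tied to the user's intent
    caution: str   # one practical risk in plain English

@dataclass
class BatchResponse:
    summary: str                 # "what I understood + what I'll do"
    results: list[ResultCard]    # ten cards in the designed flow
    filter_strip: list[str]      # plain-English refinements
    ctas: list[str]              # e.g. "Add to watchlist", "Find more like this"
    disclaimer: str

response = BatchResponse(
    summary=("You're exploring remote healthcare diagnostics under $30. "
             "I'll list 10 related stocks and explain each choice."),
    results=[ResultCard("Example Health Ltd", "XHL",  # fictional example card
                        reason="Builds remote diagnostic devices and trades under $30 per share.",
                        caution="Small-cap stock; revenue depends on a single product line.")],
    filter_strip=["ASX-listed only", "Dividend yield ≥ 4%", "P/E below industry median"],
    ctas=["Add to watchlist", "Find more like this"],
    disclaimer="I don't predict performance; results are AI-generated and require your validation.",
)
```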

2) LLM policy prompting (do/don’t/how) and guardrails

  • System role (excerpt): 
    “You are a neutral investing assistant. Prioritise clarity, safety, and policy compliance. Use plain English; avoid predictions or advice; disclose limits; never guess.”
  • Policy prompts (excerpts):
    • Do: explain why selected in one sentence tied to the user’s intent; include one Caution in plain language; show latest report and website links.
    • Don’t: recommend, predict, or imply certainty; avoid loaded terms (“best”, “guaranteed”); if the prompt implies future performance, insert a bold-italic reminder.
    • Abstain: if there’s no match, state: “I couldn’t find any public company directly in [topic].” Offer nearest categories.
    • Fairness rule: never surface a single major bank in isolation—pair with at least one comparable peer.

  • Repair strategies: one clarifying question on low confidence; fallback to nearest categories; switch to education mode for advice-seeking prompts.
Policy prompts

Policy Prompts Flow

Repair Strategy & Handover Flow
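
The sketch below shows one plausible way the system role and the do/don't/abstain/fairness rules could be assembled into a single system prompt. The wording mirrors the excerpts above; the assembly function itself is an assumption for illustration.

```python
# Hedged sketch: composing the policy prompt from the rules documented above.
SYSTEM_ROLE = (
    "You are a neutral investing assistant. Prioritise clarity, safety, and "
    "policy compliance. Use plain English; avoid predictions or advice; "
    "disclose limits; never guess."
)

POLICY_RULES = [
    "Do: explain why each result was selected in one sentence tied to the user's intent.",
    "Do: include one Caution per result in plain language, plus latest report and website links.",
    "Don't: recommend, predict, or imply certainty; avoid loaded terms such as 'best' or 'guaranteed'.",
    "Don't: answer prompts implying future performance without a bold-italic risk reminder.",
    "Abstain: if there is no match, say \"I couldn't find any public company directly in [topic]\" "
    "and offer the nearest categories.",
    "Fairness: never surface a single major bank in isolation; pair it with at least one comparable peer.",
    "Repair: on low confidence, ask one clarifying question before answering; "
    "switch to education mode for advice-seeking prompts.",
]

def build_system_prompt() -> str:
    """Join the role and numbered policy rules into one system message."""
    rules = "\n".join(f"{i}. {rule}" for i, rule in enumerate(POLICY_RULES, start=1))
    return f"{SYSTEM_ROLE}\n\nPolicy rules:\n{rules}"

print(build_system_prompt())
```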

3) Patterns & content design (reusable components)

  • Result Card (per stock):
    Name (H2)
    • Symbol; Exchange; Type; Sector
    • Key metrics: Market Cap, P/E, Cash Flow, Dividend Yield, 52-week range
    • Reason for selection: one natural sentence linking to the user’s intent
    • Caution: one practical risk in plain English
    • Latest report | Website
    • CTAs: [Add to watchlist] | [Find more like this]
  • Filter Strip (batch level): Five context-aware refinements drawn from Fundamentals, Valuation, Dividends, Performance, or Technicals (e.g., “Dividend yield ≥ 4% | P/E below industry median | ASX-listed only | 52-week high within 10% | Revenue growth > 10%”).
  • Handover pattern: triggered when confidence remains low after one repair, when advice is requested, or when data is missing; the agent passes a context package (user goal, last turns, confidence, options shown, errors), sketched in code below.
Main Page

Patterns & Content Design (Main Screen)

Response page

Patterns & Content Design (Result Card)
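
For the handover pattern described above, the sketch below models the context package and its three triggers in code. Field names and the confidence threshold are illustrative assumptions.

```python
# Minimal sketch (assumed field names) of the handover "context package" and the
# triggers from the pattern library; the 0.5 confidence threshold is illustrative.
from dataclasses import dataclass, field

@dataclass
class HandoverContext:
    user_goal: str              # the user's stated intent in their own words
    last_turns: list[str]       # recent conversation turns for continuity
    confidence: float           # model confidence at the point of handover
    options_shown: list[str]    # symbols/cards already displayed to the user
    errors: list[str] = field(default_factory=list)  # data gaps or refusals hit

def should_hand_over(confidence: float, repairs_used: int,
                     advice_requested: bool, data_missing: bool) -> bool:
    """Encodes the three handover triggers described in the pattern."""
    low_after_repair = confidence < 0.5 and repairs_used >= 1
    return low_after_repair or advice_requested or data_missing

# Example: confidence is still low after one clarifying question, so hand over.
print(should_hand_over(confidence=0.4, repairs_used=1,
                       advice_requested=False, data_missing=False))  # True
```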

4) Measurement & optimisation

  • Core metrics & definitions:
    • Containment rate: % of sessions completing without human handover.
    • Handover rate: % of sessions escalated to agents (with reasons).
    • Turns-to-task: average conversational steps to reach shortlist/download/watchlist add.
    • Reformulations/session: proxy for friction; should drop over time.
    • Time-to-first-useful-answer: seconds to first meaningful result display.
    • CSAT/CES: post-conversation satisfaction and effort.
  • Targets (illustrative for first 90 days): Containment +8–12 pts; CES +0.3–0.6; CSAT +4–7 pts; reformulations −20–30%.
  • Cadence: Weekly copy/prompt A/B tests (e.g., “Reason” phrasing; order of filters), monthly failure-utterance mining, quarterly intent re-prioritisation.

Hybrid Measurement & Optimisation Loop
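
The sketch below shows how the core metrics could be computed from per-session instrumentation logs. The log schema and sample values are illustrative assumptions; the targets above remain the benchmark.

```python
# Sketch (assumed log schema) of computing the core metrics from session events.
from statistics import mean

sessions = [  # hypothetical instrumentation records
    {"handover": False, "turns": 3, "reformulations": 1, "seconds_to_first_answer": 4.2},
    {"handover": True,  "turns": 6, "reformulations": 3, "seconds_to_first_answer": 5.1},
    {"handover": False, "turns": 2, "reformulations": 0, "seconds_to_first_answer": 3.8},
]

containment_rate = mean(not s["handover"] for s in sessions)      # completed without handover
handover_rate = 1 - containment_rate                              # escalated to agents
turns_to_task = mean(s["turns"] for s in sessions)                # avg steps to shortlist/watchlist
reformulations = mean(s["reformulations"] for s in sessions)      # friction proxy
time_to_first_useful_answer = mean(s["seconds_to_first_answer"] for s in sessions)

print(f"Containment {containment_rate:.0%} | Handover {handover_rate:.0%} | "
      f"Turns {turns_to_task:.1f} | Reformulations {reformulations:.1f} | "
      f"TTFUA {time_to_first_useful_answer:.1f}s")
```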

The Impact

  • Clarity and trust up: Usability sessions showed higher perceived understanding (“I can see why this stock is here”) due to explicit Reason and Caution lines and the batch Summary.
  • Reduced effort path: The Filter Strip and consistent CTAs kept users in self-serve, establishing a clear route to track containment and turns-to-task.
  • Operational readiness: Guardrails, refusal/repair, and handover rules make the flow safe for a regulated context; the pattern library enables rapid scaling across similar domains.
  • Recognition & adoption: Awarded AI Innovation Award (Jul 2025); documented as a reusable conversation-design system and prioritised for the product backlog.

CommSec AI Innovation Award

July 2025
Left | Ashkan Deravi | Senior Human Experience Designer
Right | James Fowle | Managing Director at CommSec

Learnings & Takeaways

People search in ideas, not ratios—plain English works. A single, targeted clarifying question early (e.g. “telehealth platforms or device makers?”) cut back-and-forth and got to useful results faster. Adding two short lines—Reason (“why this showed up”) and Caution (“what to watch out for”)—lifted confidence and reduced “why is this here?” confusion. A firm safety note whenever users asked for predictions kept us compliant while still offering neutral next steps they could choose.

Other Projects

It is what we think we know already that often prevents us from learning.

–– Claude Bernard ––