BiMasters · Power BI Toolkit
Welcome to the Design Hub
Everything you need to design world-class Power BI reports. Color tools, DAX library, visual guidance — all in one place.
Color Palette Generator
Generate accessible, cohesive palettes from any brand color. Export as Power BI theme JSON.
Canvas Size Calculator
Find perfect dimensions for desktop, mobile, print and embed. Exports page size JSON.
Accessibility Checker
Test color contrast against WCAG 2.1 AA/AAA for all text sizes and UI components.
DAX Snippet Library
Hundreds of production-ready DAX patterns. Search, filter, add your own.
Visual Recommender
Answer 4 questions about your data — get the perfect Power BI visual with DAX suggestions.
Copilot Optimization Guide
Use M365 Copilot to analyse your PBIP files and generate a consultant-grade optimization plan for semantic models, Power Query, and DAX.
Best Practices Guide
Eight design principles for Power BI dashboards — the reasoning behind every layout, color, and interaction decision. Read once, refer back often.
SVG Shape Library
Browse, customise and copy SVG shapes, KPI frames, banners, dividers and Fluent icons — all sized for Power BI.
Visual Decision Tree
Step-by-step flowchart to choose the right visual. Browse all 56 built-in Power BI visuals.
Design Tool 01
Color Palette Generator
Generate a full Power BI theme from a single brand color. Preview swatches and export as theme JSON.
Configuration
Set brand color & style
Live Preview
Click swatch to copy hex
UI Colors
Power BI Theme JSON
View → Themes → Browse for themes
Click Generate…
Design Tool 02
Canvas Size Calculator
Find perfect dimensions for any output. See aspect ratio, physical size, and get Power BI page settings JSON.
Dimensions
Preset or custom
Preview
Scaled to fit
Page Size JSON
Design Tool 03
Accessibility Checker
Test foreground/background contrast against WCAG 2.1 AA and AAA standards.
Color Pair
Contrast Preview
—
Enter colors
WCAG 2.1 Results
AA Normal
—
AA Large
—
AAA Normal
—
AAA Large
—
Text Preview
Normal text — Body copy in a Power BI tooltip or card.
Large text — Report title or KPI value
Small caption — Axis labels, data source footnotes
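For reference, the AA/AAA verdicts in this checker come from the WCAG 2.1 contrast-ratio formula:

```latex
% Relative luminance of a color (R, G, B are linearized sRGB channels in [0,1])
L = 0.2126\,R + 0.7152\,G + 0.0722\,B
% Channel linearization, for each sRGB channel value c_s scaled to [0,1]
c = \begin{cases}
  c_s / 12.92 & \text{if } c_s \le 0.03928 \\
  \bigl( (c_s + 0.055) / 1.055 \bigr)^{2.4} & \text{otherwise}
\end{cases}
% Contrast ratio, where L_1 is the lighter color's luminance
CR = \frac{L_1 + 0.05}{L_2 + 0.05}
```

The thresholds: AA requires CR ≥ 4.5:1 for normal text and ≥ 3:1 for large text; AAA requires ≥ 7:1 and ≥ 4.5:1 respectively.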
DAX Library
Snippet Library
Search, filter and copy from hundreds of production-ready DAX patterns. Add your own snippets — they persist in your session.
Visual Selector 01
Visual Recommender
Answer a few questions about your data and goal — get the ideal Power BI visual with guidance and DAX snippets.
1 · Data Type
2 · Goal
3 · Dimensions
4 · Audience
Answer the questions to see recommendations →
Visual Selector 02
Browse All Visuals
All 56 built-in Power BI visuals with when-to-use guidance, data requirements, and related DAX.
Guides
Copilot Optimization Guide
Use M365 Copilot to analyse your Power BI files and generate a consultant-grade optimization plan — in under 20 minutes.
✦
What this guide does
Slow and inefficient Power BI reports are a widespread cause of poor user experience, bloated costs, and wasted compute capacity. Fixing them used to require days of expert effort. This guide shows you how to use M365 Copilot to read your Power BI Desktop files and produce a detailed, actionable mitigation plan — covering semantic model design, Power Query performance, and DAX optimization — in less than 20 minutes.
No extra licensing beyond M365 Copilot
Works with standard Power BI Desktop
Intermediate Power BI skills sufficient
Step 1 — Prepare your Power BI Desktop file
Export TMDL files so Copilot can read them
1
Save as PBIP format
In Power BI Desktop: File → Save as → choose Power BI Project (.pbip). This creates a folder structure containing human-readable TMDL definition files.
2
Create a working folder in OneDrive
Create an empty folder in OneDrive that M365 Copilot can access. This will hold the files you want Copilot to analyse.
3
Copy the TMDL definition files
From the PBIP folder, navigate to
[FileName].pbix.SemanticModel → definition. Copy the model.tmdl, relationships.tmdl, and expressions.tmdl files to your OneDrive folder.
4
Copy all table files
From
definition → tables, copy all .tmdl table files. Include the auto date/time intelligence tables (the ones with long GUIDs in their names).
5
Rename extensions from .tmdl to .txt
Select all copied files in OneDrive and rename the extension from
.tmdl to .txt. This allows Copilot to read them as plain text.
6
Create a shareable link to the folder
In OneDrive, right-click the folder → Share → Copy link. Make sure the link is set to “Anyone with the link can view”. You’ll paste this URL into the Copilot prompt below.
Step 2 — Run the Assessment Prompt in M365 Copilot
Set model to Auto · Use the Researcher Agent · Replace the OneDrive URL in the prompt
💡 In M365 Copilot, leave the model selection set to Auto and use the Researcher Agent. Paste the prompt below and replace
[paste your OneDrive URL here] with your actual folder link.
You are a senior Power BI architect and performance specialist. I have shared a folder of TMDL files exported from a Power BI Desktop semantic model. The folder is available at: [paste your OneDrive URL here]

Please perform a full technical review of this semantic model and produce a structured report covering the following six areas:

SECTION 1 — BUSINESS CONTEXT
Before any technical analysis, explain in plain English:
- What business domain this model appears to serve (sales, finance, HR, operations, etc.)
- What types of analytical questions the model is designed to answer
- Who the likely audience is based on the measure names and table structure
- Any domain-specific patterns or assumptions you can identify

SECTION 2 — MODEL STRUCTURE REVIEW
Provide a complete inventory of:
- All tables (fact vs dimension, whether they follow star schema)
- All relationships (cardinality, direction, any many-to-many or bidirectional)
- All calculated columns (flag any that should be measures instead)
- All parameters and what they control
- Data sources and connection types
- Flag any tables that appear to be staging, intermediate, or unused

SECTION 3 — POWER QUERY PERFORMANCE REVIEW
Analyse every Power Query (M) script and identify:
- Steps that break query folding (list each one by query name and step)
- Unnecessary Table.Buffer, List.Buffer, or row-by-row operations
- Redundant merge or expand steps that could be eliminated
- Unused queries or staging queries that are loaded but not needed
- Transformations that should be pushed upstream to the source
- Provide a rewritten version of the worst-performing query as an example

SECTION 4 — DAX QUALITY AND PERFORMANCE REVIEW
Review every measure and calculated column for:
- Use of FILTER() as a table argument (should use predicate syntax instead)
- Iterator functions (SUMX, AVERAGEX) over large tables without necessity
- Calculated columns that add to model size and could be measures
- Missing use of DIVIDE() instead of direct division (divide-by-zero risk)
- Measures that use CALCULATE() with ALL() when REMOVEFILTERS() is cleaner
- Dependency chains that are unnecessarily long or complex
- Any time intelligence that will break if the date table is not marked
Provide an optimised rewrite for each problematic measure you find.

SECTION 5 — DATA MODEL BEST PRACTICES AUDIT
Score the model against these 10 best practices (Pass / Fail / Partial for each):
1. Star schema design with clear fact and dimension separation
2. All relationships use single-direction cross-filtering
3. Date table is present, marked, and covers the full data range
4. No high-cardinality text columns in fact tables
5. Numeric columns use correct data types (not stored as text)
6. Calculated columns minimised (prefer measures)
7. Unused columns are hidden or removed
8. Consistent naming convention across tables, columns and measures
9. Measures are organised in display folders
10. Auto date/time intelligence is disabled (custom date table used instead)

SECTION 6 — COPILOT AND Q&A READINESS
Evaluate the model for use with Power BI Copilot, Q&A, and AI-driven features:
- Which measure and column names are ambiguous or would confuse a language model
- Which tables or fields lack descriptions that should have them
- Which fields should be hidden from the Q&A surface
- Specific renaming recommendations with before/after examples
- Any relationship or schema ambiguity that would produce wrong AI-generated results

OUTPUT FORMAT
- Use clear section headers matching the six areas above
- Use bullet points for findings, not paragraphs
- Include specific DAX or M code snippets when referencing issues
- End each section with a priority-ordered action list (High / Medium / Low impact)
- Be specific — reference actual table names, measure names, and column names from the files
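As a concrete illustration of one pattern the assessment prompt flags — direct division versus DIVIDE() — a typical before/after looks like this (measure names are illustrative, not from any specific model):

```dax
-- Before: raises an error or returns infinity when [Total Orders] is 0
Avg Order Value = [Total Revenue] / [Total Orders]

-- After: DIVIDE() returns BLANK on divide-by-zero
-- (a third argument can supply an alternate result, e.g. 0)
Avg Order Value = DIVIDE( [Total Revenue], [Total Orders] )
```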
💾 Save the response as a Word document — this is your consultant-grade assessment.
Step 3 — Generate the Step-by-Step Mitigation Plan
Continue in the same Copilot conversation — add your OneDrive URL at the end
Using the assessment you just completed, create a prioritised step-by-step fix plan I can hand directly to a Power BI developer with intermediate skills. Structure the plan as follows:

PRIORITY 1 — CRITICAL FIXES (Do these first — high impact on performance or correctness)
For each fix:
- Problem: What is wrong and why it matters
- Location: Exact table/measure/query name in the model
- Fix: Step-by-step instructions to resolve it in Power BI Desktop
- Code: Before and after DAX or M code where applicable
- Impact: What will improve after the fix (speed, size, accuracy)
- Refresh required: Yes / No

PRIORITY 2 — IMPORTANT IMPROVEMENTS (Do these second — meaningful quality gains)
[Same structure as Priority 1]

PRIORITY 3 — BEST PRACTICE ALIGNMENTS (Do these when time allows — polish and maintainability)
[Same structure as Priority 1]

Rules for this plan:
- Every fix must be actionable by someone who did not write the original model
- Do not use vague language like "consider refactoring" — be precise
- If a DAX rewrite is needed, provide the complete new measure, not a partial snippet
- If a Power Query fix is needed, show the complete revised step or query
- Order fixes within each priority group by effort — quickest wins first

Folder reference: [paste your OneDrive URL here]
Step 4 — Optimize for Copilot & AI Readiness
Optional but recommended if you use Power BI Copilot or Q&A features
You are preparing this Power BI semantic model for production use with Microsoft Copilot, Power BI Q&A, and future AI agent integrations. Using the model files at [paste your OneDrive URL here], produce a Copilot Readiness Checklist. For each item below, give a status (✓ Ready / ✗ Action needed / ⚠ Partial) and specific instructions:

1. MEASURE NAMING
- Are all measures named in plain English that a non-technical user would understand?
- List every measure that uses abbreviations, codes, or technical terms
- Provide the recommended plain-English rename for each

2. TABLE AND COLUMN NAMING
- Are table names and column names human-readable?
- Flag any PascalCase, snake_case, or coded names that should be display-name friendly
- Provide rename recommendations

3. FIELD DESCRIPTIONS
- Which tables, measures, and key columns are missing descriptions?
- Write a ready-to-use description for each missing one (copy-paste into Model view)

4. HIDDEN FIELDS AUDIT
- Which columns are currently visible that should be hidden (foreign keys, internal IDs, technical flags)?
- Which measures or columns are hidden that should be visible for Q&A?
- Provide the exact list to show/hide

5. SYNONYM RECOMMENDATIONS
- For the top 10 most-used measures and dimensions, suggest 2–3 synonyms each that users might naturally type in Q&A (e.g. "revenue" → "sales", "income", "turnover")

6. RELATIONSHIP CLARITY FOR AI
- Are there any relationships where the direction or cardinality could confuse an AI agent?
- Flag any ambiguous join paths and recommend resolution

7. COPILOT VISUAL SUGGESTIONS READINESS
- Will Copilot be able to generate meaningful visuals from this model?
- What is the single most impactful change to make the model more Copilot-friendly?

Output as a checklist I can work through field by field in Power BI Desktop Model view.
When to use this approach
→Reports with slow visual rendering or long refresh times
→Semantic models built by self-service developers without BI governance
→Before publishing a major report to a wide audience
→Models with many calculated columns, complex DAX, or large DirectQuery tables
→Preparing a model for Copilot, Q&A, or Data Agent integration
→Scaling optimization practices across a large team of Power BI developers
Important considerations
→All sensitive metadata stays within your Microsoft 365 tenant — nothing leaves your organization
→Results will vary by model complexity — this approach works best on poorly optimized models
→Always validate Copilot’s DAX rewrites before applying — test in a dev environment first
→PBIP format is required — PBIX files alone cannot be read this way
→Requires M365 Copilot licence — not included in standard Microsoft 365
Design Principles
Power BI Dashboard Best Practices
A guide to the decisions behind a good dashboard — not just what to check, but how to think about each choice. Use the Accessibility Checker for a self-score; use this for the reasoning underneath.
In this guide
Eight principles, each with do/don’t pairs and a worked example
01 Start from the question, not the data
02 Earn every pixel — the 5-second test
03 Layout follows the eye, not the spec
04 Pick the right visual for the comparison
05 Color is information, not decoration
06 Interaction should feel inevitable
07 Performance is a design decision
08 Build for trust before you build for delight
01
Start from the question, not the data
A dashboard exists to answer specific decisions. If you can’t write down the three questions it should answer in one sentence each, you don’t have a dashboard — you have a data dump with charts on top.
The most common failure mode in Power BI work isn’t slow DAX or ugly visuals — it’s reports that show everything the data could possibly say, leaving the reader to figure out what matters. Before placing a single visual, write down who reads this report, what decision they make from it, and how often. Every visual you add later has to earn its place against that brief.
Do
Open with a one-line brief: “This dashboard helps regional managers spot underperforming products before quarter-end so they can rebalance promotions.” Every visual ladders up to that.
Don’t
Build “an executive sales dashboard” without naming the executive or the decisions they make. Generic audiences produce generic dashboards that nobody quite uses.
Worked example
A retail client asked for a “store performance dashboard.” Pushing back on the brief surfaced three actual questions: which stores missed quarterly targets, why (footfall vs. conversion vs. basket), and which categories drove the gap. That single conversation cut the planned visual count from 22 to 9 — and the report became one people actually opened weekly.
02
Earn every pixel — the 5-second test
Show the dashboard to someone for five seconds, then take it away. What did they remember? That’s your real headline. If they can’t answer “is the business doing well right now?” you’ve buried the lead.
Reports are read in the airport lounge, between meetings, and on phone screens — not studied for ten minutes at a desk. The top-left third of the canvas should communicate the headline status before the reader processes anything else. Everything below the fold is supporting evidence. If your hero KPI isn’t visually dominant, the eye won’t find it; if it has no comparison (vs. target, vs. last period), the number means nothing.
Do
Place 3-5 hero KPI cards across the top, each showing the value, a comparison (▲ +12% vs. PY), and a sparkline. Use 32-44px font for the headline number — it should be readable from across a room.
Don’t
Lead with a busy filter pane, a logo banner, or a giant slicer row. The first thing the eye lands on should be the answer, not the controls.
Worked example
Run the test on yourself: take a screenshot of your dashboard, blur it 80% in any image tool, and look at the result. The strongest shapes and darkest areas are what readers see first. If those are your filter chrome and not your KPIs, swap their visual weight.
03
Layout follows the eye, not the spec
Western readers scan in F or Z patterns. Place the most important content where the eye lands first. Group related visuals so the connection is obvious without explanation. Whitespace is structure — not waste.
A good Power BI page reads like a well-written paragraph: the topic sentence first, supporting detail next, conclusions at the end. The “Z pattern” works for KPI-heavy dashboards (top-left → top-right → diagonal sweep → bottom-right). The “F pattern” works for table-heavy or narrative reports (left rail of headlines, deep content to the right). Pick one and stay consistent across pages — switching reading patterns mid-report makes users feel lost without knowing why.
Do
Use a clear grid (8-column or 12-column). Align edges. Leave 16-24px between visuals. Group related cards inside a subtle background panel so the relationship is visual, not just spatial.
Don’t
Pack visuals edge-to-edge to “use the space.” Crammed dashboards read as anxious; readers can’t tell what’s related to what. Empty space tells the eye where to rest.
Worked example
Three rows is usually enough. Row 1: hero KPI cards. Row 2: the chart that explains the most important “why” behind those KPIs. Row 3: a detailed table or matrix for users who want to inspect specific rows. If you need more than three rows, you probably need a second page.
04
Pick the right visual for the comparison
Every visual answers one question shape. Bar charts compare categories. Lines show change over time. Scatter plots reveal relationships. Pie charts show parts of a whole — and only when there are 4 or fewer slices. Mismatching shape to question is the most preventable design error in BI.
The default in Power BI is the clustered column chart, which encourages people to use one shape for everything. That’s a trap. Before choosing, name the comparison: are you comparing values across categories (bar), watching change over time (line), looking at distributions (histogram or box plot), correlating two measures (scatter), or showing rank (table or ordered bar)? Use the Visual Recommender tool in this Hub if you’re unsure — it walks you through the decision in four questions.
Do
Use horizontal bars for category names longer than 8 characters — vertical labels are unreadable. Use line charts when the X axis is time. Use a table when readers need exact numbers.
Don’t
Use 3D charts. Use donut charts with 8+ segments. Use dual-axis line charts unless both lines share a meaningful zero — readers will misread the relationship every time.
Worked example
“Show me sales by product category” almost always wants a horizontal bar chart sorted descending — not the vertical column Power BI gives you by default. The bar version reads naturally (biggest at top), accommodates long product names, and uses the canvas more efficiently for 8+ categories.
05
Color is information, not decoration
Color encodes meaning. If two visuals use the same color for unrelated things, you’ve taught the reader that color is random — and stripped your real signals of force. Reserve strong color for what matters and use neutrals everywhere else.
Three palette types cover almost every dashboard need. Sequential (light → dark of one hue) for ordered values like revenue or temperature. Diverging (red → neutral → green, around a meaningful midpoint like target or zero) for variance and growth. Qualitative (distinct hues with similar saturation) for unordered categories. The most common mistake is using a qualitative palette for ordered data, which destroys the ordering. The Color Palette Generator in this Hub builds all three from a single brand color and exports them as a Power BI theme.
Do
Use one accent color for “the thing the reader cares about” and grayscale for everything else. When the reader’s eye is drawn to color, make sure that color is on the most important data point.
Don’t
Rely on red/green alone. ~8% of men and ~0.5% of women have red-green color blindness. Always pair color with a label, icon (▲▼), or pattern. Test in grayscale.
Worked example
A revenue-by-region bar chart with all bars in the brand teal looks fine but says nothing. The same chart with one bar (the underperformer) in a contrasting color and the rest in a muted gray instantly answers “where’s the problem?” without a label. That’s color doing real work.
06
Interaction should feel inevitable
Filters, slicers, drillthrough, tooltips, and bookmarks all change what the user sees. Each one needs a clear purpose and a clear cue. If users don’t know they can interact — or they fear breaking the report — the interactivity may as well not exist.
There’s a hierarchy of interaction. Tooltips answer “what is this?” without changing state. Cross-filtering shows how this slice relates to others. Slicers change the question being asked. Drillthrough moves to a different page with focused detail. Bookmarks capture a saved view. Pick the lightest interaction that does the job. A drillthrough page is overkill if a tooltip will do; a slicer is overkill if cross-filtering already shows the answer.
Do
Add visible cues — “→ Click for detail”, a navigation button labelled “Reset filters”, a footer line saying “Right-click any bar for drillthrough.” Discoverability is part of the design, not the user’s problem.
Don’t
Hide everything important behind interaction. If the headline only emerges after the user picks the right slicer, you’ve made them do your work — and most won’t bother.
Worked example
Default state matters. When the report opens, slicers should be set to the most useful view — typically “current quarter” or “last 30 days” — not blank. A blank-state report makes the user define the question; a sensible default lets them confirm or refine it. Add a clearly-labelled Reset button so they can return to default after exploring.
07
Performance is a design decision
A dashboard that takes 12 seconds to load is a dashboard that doesn’t get used. Performance starts at the model — star schema, marked date table, single-direction relationships, measures over calculated columns — but it ends with what you put on the canvas.
Three things slow Power BI reports more than any others: bidirectional relationships used by default, FILTER() wrapped around large tables inside CALCULATE, and matrices showing thousands of rows of detail. Each of these is fixable, but the easier fix is upstream — design pages that don’t ask the engine for unreasonable things. A page with 15 visuals each doing 50ms of work is a 750ms page even if every measure is optimal. Performance budgeting before you build is cheaper than performance tuning afterwards.
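The FILTER-inside-CALCULATE problem mentioned above has a mechanical fix worth showing. In this sketch (table, column, and measure names are illustrative), the predicate form filters a single column and lets the storage engine do the work, rather than iterating the whole table row by row:

```dax
-- Slower: FILTER() iterates the entire Sales table as a CALCULATE argument
Sales West =
CALCULATE( [Total Sales], FILTER( Sales, Sales[Region] = "West" ) )

-- Faster: a simple column predicate — equivalent result, better engine plan
Sales West =
CALCULATE( [Total Sales], Sales[Region] = "West" )
```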
Do
Aim for under 5 seconds first paint, under 2 seconds for slicer interactions. Cap visible rows in tables (Top N filter, 50 rows). Use aggregations or composite models for tables over 10M rows. Profile with Performance Analyzer.
Don’t
Ship without profiling. Don’t enable bidirectional cross-filtering as a default. Don’t put 5,000-row matrices on a summary page — that’s what drillthrough is for.
Worked example
View → Performance Analyzer → Start Recording → interact with every visual. The output shows query time per visual. Anything over 500ms deserves attention; anything over 2 seconds needs fixing before publish. Use the Copilot Optimization Guide in this Hub for a structured assessment of slow models.
08
Build for trust before you build for delight
A beautiful dashboard with a wrong number is worse than an ugly one with the right one. Show the data lineage. Show when it last refreshed. Show the definition behind each measure. Earn trust on day one and the dashboard becomes a tool people rely on.
Trust is built on small, visible signals. A “last refreshed at 06:14 UTC” caption tells the reader the data is fresh. A tooltip on every measure name explaining the formula prevents the “wait, what’s this?” question that erodes confidence. A row-level security indicator showing “you’re seeing data for: Region 7” prevents the “I see different numbers than my colleague” disaster. Audit those signals before launch — they cost almost nothing and pay back forever.
Do
Add a footer with: data source name, last refresh timestamp, report owner, and a link to “report a problem.” Add measure descriptions so tooltips explain calculations. Validate totals against a known baseline before publishing.
Don’t
Publish without testing edge cases — empty filter states, single-row selections, future dates. Don’t hide the data refresh status. Don’t ship measures named “Measure 1” or “Calc 12”.
Worked example
Before any report goes live, run a validation pass: pick five rows from the source table, calculate the KPIs by hand or in SQL, and confirm the report shows the same numbers. Document the test in a hidden page or readme tab. When someone challenges a number six months later — and they will — you have evidence the report was correct on launch and can investigate what changed.
📋
All eight principles in one example
Below is a fictional Power BI sales dashboard built to demonstrate every principle in this guide. The numbered markers (① through ⑧) point to specific design choices. Hover any marker to see which principle it illustrates.
① Clear brief — subtitle names the audience and refresh cadence so readers know who and when
② 5-second readable — hero numbers are 38px, comparison and YoY badge sit right beneath
③ Three-row layout — KPI cards → analytical charts → detail table; reads top to bottom
④ Right visual — horizontal bars sorted descending for region comparison; line chart for time
⑤ Color carries meaning — only the underperforming bar is red, others muted; eye finds the problem
⑥ Inevitable interaction — drillthrough hint and reset button are visibly labelled, not hidden
⑦ Performance budget — table is Top 5, not the full product list; details live behind drillthrough
⑧ Trust signals — source, owner, validation date and “report a problem” link all in the footer
→
Use this guide alongside the rest of the Design Hub
These principles are the why. The other tools in this Hub are the how. Pair this guide with the Accessibility Checker for a self-score, the Color Palette Generator for production-ready themes, the Visual Recommender for choosing visuals, and the DAX Library for the calculations that power your KPIs.
SVG DAX Library
SVG Visuals & Custom KPI Cards — Complete Guide
Production-ready DAX measures that render inline SVG graphics inside Power BI tables and matrices. Five complete recipes, executive dashboard patterns, enterprise design patterns, and FAQ — no AppSource required.
🚫
No AppSource needed
Works in HIPAA and FedRAMP environments that block custom visuals entirely.
📧
Email & PDF safe
Renders correctly in subscriptions, PDF exports, and embedded reports.
⚡
Fully dynamic DAX
Colors, sizes, labels respond to filter context, slicers, and cross-filtering.
🏢
All deployment models
Pro, PPU, Premium, and Fabric. Zero licensing constraints.
⚙️
Quick setup
Add to Table/Matrix → Data category: Image URL → Image height 20–30px.
The Basic SVG Pattern
Every SVG measure follows this exact structure
SVG Basic Example =
VAR _Value = [Your Measure]
VAR _Color =
SWITCH(
TRUE(),
_Value >= 90, "%2322C55E", -- Green (# is URL-encoded as %23 inside the data URI)
_Value >= 70, "%23EAB308", -- Yellow
"%23EF4444" -- Red
)
VAR _SVG =
"data:image/svg+xml;utf8," &
"<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'>" &
"<circle cx='10' cy='10' r='9' fill='" & _Color & "'/>" &
"</svg>"
RETURN
_SVG
SVG Encoding Rules
Required — do not skip these
1
Use single quotes for all SVG attributes — double quotes conflict with the DAX string delimiter.
2
Replace
# with %23 in hex colors — a raw # is read as a URL fragment marker and breaks the data URI. Example: #22C55E → %2322C55E.
3
Always include
xmlns='http://www.w3.org/2000/svg' on the root element.
4
Always set
viewBox explicitly — it controls the coordinate system and aspect ratio.
5
After adding to visual: Column tools → Data category → Image URL. Format pane: Cell elements → Image height 20–30px.
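If you prefer to keep readable #RRGGBB strings in your logic, one approach (a sketch — the measure and score names are illustrative) is to encode at the last moment with SUBSTITUTE:

```dax
Status Dot =
VAR _Color = IF( [Score] >= 70, "#22C55E", "#EF4444" )  -- plain hex for readability
VAR _Encoded = SUBSTITUTE( _Color, "#", "%23" )          -- encode # for the data URI
RETURN
    "data:image/svg+xml;utf8," &
    "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'>" &
    "<circle cx='10' cy='10' r='9' fill='" & _Encoded & "'/>" &
    "</svg>"
```

This keeps color logic legible while still satisfying encoding rule 2.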
5 Production-Ready Recipes (Full DAX)
Bonus: Trend Arrow SVG
Use as sub-value in the new card visual or inline in a matrix — shows direction + % change
Trend Arrow =
VAR _Current = [Gross Margin %]
VAR _Prior = [Prior Period Margin %]
VAR _Direction = IF( _Current >= _Prior, "up", "down" )
VAR _Color = IF( _Direction = "up", "%2322C55E", "%23EF4444" ) -- # encoded as %23 for the data URI
VAR _ArrowUp = "M10,2 L18,14 L13,14 L13,22 L7,22 L7,14 L2,14 Z"
VAR _ArrowDown = "M10,22 L18,10 L13,10 L13,2 L7,2 L7,10 L2,10 Z"
VAR _Path = IF( _Direction = "up", _ArrowUp, _ArrowDown )
VAR _Change = FORMAT( ABS( _Current - _Prior ), "0.0%" )
VAR _SVG =
"data:image/svg+xml;utf8," &
"<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 80 24'>" &
"<path d='" & _Path & "' fill='" & _Color & "' transform='scale(0.8) translate(0,1)'/>" &
"<text x='24' y='17' font-family='Segoe UI,sans-serif' font-size='12' font-weight='600' fill='" & _Color & "'>" &
IF( _Direction = "up", "+" , "-" ) & _Change &
"</text>" &
"</svg>"
RETURN
_SVG
Executive KPI Dashboard Architecture
Production 3-row layout combining new card visual + SVG indicators
Row 1 — KPI Cards (New Card Visual × 4)
💰 Revenue
Callout: Total Revenue
Ref label: YoY % variance
Sparkline: 12-month trend
CF: green/red callout
📈 Margin
Callout: Gross Margin %
Ref label: vs. target
Sparkline: quarterly trend
Sub-value: Trend Arrow SVG
⭐ CSAT
Callout: CSAT score
Sub-value: Star Rating SVG
Ref label: sample size
“Based on 2,340 responses”
🎯 Pipeline
Callout: Pipeline $
Ref label: conversion %
Sub-value: Progress Bar SVG
showing coverage ratio
Row 2 — Performance Matrix with SVG Indicators
Columns: Business Unit (text) · Revenue (currency) · Sparkline SVG · Traffic Light SVG · Progress Bar SVG · YoY % (conditional font color)
Each row = a business unit or product line. SVG columns give quick visual status before users drill through for detail.
Row 3 — Trend Charts + Detail Tables
Full-width line chart (standard Power BI visual, not SVG) alongside a detail table with drill-through navigation. SVG indicators in the detail table provide quick status before drilling. Best balance of interactivity and information density.
Conditional Formatting — Beyond the Basics
Advanced techniques that complement SVG measures
Background Color by Rules
Right-click measure → Conditional formatting → Background color → Rules. Example: 0–50 = #FEE2E2, 50–80 = #FEF3C7, 80–100 = #DCFCE7. Works on columns adjacent to SVG columns for layered visual encoding.
Icon Sets in Tables
Right-click column → Conditional formatting → Icons. Configure rules for arrows, flags, or circles. Simpler than SVG — better performance on high-row-count tables where SVG DAX computation adds overhead.
Native Data Bars
Conditional formatting → Data bars. Colored bar behind the number, like Excel. Zero DAX overhead. Limited styling — no labels, rounded corners, or multi-segment. Use for quick implementations; SVG progress bars for polished reports.
Web URL Navigation
Set column data category to “Web URL” — values become clickable links. Combine with a DAX measure that constructs URLs dynamically based on filter context, linking rows to SharePoint, web apps, or drill-through reports.
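A dynamic URL measure for this pattern can be as small as the sketch below — the SharePoint base URL, table, and column names are placeholders, not real endpoints:

```dax
-- Sketch: builds a per-row drill link; set this measure's data category to Web URL
Detail Link =
VAR _Unit = SELECTEDVALUE( BusinessUnits[UnitCode] )  -- hypothetical dimension column
RETURN
    IF(
        NOT ISBLANK( _Unit ),
        "https://contoso.sharepoint.com/sites/reports?unit=" & _Unit
    )
```

SELECTEDVALUE returns BLANK when more than one unit is in context, so links only appear on unambiguous rows.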
Font Color by Field Value — DAX Color Measure
YoY Color =
VAR _Growth = [YoY Growth %]
RETURN
SWITCH(
TRUE(),
_Growth >= 0.10, "#15803D", -- Dark green: 10%+ growth
_Growth >= 0, "#22C55E", -- Green: positive growth
_Growth >= -0.10, "#EF4444", -- Red: negative growth
"#991B1B" -- Dark red: 10%+ decline
)
-- Usage: Conditional formatting > Font color > Field value > select this measure
Performance Considerations
SVG measures run in the formula engine — understand the impact before deploying to large tables
| Row Count | Simple SVG (Traffic Light) | Complex SVG (Sparkline) | Recommendation |
|---|---|---|---|
| < 100 rows | Instant | Fast (<1s) | No concerns |
| 100–500 rows | Fast | Moderate (1–3s) | Acceptable for most scenarios |
| 500–2,000 rows | Moderate | Slow (3–8s) | Optimise or paginate |
| 2,000+ rows | Noticeable | Unacceptable | Aggregate data, limit visible rows |
1. SWITCH not IF chains
SWITCH(TRUE(),…) is more readable and slightly more efficient than nested IF statements.
2. Pre-compute with VAR
VAR values are computed once and reused. Without VARs the same sub-expression may recalculate repeatedly.
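The two tips above can be combined in one pattern — a sketch using hypothetical `[Sales YTD]` and `[Sales Target]` measures:

Sales Status =
-- Without the VAR, [Sales YTD] would be re-evaluated in each SWITCH branch
VAR _YTD = [Sales YTD]    -- computed once, reused below
RETURN
    SWITCH(
        TRUE(),
        _YTD >= [Sales Target], "On Track",
        _YTD >= [Sales Target] * 0.8, "At Risk",
        "Behind"
    )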
3. Minimise SVG elements
Each SVG element adds to string length and render time. Avoid decorative borders or shadows unless essential.
4. Aggregate first
Show product categories (50 rows) not SKUs (5,000 rows). Use drill-through for detail access.
5. Top N filter
Cap visible rows at 20–50 with a Top N filter. Add a “Show All” bookmark for users who need the full list.
Enterprise KPI Design Patterns
Standardisation, reusability, and governance patterns for large deployments
1. Standardise KPI Colors with Theme JSON
Define organisation-wide KPI colors in a Power BI theme file. Reference the same hex codes in your SVG DAX measures. Distribute through your organisation’s visual theme gallery so every report uses consistent green, yellow, and red.
{
"name": "Enterprise KPI Theme",
"dataColors": [
"#2563EB", "#7C3AED", "#059669", "#D97706",
"#DC2626", "#0891B2", "#4F46E5", "#15803D"
],
"tableAccent": "#2563EB",
"good": "#22C55E",
"neutral": "#EAB308",
"bad": "#EF4444",
"maximum": "#15803D",
"center": "#EAB308",
"minimum": "#991B1B"
}
2. Reusable Measure Naming Convention
Build a library of SVG measure templates in a dedicated “Measures” table. Follow a consistent naming convention so every developer on the team can find and reuse existing SVG measures.
-- Naming convention: _SVG.[Type].[Context]
-- Examples:
_SVG.TrafficLight.SalesPerformance
_SVG.Sparkline.RevenueByMonth
_SVG.ProgressBar.TargetCompletion
_SVG.BulletChart.BudgetVsActual
_SVG.StarRating.CustomerSatisfaction
3. Config-Driven Traffic Light — Thresholds from a Table
Store thresholds in a configuration table so business users can adjust them without modifying DAX. The table holds KPI names, green thresholds, and yellow thresholds. The measure looks them up dynamically.
Configuration table: KPI_Thresholds
| KPI_Name | Green_Min | Yellow_Min |
|---|---|---|
| Sales Performance | 90 | 70 |
| Budget Utilization | 85 | 60 |
| SLA Compliance | 95 | 80 |
Traffic Light (Config-Driven) =
VAR _Score = [Score %]
VAR _KPIName = SELECTEDVALUE( KPI_Thresholds[KPI_Name] )
VAR _GreenMin =
LOOKUPVALUE(
KPI_Thresholds[Green_Min],
KPI_Thresholds[KPI_Name], _KPIName
)
VAR _YellowMin =
LOOKUPVALUE(
KPI_Thresholds[Yellow_Min],
KPI_Thresholds[KPI_Name], _KPIName
)
VAR _Color =
SWITCH(
TRUE(),
_Score >= _GreenMin, "#22C55E",
_Score >= _YellowMin, "#EAB308",
"#EF4444"
)
VAR _SVG =
"data:image/svg+xml;utf8," &
"<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'>" &
"<circle cx='10' cy='10' r='9' fill='" & _Color & "'/>" &
"</svg>"
RETURN
_SVG
-- Usage: set this measure's data category to "Image URL" so tables render the SVG
4. Dynamic Thresholds with What-If Parameters
Create a What-If parameter for “Green Threshold” (range 50–100, increment 5) and another for “Yellow Threshold”. Reference these parameter values in your SVG measures. Users drag a slicer to see how different thresholds reclassify their KPIs in real time — a powerful tool for threshold calibration workshops with business stakeholders.
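A sketch of a measure wired to those parameters. It assumes the tables and columns the What-If dialog creates by default (`'Green Threshold'[Green Threshold]`, `'Yellow Threshold'[Yellow Threshold]`) and a `[Score %]` measure on the same 0–100 scale:

Traffic Light (What-If) =
VAR _Score = [Score %]
-- Defaults of 90/70 apply when no slicer selection is made
VAR _Green = SELECTEDVALUE( 'Green Threshold'[Green Threshold], 90 )
VAR _Yellow = SELECTEDVALUE( 'Yellow Threshold'[Yellow Threshold], 70 )
VAR _Color =
    SWITCH(
        TRUE(),
        _Score >= _Green, "#22C55E",
        _Score >= _Yellow, "#EAB308",
        "#EF4444"
    )
RETURN
    "data:image/svg+xml;utf8," &
    "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 20 20'>" &
    "<circle cx='10' cy='10' r='9' fill='" & _Color & "'/>" &
    "</svg>"

Dragging either parameter slicer now recolors every traffic light instantly, which is the live-calibration behavior described above.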
New Card Visual (2024+) — Complete Feature Reference
Replaces legacy card and multi-row card. Use for standalone KPIs; SVG is for inline indicators inside tables.
| Feature | Description | Legacy Card |
|---|---|---|
| Multiple Callout Values | Display several KPIs on one card with independent formatting | Single only |
| Reference Labels | Show comparison text like “vs. Last Quarter” with variance values | Not available |
| Built-in Sparklines | Add trend line directly to the card — no DAX SVG needed | Not available |
| Target/Actual Comparison | Visual comparison against target with automatic variance calculation | Manual setup |
| Conditional Formatting | Apply rules to callout, reference labels, background, and borders independently | Limited |
| Sub-values | Add secondary metrics below the main callout for context | Not available |
| Layout Control | Horizontal or vertical stacking, alignment, spacing, and padding | Fixed layout |
1
Insert Card (new)
Select “Card (new)” from Visualizations. Use the dropdown arrow to switch from legacy if needed.
2
Add Callout Value
Drag your KPI measure into “Callout value”. Format units, decimals and font in Format pane.
3
Reference Labels
Format pane → Reference labels → On. Set type to “Comparison”, select prior period measure.
4
Add Sparkline
Callout value → Sparkline → On. Select a date field for X-axis. Choose line or bar style.
5
Conditional Formatting
Right-click callout → Conditional formatting. Apply to font, background, reference label independently.
Frequently Asked Questions
Visual Selector 03
Visual Decision Tree
Follow the flowchart to find the perfect visual for your specific use case.