Build Things That Actually Work.
After 18 years and 30+ enterprise engagements, the patterns are clear. The same quality failures appear in nearly every organisation. These resources help you avoid them — before they cost you users, trust, or your weekend.
- Microsoft Certified Trainer (MCT)
- PL-300, DP-600/700 certified
- 18+ years in BI
- 200+ professionals trained
- 30+ enterprises advised
What a Production-Ready Solution Looks Like
Most quality conversations in Power BI start and end with data quality. That’s too narrow. A production-ready analytics solution has to be solid end-to-end — from the first pipeline run to the report in a board pack.
Speed without quality isn’t speed — it’s technical debt with a deadline. I’ve watched organisations rush reports into production, skip testing, and spend three months unpicking the consequences. The six dimensions below define what “done” actually means.
- Refresh schedules that complete without manual intervention
- DAX measures validated against a known baseline
- Alerts when refresh fails — not silence until someone notices
- No calculated columns where measures would do the job
- Import vs Direct Lake decided deliberately, not by default
- Spark pipelines that don’t thrash shared capacity
- Dev → Test → Prod deployment pipelines
- Automated regression tests before every promotion
- Refresh history monitored by notebook, not inbox
- Measures in display folders with clear descriptions
- Format strings applied consistently across every figure
- Hidden fields hidden; visible fields purposeful
- No internal IDs exposed to report users
- Tooltips and descriptions for non-obvious measures
- Reports that answer a specific question, not every possible one
- Power Query error handling for bad source data
- Limited cross-dependencies between measures
- Regression tests before every production deployment
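"DAX measures validated against a known baseline" is the dimension teams most often skip, and it's also the easiest to automate. A minimal sketch of the idea: compare measure totals fetched from the model against known-good values within a tolerance. The `actuals` dict would normally come from a DAX query (for instance via a Fabric notebook or the Power BI REST API); here it is supplied inline for illustration, and all names and numbers are hypothetical.

```python
# Compare measure results against a known-good baseline. Any measure drifting
# more than `tolerance` (relative) from its expected value is flagged.
# The actual/baseline values below are illustrative, not real figures.

def validate_against_baseline(actuals, baseline, tolerance=0.005):
    """Return (measure, actual, expected) tuples that breach the tolerance."""
    failures = []
    for measure, expected in baseline.items():
        actual = actuals.get(measure)
        if actual is None:
            # Missing measure counts as a failure, not a silent pass.
            failures.append((measure, None, expected))
            continue
        drift = abs(actual - expected) / abs(expected) if expected else abs(actual)
        if drift > tolerance:
            failures.append((measure, actual, expected))
    return failures

baseline = {"Total Sales": 1_204_500.0, "Order Count": 8312}
actuals = {"Total Sales": 1_204_498.7, "Order Count": 8250}

print(validate_against_baseline(actuals, baseline))
# → [('Order Count', 8250, 8312)]  — Total Sales is within tolerance
```

The point is not the ten lines of Python; it's that "validated" becomes a repeatable check rather than a one-off eyeball test.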
Before You Ship, Check the List
The exact checks I run — or recommend teams run — before promoting any Power BI or Fabric artefact to production. Minimum bar, not a ceiling.
- Star schema confirmed — no flat table anti-patterns
- All relationships set to correct cardinality and direction
- Calculated columns replaced with measures where possible
- DAX measures tested against a known baseline
- Refresh schedule configured and tested end-to-end
- Row-level security tested with representative accounts
- Sensitivity labels applied and documented
- Model description updated with owner and date
- All visuals load without errors
- Layout tested at 1366×768 and 1920×1080
- Slicer and cross-filter interactions behave as intended
- Bookmarks and navigation tested in reading view
- Alt text added to all non-decorative visuals
- Page titles use field values, not hardcoded text
- Published to correct workspace with correct permissions
- Source connections use service accounts, not personal credentials
- Refresh schedule configured — not left on manual
- Incremental refresh set where data volume warrants it
- Column types explicitly set — no auto-detection reliance
- Null and error handling applied to critical columns
- Output columns use business-friendly naming
- Workspace has a documented Owner and Contributor
- Access roles assigned to groups, not individuals
- Workspace licence mode confirmed
- Git integration configured for Fabric workspaces
- Deployment pipeline connected to Dev/Test/Prod
- Workspace contact details updated in settings
- Dev → Test → Prod stages assigned to correct workspaces
- Deployment rules configured for connection strings
- Automated tests run before each Test → Prod promotion
- Only designated approvers can deploy to Prod
- Post-deployment validation documented
- Rollback procedure documented for failed deployments
- App audience aligned with intended users — not “Everyone”
- Navigation tested by a non-developer user
- App description and contact information updated
- Default filter state reviewed for relevance
- App update process documented and tested
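Two checklist items — a tested refresh schedule and "monitored by notebook, not inbox" — combine naturally into one small scheduled check. A sketch, assuming refresh history is available as a list of dicts shaped roughly like the Power BI REST API's refresh-history response (the `status` values and the alerting step are assumptions; fetching the history is stubbed out):

```python
# Scan recent refresh history for failures instead of waiting for someone to
# notice a stale report. `history` mirrors the rough shape of the Power BI
# REST API "Get Refresh History" response (status values assumed to include
# "Completed" and "Failed"); in a real notebook it would come from that API.

def failed_refreshes(history, last_n=5):
    """Return failure entries among the most recent `last_n` refreshes."""
    recent = sorted(history, key=lambda r: r["startTime"], reverse=True)[:last_n]
    return [r for r in recent if r["status"] == "Failed"]

history = [
    {"startTime": "2024-06-01T02:00:00Z", "status": "Completed"},
    {"startTime": "2024-06-02T02:00:00Z", "status": "Failed"},
    {"startTime": "2024-06-03T02:00:00Z", "status": "Completed"},
]

failures = failed_refreshes(history)
if failures:
    # In production this branch would raise an alert (Teams webhook, email,
    # ticket) rather than print.
    print(f"{len(failures)} failed refresh(es), latest at {failures[0]['startTime']}")
```

Run it on a schedule and the "silence until someone notices" failure mode disappears.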
Quality Doesn’t Live in One Place
The most common mistake is treating quality as a final-step review. It needs to be built in at every stage — a flawed foundation always surfaces at the worst possible moment.
Why DataOps Changes the Equation
DataOps is a set of habits that makes your analytics pipeline reliable enough that you can sleep through the night.
It borrows from DevOps and Agile — separate environments, automated testing, version control, monitored deployments — and applies them to the full analytics stack. Not just ETL, not just code. Reports, models, pipelines. All of it.
The organisations that struggle most with quality rely on individual heroes rather than systematic processes. DataOps replaces heroics with structure. Start with one principle today — separate Dev and Prod workspaces. That alone changes everything.
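Once Dev and Prod are separated, the first automated test almost writes itself: re-run a handful of key measures in the stage you're promoting from and diff them against what Prod currently reports. A minimal sketch — the stage fetchers are stubs standing in for real DAX queries per workspace, and the measure names and threshold are illustrative:

```python
# Promotion gate: block a Test → Prod deployment when key measures drift
# beyond a threshold. The fetcher is a stub; in practice each stage would be
# queried with DAX against its own workspace.

CHECKS = ["Total Sales", "Order Count", "Distinct Customers"]

def fetch_stage_results(stage):
    # Stub with illustrative numbers; replace with real per-stage queries.
    sample = {
        "Test": {"Total Sales": 1_210_000.0, "Order Count": 8400, "Distinct Customers": 912},
        "Prod": {"Total Sales": 1_204_500.0, "Order Count": 8312, "Distinct Customers": 910},
    }
    return sample[stage]

def promotion_gate(checks, max_relative_drift=0.02):
    """Return ('approved', []) or ('blocked', [offending measures])."""
    test, prod = fetch_stage_results("Test"), fetch_stage_results("Prod")
    blocked = []
    for measure in checks:
        t, p = test[measure], prod[measure]
        drift = abs(t - p) / abs(p) if p else abs(t)
        if drift > max_relative_drift:
            blocked.append(measure)
    return ("blocked", blocked) if blocked else ("approved", [])

status, offenders = promotion_gate(CHECKS)
print(status, offenders)
```

Wire something like this into the Test → Prod step of a deployment pipeline and "automated tests before every promotion" stops being aspirational.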