AI Won’t Replace Your BI Reports.
But Here’s What Changes If You Ignore It.
8 practical AI accelerators for Power BI & Fabric developers — with real workflows, sector playbooks for Healthcare, Retail, BFSI, NBFC and Logistics, and a guardrails framework to use AI without putting client data at risk.
Sixteen years in BI. Power BI, Fabric, Qlik, Tableau, Cognos, Looker, Spotfire, Google Data Studio — I’ve built on all of them, across every kind of client. And every few years, something new arrives that’s going to “change everything.” Self-service BI was supposed to make developers redundant. Cloud was meant to kill the data warehouse. Now it’s AI coming for dashboards and reports.
It won’t. Not the way people think.
That said — the developers burying their heads and ignoring AI are going to fall behind. And the ones treating it like a LinkedIn demo toy aren’t far behind them. The ones actually moving ahead? They’ve figured out what AI is genuinely good at in a BI context, and they’re using it hard. Stuff that used to take a full day now takes a couple of hours. That’s the gap that’s opening up.
Almost every quarter, someone in a leadership meeting drops some version of: “Why are we waiting for a report when we can just ask the AI?” Fair enough — the demos are slick. Type a question, get a number, looks polished. Feels like the future.
Real Scenario — The Accruals Miss
What I’ve Seen
Most articles on this topic skip the actual mechanics. So let’s go there.
A Power BI report connected to a certified semantic model is deterministic. Net Revenue is defined — gross sales minus returns minus rebates, scoped to the fiscal calendar finance agreed on. RLS is enforced. A regional director sees their region, full stop. The refresh runs on a known schedule or Direct Lake from a governed source. And every number traces back — visual to measure to query to fact table row. That auditability is the whole point.
Ask an AI the same question and you get something different in nature. It’s generating a response — interpreting your wording, picking tables, applying whatever definition it’s been given or can infer from column names. Phrase the question slightly differently and you can get a different answer. That’s not a flaw in the model — that’s how language models work. Context and wording shape the output. Which is fine for exploration. It’s a problem when someone assumes the answer is definitive.
“Board packs, month-end closes, regulatory submissions — none of that is moving to a chat interface. Those processes need the same answer every time, with a clear trail back to source. That’s what a certified semantic model does. An AI layer doesn’t.”
✓ Use Your Certified Report When
- Finance is closing the books
- KPIs are going to the board
- You need an auditable number trail
- The question is the same every time
- Multiple stakeholders need identical numbers
- Regulatory or compliance reporting
- Numbers that go into external filings
⚡ Use AI When
- You’re exploring — don’t know what to ask
- New analyst needs to find the right report
- Summarising large text-heavy datasets
- Speed matters more than auditability
- Forming a hypothesis before building a report
- Ad-hoc investigation, not a recurring metric
- Generating documentation or requirements drafts
The Four Pillars of Data Trust
When an analyst walks into a meeting with a number, there’s a whole invisible structure behind it. People know who built that report, where the data comes from, who signed off the logic. It has a history. When a chatbot returns a number, none of that exists — and most business users have no way to check. Building AI on top of a governed data foundation isn’t optional. Without it you’re just making the wrong answers sound more confident.
Not theoretical. Every one of these is something I’ve actually built or used on a real engagement. The principle behind all of them is the same — AI takes on the repetitive, structurally complex parts, and I apply the judgement and context a model simply doesn’t have.
DAX Measure Generator
Semantic Model Documenter
Report Requirements Validator
DAX Review & Optimiser
M Code Generator
Fabric Notebook Scaffolder
KPI Narrative Generator
Migration Assessment Assistant
Accelerator Visual
The Problem It Solves
Most enterprise models end up with 80 to 200 measures. Writing them one by one is slow, and the inconsistency that creeps in between developers is a real problem — same calculation, three different patterns. With AI generating the first draft from a standard prompt, authoring time drops sharply and the whole team writes from the same skeleton. Junior developers stop reinventing patterns senior developers sorted out years ago.
Workflow
Prompt Template (Zone 1 — Safe for Any Tool)
-- Describe logic + structure only. No actual table or column names.
"I need a DAX measure for Power BI with the following requirements:
PATTERN: Rolling 12-month average [alternatives: YTD vs PY / Top N with Others / Semi-additive balance]
FACT TABLE STRUCTURE (types only, not names):
- One date key column (integer YYYYMMDD format)
- One numeric measure column (currency/quantity/count)
- Date table is marked as a date table
CALENDAR: [Fiscal year Apr-Mar / Calendar year / 445 retail]
EDGE CASES:
- Months with no data should return [zero / BLANK / prior value]
- Must work at day / month / year grain without modification
- Handle partial current period correctly
CONTEXT:
- This is for a [Finance / Operations / HR] executive dashboard
- Audience: executive level — no DAX visible to end user
Return the measure with inline comments on each step."
Example Output — Rolling 13-Month Average
-- Developer renames 'FactTable', 'DateTable', [Amount] to actual model fields
Rolling 13M Avg =
VAR LastDate = MAX( 'DateTable'[Date] )
VAR StartDate = EDATE( LastDate, -12 )  -- scalar start date: 13 months inclusive of current
VAR PeriodData =
    CALCULATETABLE(
        ADDCOLUMNS(
            -- summarise over the date table so months with no fact rows still get a row
            SUMMARIZE( 'DateTable', 'DateTable'[MonthKey] ),
            "MonthAmt", CALCULATE( SUM( 'FactTable'[Amount] ) )
        ),
        DATESBETWEEN( 'DateTable'[Date], StartDate, LastDate )
    )
VAR FilledData =
    ADDCOLUMNS( PeriodData, "SafeAmt", IF( ISBLANK( [MonthAmt] ), 0, [MonthAmt] ) )
RETURN
    AVERAGEX( FilledData, [SafeAmt] )
Always rename placeholders before publishing. Validate in DAX Studio against real data. Test at all grain levels — day, month, quarter, year — before the measure goes into a certified model.
Sector Applications
- Healthcare: Rolling average patient admissions by ward; 30-day readmission rate with surrogate keys only
- BFSI: NIM over rolling quarters; semi-additive portfolio balance at period end
- Retail: Same-store sales growth; basket size moving average by store format
- NBFC: GNPA/NNPA ratio with IFRS 9 stage logic; collection efficiency per DPD bucket
- Logistics: On-time delivery rate rolling 90 days; cost per shipment trend by lane
Accelerator Visual
The Problem It Solves
The most common reason a certified semantic model gets bypassed is simple — nobody documented what the measures actually mean. Developer builds 120 measures, knows every one of them, writes nothing down. Six months later a new analyst can’t tell Net Revenue from Gross Revenue Adjusted and picks the wrong one. The solution: generate that documentation directly from the measure code. TMDL is pure logic — no business data, no row values — so it’s safe to run through AI and the output is a working data dictionary in minutes, not days.
Architecture
Prompt Template
"Below are DAX measure definitions from a Power BI semantic model.
For each measure, write a 2–3 sentence plain-English description that:
1. Explains what the measure calculates
2. States which filters or time periods apply
3. Clarifies when to use this vs similar measures
Format as JSON only — no preamble, no markdown:
[{"measure_name":"","description":"","use_when":""}]
MEASURES:
[paste TMDL measure block here — DAX expressions only]"
TMDL contains your data model structure — table names, measure logic, relationships. This is proprietary IP. Use your organisation’s Azure OpenAI private endpoint or M365 Copilot on your corporate tenant. Never paste TMDL content into a public AI tool.
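Once the model returns that JSON, turning it into a publishable data dictionary takes only a few lines. A minimal Python sketch (the function name and markdown layout are my own, not part of any Microsoft tooling):

```python
import json

def render_data_dictionary(ai_json: str) -> str:
    """Turn the documenter's JSON output into a markdown data dictionary.

    Expects the exact shape the prompt requests:
    [{"measure_name": "", "description": "", "use_when": ""}]
    """
    rows = json.loads(ai_json)
    lines = [
        "| Measure | Description | Use When |",
        "|---|---|---|",
    ]
    for r in rows:
        lines.append(
            f"| {r['measure_name']} | {r['description']} | {r['use_when']} |"
        )
    return "\n".join(lines)

# Illustrative sample standing in for the AI response
sample = json.dumps([
    {"measure_name": "Net Revenue",
     "description": "Gross sales minus returns and rebates on the fiscal calendar.",
     "use_when": "Board and finance reporting."},
])
print(render_data_dictionary(sample))
```

Drop the rendered table into a wiki page or a notebook cell and the data dictionary stays one regeneration away from the model.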
Accelerator Visual
I’ve sat in enough post-delivery reviews where a client says “we expected X to be in here” and there’s nothing in the requirements doc about X. It’s painful and avoidable. ACC-03 runs against your requirements document — no data, just business descriptions — before a single visual gets built. It’s like having someone who’s reviewed a hundred dashboards in your sector read your brief and flag what’s missing. You still make the calls, but you go into build with fewer blind spots.
Prompt Template (Zone 1 — Any Tool)
"I am building a [Finance/Risk/Operations/HR] dashboard for a [job title].
Below is my requirements document. Review it and:
1. Identify KPIs commonly expected for this role that are missing
2. Flag measure definitions that are ambiguous or will cause disagreement
3. Suggest dimensions to add for filtering/slicing
4. Identify date/period requirements I haven't specified
5. Flag data availability risks based on what I've described
Do not suggest anything unrealistic for this business function.
Requirements Document:
[paste requirements — business logic descriptions only, no actual data]"
What AI Typically Catches by Sector
- Healthcare CFO: Missing “Days Cash on Hand”; “Occupancy Rate” definition ambiguous (licensed vs staffed beds); payer mix dimension missing
- BFSI Risk: VaR confidence level not specified; LCR and NSFR missing from liquidity section; reporting date vs value date not clarified
- Retail Trading: “Shrinkage” not defined (known vs unknown loss); missing LFL comparative; store format dimension absent
- NBFC Collections: DPD bucket definitions missing; no roll-forward analysis specified; restructured loans not addressed
- Logistics Ops: SLA threshold definition missing; lane vs hub granularity unclear; carrier dimension for benchmarking not included
Accelerator Visual
ACC-04 — DAX Review Prompt
"Review the following DAX measure for:
1. Performance anti-patterns (row context iteration on large tables,
FILTER() where CALCULATE works, unnecessary SUMX on a column)
2. Logical correctness — does this handle BLANK correctly?
3. Time intelligence errors — does SAMEPERIODLASTYEAR need ALL()?
4. Suggest a refactored version with inline comments
DAX:
[paste measure expression — no actual data, just the DAX code]"
Accelerator Visual
ACC-05 — M Code Generator Prompt
"Write a Power Query M function for a data transformation.
SOURCE SCHEMA (column names + types only — no values):
- transaction_id: text
- transaction_date: text (mixed formats: dd/MM/yyyy and MM/dd/yyyy)
- amount: number (some nulls, some negatives)
- status_code: integer (1=complete, 2=pending, 3=failed)
TRANSFORMATIONS REQUIRED:
1. Detect date format and normalise to proper Date type
2. Replace null amounts with 0, flag negatives in new column
3. Map status_code to a description column via lookup
4. Return as a reusable function callable from multiple queries"
Sharing column names and data types describes the structure, not the content. A column called transaction_date of type text tells the AI nothing about any actual transaction. This is Zone 2 — safe with your organisation’s approved AI endpoint.
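M is awkward to unit-test outside Power Query, so before trusting the generated function I often mirror the rules in a few lines of Python against synthetic rows. A sketch (field names follow the prompt; the logic is a hand-written approximation, not AI output):

```python
from datetime import datetime

STATUS = {1: "complete", 2: "pending", 3: "failed"}

def normalise(row: dict) -> dict:
    # Step 1: try dd/MM/yyyy first, fall back to MM/dd/yyyy.
    # Genuinely ambiguous dates (e.g. 05/04/2024) resolve to dd/MM here -
    # that tie-break is a business decision, not a technical one.
    parsed = None
    for fmt in ("%d/%m/%Y", "%m/%d/%Y"):
        try:
            parsed = datetime.strptime(row["transaction_date"], fmt).date()
            break
        except ValueError:
            continue
    # Step 2: null amounts -> 0, flag negatives in a new column
    amount = row["amount"] if row["amount"] is not None else 0
    return {
        "transaction_id": row["transaction_id"],
        "transaction_date": parsed,
        "amount": amount,
        "is_negative": amount < 0,
        # Step 3: status_code -> description via lookup
        "status": STATUS.get(row["status_code"], "unknown"),
    }

print(normalise({"transaction_id": "T1", "transaction_date": "31/01/2024",
                 "amount": None, "status_code": 2}))
```

If the Python and the generated M disagree on the same synthetic rows, the M is the suspect.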
Accelerator Visual
Architecture: Bronze → Silver → Gold with AI Scaffolding Boundary
Prompt Template — Bronze to Silver Notebook
"Generate a Fabric PySpark notebook for a Bronze-to-Silver transformation.
SOURCE SCHEMA (column names + types — no values):
- loan_id: string
- transaction_date: string (format dd/MM/yyyy, some nulls)
- amount: decimal(18,2) (some negatives to flag)
- customer_ref: string (surrogate key — format CUST-XXXXX)
- status_code: integer (1=complete, 2=pending, 3=failed, 99=unknown)
TRANSFORMATIONS:
1. Cast transaction_date → proper date, nulls → COALESCE to 1900-01-01
2. Flag negative amounts in new column 'amount_flag'
3. Map status_code → description column via lookup
4. Deduplicate on loan_id, keep the latest row by transaction_date
5. Add load_timestamp watermark, process only new rows
OUTPUT: Delta MERGE (upsert) into Silver lakehouse
Include: logging, error handling, row count validation pre/post"
Accelerator Visual
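Step 4 (deduplicate, keep latest) is where generated notebooks most often go wrong, so it is worth pinning the rule down before reviewing the AI’s MERGE logic. A plain-Python sketch of “keep the latest row per key” on synthetic loan rows (illustrative only; the real version is the PySpark window function the notebook generates):

```python
from datetime import date

def keep_latest(rows):
    """One row per loan_id, keeping the row with the latest transaction_date."""
    latest = {}
    for r in rows:
        key = r["loan_id"]
        if key not in latest or r["transaction_date"] > latest[key]["transaction_date"]:
            latest[key] = r
    return list(latest.values())

rows = [
    {"loan_id": "L1", "transaction_date": date(2024, 1, 1),  "amount": 100},
    {"loan_id": "L1", "transaction_date": date(2024, 2, 1),  "amount": 110},
    {"loan_id": "L2", "transaction_date": date(2024, 1, 15), "amount": 200},
]
print(keep_latest(rows))  # L1 keeps the February row
```

Two minutes of synthetic-data testing here saves a bad upsert silently duplicating Silver rows later.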
ACC-07 — Automated KPI Narrative
Dashboards show the what. ACC-07 generates the so what. The weekly trading pack, the exec summary, the board narrative — someone senior used to spend 3 to 4 hours on those every Monday morning just writing the commentary. With this set up right, the Fabric notebook pulls the Gold layer snapshot after Friday’s refresh (aggregated metrics — no PII, no customer rows), formats it as JSON, and calls Azure OpenAI through the private endpoint. Draft commentary is ready before the weekend is over. The analyst reads it Monday morning, adjusts tone where needed, and publishes. They still own it. The blank page problem is gone.
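The orchestration is deliberately thin: the notebook’s only job before the API call is shaping aggregates into a prompt payload. A sketch of that shaping step (names are illustrative; the Azure OpenAI call itself is omitted and would go through the private endpoint):

```python
import json

def build_commentary_prompt(kpis):
    """Assemble the Zone-2 prompt: aggregated metrics only, no row-level data."""
    header = (
        "Write a weekly performance commentary. Data below is aggregated only -\n"
        "no individual or customer-level information is included.\n"
    )
    return header + "DATA:\n" + json.dumps(kpis, indent=2)

# Illustrative Gold-layer snapshot: business unit grain, nothing identifiable
snapshot = [
    {"business_unit": "North", "metric": "Revenue",
     "actual": 1.42e6, "target": 1.50e6, "prior_week": 1.38e6},
]
prompt = build_commentary_prompt(snapshot)
print(prompt[:60])
```

Keeping the payload builder as its own function also makes it easy to log exactly what left the tenant boundary, which is the question an auditor will ask.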
"Write a weekly performance commentary. Data below is aggregated only —
no individual or customer-level information is included.
For each business unit, write one paragraph (max 100 words) that:
1. States whether performance is above/below target and by how much
2. Names the top contributing factor from the data
3. Notes improvement or deterioration vs prior week
4. Flags any threshold breaches needing action
Tone: factual and direct. Numbers in every sentence. No hedging.
DATA:
[JSON of aggregated KPI snapshot from Gold layer]"
Accelerator Visual
ACC-08 — Migration Assessment Assistant
Migration discovery is expensive time. Going through a Tableau, SSRS or Qlik estate report by report takes weeks when done manually. ACC-08 takes the inventory export — report names, data sources, visual types, refresh schedules, usage stats — and classifies it: migration complexity, what to decommission (zero usage in 90 days, obvious duplicates), and what to prioritise for Phase 1. No actual data involved, only metadata. Gets you to a defensible shortlist in hours rather than weeks.
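Part of that triage is deterministic and does not need a model at all: zero-usage decommission candidates can be pre-flagged in code, so the AI pass handles only the judgement calls. A sketch with illustrative thresholds and field names:

```python
def triage(report: dict) -> str:
    """First-pass classification before the AI does the qualitative review."""
    if report["views_90d"] == 0:
        return "DECOMMISSION"          # zero usage in 90 days: retire it
    # Crude complexity proxy: many sources or many visuals suggests COMPLEX
    if report["source_count"] >= 3 or report["visual_count"] > 20:
        return "COMPLEX"
    return "STANDARD"

# Illustrative inventory rows (metadata only, matching the prompt's CSV shape)
inventory = [
    {"name": "Daily Sales", "views_90d": 450, "source_count": 1, "visual_count": 8},
    {"name": "Legacy Ops",  "views_90d": 0,   "source_count": 2, "visual_count": 12},
    {"name": "Risk Pack",   "views_90d": 60,  "source_count": 4, "visual_count": 30},
]
for r in inventory:
    print(r["name"], "->", triage(r))
```

The model then refines the borderline cases rather than churning through the whole estate.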
"Analyse this report inventory for a Tableau-to-Fabric migration.
Classify each report as COMPLEX / STANDARD / DECOMMISSION.
Identify top 10 highest-value reports by usage + data source diversity.
Flag common sources that should become shared semantic models in Fabric.
Estimate relative effort (S/M/L/XL) per report.
INVENTORY (metadata only — no actual data):
[CSV: report names, data sources, usage counts, visual types, owner]"
Same accelerators, different contexts. KPIs shift, regulatory constraints differ, and the PII risk profile between a logistics network and an NHS trust is completely different. Here’s how it plays out sector by sector.
Healthcare BI is where the stakes on data governance are at their highest. Clinical data — NHS numbers, dates of birth, postcodes, ICD-10 diagnosis codes, OPCS procedure codes, consultant GMC numbers — falls under GDPR special category rules and NHS DSPT obligations. Get it wrong and you’ve got a reportable incident, not just a policy issue. At the same time, the pressure to report on bed utilisation, theatre efficiency, readmission rates and financial sustainability is constant. The developer challenge here isn’t mainly technical — it’s navigating the gap between what the data can tell you and what you’re actually permitted to do with it.
Key Executive KPIs
ACC-01 Applied — Readmission Rate DAX
"Write a DAX measure for 30-day readmission rate.
FACT TABLE STRUCTURE (surrogate keys only — no real identifiers):
- episode_id: string (system-generated, not patient-identifiable)
- admit_date: date
- discharge_date: date (null if still admitted)
- patient_key: integer (surrogate — NOT NHS number, NOT DOB)
- specialty_key: integer
REQUIREMENTS:
- Flag any episode where same patient_key had a prior episode
discharged within 30 days before this admission
- Return: count of readmissions AND rate as % of total discharges
- Filter to completed episodes only (discharge_date not null)
- Must work at specialty level and trust-wide without modification"
Healthcare PII Data Flow
Saw this happen: a developer debugging a readmission DAX measure pasted 50 rows from the Power BI data view into a public AI tool to show it the data structure. Those rows had admission dates, discharge dates, specialty codes, and a patient ID that was partially masked but still traceable. Under NHS DSPT and GDPR Article 9 that’s a disclosure of special category health data to a third party with no DPA in place. Reportable incident. The right way: describe the column structure and the logic issue in words — no rows, ever.
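That is also why synthetic rows matter: the readmission logic itself is easy to prototype locally on invented episodes, with no real identifiers anywhere near a prompt. A sketch (field names follow the prompt above; all data is made up):

```python
from datetime import date

def flag_readmissions(episodes):
    """Flag episodes admitted within 30 days of the same patient's prior discharge."""
    completed = [e for e in episodes if e["discharge_date"] is not None]
    flagged = []
    for e in completed:
        prior = [
            p for p in completed
            if p["patient_key"] == e["patient_key"]
            and p["discharge_date"] < e["admit_date"]
            and (e["admit_date"] - p["discharge_date"]).days <= 30
        ]
        flagged.append({**e, "is_readmission": bool(prior)})
    return flagged

# Synthetic episodes: surrogate patient_key only, nothing identifiable
episodes = [
    {"episode_id": "E1", "patient_key": 7, "admit_date": date(2024, 1, 2),
     "discharge_date": date(2024, 1, 5)},
    {"episode_id": "E2", "patient_key": 7, "admit_date": date(2024, 1, 20),
     "discharge_date": date(2024, 1, 25)},  # 15 days after E1's discharge
]
result = flag_readmissions(episodes)
rate = sum(e["is_readmission"] for e in result) / len(result)
print(rate)  # 0.5: E2 is a readmission, E1 is not
```

If the DAX and this sketch disagree on the same invented episodes, debug the DAX — and debug it by describing structure, never by pasting rows.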
Retail moves fast. Assortment decisions, promotional responses, markdown calls — they’re happening daily and the business doesn’t want to wait for a reporting cycle. The pressure to answer faster is real. But the underlying data carries its own risks. Transaction-level sales data is one thing. Customer loyalty data — names, emails, purchase history, location data from in-store behaviour — that’s personal data under GDPR, full stop. It doesn’t matter how quick the AI answer would be. It doesn’t go near an external tool.
Key Executive KPIs
ACC-07 Applied — Weekly Trading Commentary Automation
A retail group produces a weekly trading pack for 12 regional directors. The narrative commentary used to take a senior analyst 3 hours every Monday morning. After Friday’s data refresh, a Fabric notebook pulls the Gold layer aggregates (store × category × week grain — no customer data), formats them as structured JSON, and calls Azure OpenAI through the private endpoint. Draft commentary lands in a OneLake table. The analyst reviews and publishes by 9am. Time saving: 2.5 hours per week, every week.
Customer name, email, loyalty card number, individual purchase history — personal data under GDPR. No ambiguity there. If the retail BI includes a customer 360 view, the row-level grain stays out of every AI tool without anonymisation first. Aggregated segment or cohort level is fine. Individual customer rows are Zone 3 and stay there.
Working in BFSI, the semantic model isn’t just a reporting tool — it’s often the source of record for regulatory submissions. Basel III/IV, IFRS 9, Solvency II, PRA returns, FCA reporting — these aren’t optional interpretations. They’re filed. A discrepancy between what the certified report shows and what an AI chatbot returns isn’t just confusing — it’s potentially a governance failure. That’s the environment. Everything in this space has to be defensible, auditable, and traceable.
Key Executive KPIs
ACC-01 Applied — Semi-Additive Balance Measure
"Write a DAX measure for a period-end balance (semi-additive).
FACT TABLE (structure only, no values):
- account_key: integer (surrogate)
- balance_date: date
- balance_amount: decimal(18,4)
- Date table is marked as a date table
REQUIREMENTS:
- Day grain: return balance for that day
- Month grain: last day of month — NOT a sum
- Quarter/Year grain: last day of period
- No activity in period: return most recent prior balance, not BLANK
- Must use LASTNONBLANK correctly across all time intelligence contexts
- Executive portfolio outstanding balance KPI"
Customer account balances, transaction histories, credit scores, AML flags — this is financially sensitive personal data. In the UK this falls under FCA data handling obligations in addition to GDPR. Account-level data shared with a public AI tool without a DPA is a reportable breach. Even anonymised transaction data carries documented re-identification risk.
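The semi-additive behaviour the prompt above specifies — period-end balance, carried forward when a period has no activity — is easy to prototype on synthetic balances before any DAX goes near real account data. A Python sketch, illustrative only:

```python
from datetime import date

def period_end_balance(balances, period_end):
    """Semi-additive: the latest balance on or before period_end.

    A period with no activity carries the most recent prior balance forward,
    mirroring the LASTNONBLANK behaviour the DAX prompt asks for.
    """
    on_or_before = [b for b in balances if b["balance_date"] <= period_end]
    if not on_or_before:
        return None  # no history at all yet
    return max(on_or_before, key=lambda b: b["balance_date"])["balance_amount"]

# Synthetic month-end balances, surrogate-keyed in the real model
balances = [
    {"balance_date": date(2024, 1, 31), "balance_amount": 1000.0},
    {"balance_date": date(2024, 2, 29), "balance_amount": 1200.0},
]
# February at month grain: the Feb 29 balance, NOT January + February summed
print(period_end_balance(balances, date(2024, 2, 29)))  # 1200.0
# March has no rows: the February balance carries forward
print(period_end_balance(balances, date(2024, 3, 31)))  # 1200.0
```

Validating the expected numbers on synthetic data first gives you a reference answer to test the generated DAX against in DAX Studio.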
NBFCs are in a tough spot — heavy regulatory scrutiny from RBI or equivalent bodies, but typically leaner BI teams than the large banks. Deadlines don’t move just because the team is small. AI accelerators have real ROI here because the developer-to-dashboard ratio is stretched. More to build, fewer people to build it. The DPD bucket pipeline and the collections dashboard tend to absorb the most time, and that’s where these prompts pay back quickest.
Key Executive KPIs
ACC-06 Applied — DPD Bucket Pipeline Scaffold
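Before generating the full notebook, it pays to pin the bucket rule down deterministically; the generated PySpark can then be reviewed against a known-good reference. A plain-Python sketch (bucket edges follow the prompt; everything else is illustrative):

```python
from datetime import date

def dpd_bucket(dpd_days: int) -> str:
    """Map Days Past Due to the standard collection buckets."""
    if dpd_days <= 0:
        return "Current"
    for upper, label in [(30, "1-30"), (60, "31-60"), (90, "61-90"), (180, "91-180")]:
        if dpd_days <= upper:
            return label
    return "180+"

def loan_dpd(report_date, last_payment_date, loan_start_date):
    """DPD as of report_date; loans with no payments ever age from loan start."""
    anchor = last_payment_date or loan_start_date
    return (report_date - anchor).days

dpd = loan_dpd(date(2024, 6, 30), date(2024, 3, 15), date(2023, 1, 1))
print(dpd, dpd_bucket(dpd), dpd > 90)  # 107 days -> "91-180", NPA flag True
```

The same few assertions become the unit tests dropped into the scaffolded notebook, so regressions in the pipeline logic surface immediately.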
"Generate a Fabric PySpark notebook for DPD (Days Past Due) buckets.
SOURCE SCHEMA (column names + types — no borrower data):
- loan_id: string
- loan_start_date: date
- emi_due_date: date
- emi_amount: decimal(18,2)
- payment_date: date (nullable — null = unpaid)
- payment_amount: decimal(18,2) (nullable)
- report_date: date (the as-of calculation date)
LOGIC:
1. outstanding_dpd = report_date − last_payment_date
(no payments ever: dpd = report_date − loan_start_date)
2. Bucket into: Current (0), 1-30, 31-60, 61-90, 91-180, 180+
3. Flag NPA if dpd > 90 days
4. outstanding_principal = SUM(emi_amount) − SUM(payment_amount)
5. Output one row per loan per report_date
Add watermark for incremental processing. Include logging."
Logistics data moves fast and at volume. PII risk is lower than healthcare or financial services, but commercial sensitivity is high — carrier rates, lane profitability, customer SLA performance. That’s all competitive intelligence and it needs treating accordingly. Packaging manufacturers have a slightly different focus: production yield, OEE, waste rates, order fill. Different KPIs, same governance principles.
Key Executive KPIs
ACC-07 Applied — Automated SLA Breach Narrative
A logistics client with 40+ major retail customers — each on bespoke SLA agreements — needed a weekly narrative of which customers were in breach. Previously 4 hours every Friday for a senior analyst. With ACC-07 against Gold layer SLA aggregates (customer account codes, metrics, thresholds — no individual recipient data), the narrative is drafted automatically. The analyst reviews and approves by end of day.
The primary PII risk in logistics is recipient address and contact data for B2C residential deliveries. This must never enter any AI tool. Metric aggregations at customer account or segment level — OTIF rate, cost per shipment, SLA breach rate — contain no PII and are safe through your private endpoint.
Most AI and BI content glosses over this part. It shouldn’t. Both scenarios below have happened in real organisations. The damage in each case was real and in some cases lasting.
The Three-Zone Framework
Every AI interaction needs to be categorised. Not complicated — just consistent. Three zones, and knowing which one you’re in before you start typing.
| Zone | What You Share | Where You Can Use AI | Risk | Typical Examples |
|---|---|---|---|---|
| Zone 1 ✓ Safe | Logic patterns, DAX structures, M code patterns, architectural questions. No table names, no field values, no client identifiers. | Any tool — public or private. Nothing sensitive in context. | None | “How do I write a RANKX with ties?” / “What pattern for rolling average in DAX?” |
| Zone 2 ⚠ Caution | Column names and types (no values), TMDL measure code, report structure descriptions, anonymised schema. | Organisation’s approved AI only — M365 Copilot on corporate tenant or Azure OpenAI private endpoint with DPA. | Low — with right endpoint | Pasting TMDL for documentation / sharing schema for modelling advice |
| Zone 3 🚫 Danger | Actual data rows, PII fields, financial account details, medical records, customer lists, employee data, commercially sensitive transactions. | Explicit DPO sign-off required. Private endpoint alone may not be enough — formal DPIA may be needed. | High — potential breach | Pasting CSV “to check format” / copying Power BI data view rows to debug DAX / uploading a spreadsheet extract |
If the document or data would cause a problem appearing in tomorrow’s newspaper — it is Zone 3. No AI tool touches it without explicit organisational approval, a Data Processing Agreement, and in sensitive cases a full Data Protection Impact Assessment (DPIA).
Scenario A: Developer Shares Financial Rows to Debug a Measure
A developer is building a balance sheet report. A DAX measure returns incorrect values at subsidiary level. To show the AI what’s happening, they copy 30 rows from the Power BI data view — company codes, cost centre codes, account codes, period balances — and paste them into a public AI tool to explain the data structure.
Consequences
- That data is now in a third-party system with no DPA and no contractual restriction on secondary use
- If those account codes relate to a publicly-listed entity, this may be market-sensitive information under MAR (Market Abuse Regulation)
- If cost centre codes can identify an individual (e.g. a sole trader’s account), it’s personal data under GDPR Article 4(1) — ICO notification may be required within 72 hours
- Depending on your acceptable use policy, this is a disciplinary matter regardless of intent
- The fix: Describe the shape, not the content. “I have company_key (integer), account_key (integer), period_key (integer), balance (decimal). The measure is summing incorrectly at subsidiary level when…” — no actual data in the prompt, same quality of AI help.
Scenario B: Executive Uploads Board Pack to Personal AI Account
An executive needs to quickly summarise a 60-page board pack. Their corporate Copilot is loading slowly. They use their personal Microsoft account — which also has Copilot — and upload the PDF. The board pack contains M&A targets, forward revenue projections, and a headcount reduction plan.
Consequences
- Board pack was classified Confidential under the organisation’s information policy. Personal accounts are out of scope for corporate data handling.
- If this is a consulting engagement, sharing the client’s board pack with a third party without authorisation is a direct NDA breach
- Forward revenue projections and M&A targets are inside information under MAR. Sharing with a consumer AI platform may constitute a disclosure violation — potential FCA investigation for a listed entity
- Personal liability for the individual. ICO notification obligation. Possible MAR investigation. Permanent reputational damage with the client.
- The fix: Corporate Copilot on the M365 tenant, for anything above Zone 1. If it’s slow, wait. The convenience of a personal account isn’t worth the exposure.
Building AI Within Your Organisation’s Guardrails
- Azure OpenAI private endpoint: AI calls routed through an endpoint inside your own Azure tenant, covered by the organisational DPA with Microsoft. Nothing hits a shared public service. This is the right setup for ACC-02, ACC-06, and ACC-07.
- M365 Copilot on corporate tenant: For the day-to-day developer work — code generation, drafting docs, validating requirements — this stays inside the tenant boundary and is already covered by the enterprise DPA.
- Gold layer as the AI boundary: Fabric workspace permissions should be set so Copilot and data agents can only reach the Gold layer through the semantic model. Bronze and Silver workspaces exclude AI service principals entirely — not by convention, by configuration.
- Microsoft Purview sensitivity labels: Label everything. Confidential and Highly Confidential labels can be wired to block export to AI tools. Technical controls beat policy documents. Don’t rely on people making the right call when they’re in a hurry.
- Shared prompt library: Keep a bank of approved Zone 1 and Zone 2 prompts in SharePoint or Teams. Team-wide consistency. Junior developers use tested patterns instead of improvising on live engagement data.
This is what a properly governed Fabric deployment actually looks like when AI is in the picture. The AI boundary isn’t just a policy statement on a SharePoint page — it’s built into the workspace permissions and access controls. Nothing raw gets near a model.
Copilot, data agents, Azure OpenAI called from a notebook — all of it is scoped to Gold layer and semantic model only. Bronze and Silver may carry raw data with PII. The architecture makes the right behaviour the only easy behaviour. Telling people to be careful is not a control. Removing the ability to make the mistake is.
The Power BI and Fabric developers doing well right now aren’t trying to become AI engineers. They’re staying in their lane — semantic modelling, DAX, data architecture, governance — and using AI to move faster inside that lane.
First draft of every tedious piece of work: documentation, boilerplate DAX, requirements checklists, M code transforms, pipeline skeletons. AI writes it, they review and certify it. They’re not publishing what the model produces — they’re publishing what they’ve validated.
And they’re not pasting client data into public tools. They’ve set up Azure OpenAI private endpoints, or they’re on M365 Copilot through the corporate tenant where the DPA covers them, or they stay in Zone 1 — no data in the prompt, no risk.
The foundation underneath it all is still the semantic model. Certified measures, governed metrics, reports that return the same number on Monday as they do on Friday. AI sits on top and makes the experience better — faster onboarding, natural language exploration, auto-generated narratives. But the foundation is doing the work. That hasn’t changed.
Reports aren’t going anywhere. The developers who understand both tools and know when to use each one — that’s a skillset that’s actually getting harder to find.
Build Responsibly. Build Fast. Build Both.
Reports for what you know. AI for what you’re still figuring out. A solid data foundation underneath both. That’s the setup that actually works.