Free Checklist

AI wrote your DAX.
Looks right. Counts wrong.

Seven questions to ask before you put any AI-generated DAX measure into a report.

A practical pre-flight checklist with real diagnostics - because "looks right" is not the same as "is right"

When AI-generated DAX arrived, it made things faster and also more dangerous. The code compiles. It runs. It returns numbers. The numbers look plausible. And then, three weeks later, someone notices that the "Orders" measure has been counting line items all along.

This checklist is not a theoretical framework - it is a set of seven concrete questions you can run through in DAX Studio before a measure goes anywhere near a live report.

Who this is for: intermediate Power BI developers who use AI to generate or accelerate DAX, know enough to be suspicious, but want a systematic way to verify before shipping.

Why AI DAX fails - it is almost always one of these
  • Filter context misunderstandings - overusing ALL or REMOVEFILTERS, accidentally disabling slicers
  • Row context vs filter context confusion - iterators that ignore "current row"
  • Implicit vs explicit measure drift - AI assumes a column should just be summed
  • BLANK and data type surprises - DISTINCTCOUNT counting BLANK, silent type coercions
  • Aggregation granularity and non-additivity - AI "fixing" a total by summing rows
  • Time intelligence without proper scaffolding - fact-date column instead of date table
  • Relationship propagation - the wrong relationship is active

The next step: understand, not just fix

Brighter is a Power BI learning environment that works on your actual tasks. Bring your real DAX, your stuck measures, your broken models. We work through them with you - so next time you don't need the checklist.

You'll get:
  • Early access (we invite people in waves)
  • +300 bonus credits on your account
  • Price lock (keep early pricing)
Join Waitlist

Free. No credit card required.

Q1
Most common failure
Is the metric definition unambiguous at the grain you plan to show?
Granularity mismatch · Wrong base table · Row count vs distinct entity
AI cannot infer your semantic grain from table names. It sees "SalesLine" and produces a COUNTROWS. You want "orders." These are not the same thing. Run this query first - it takes 30 seconds:
Diagnostic query - DAX Studio
EVALUATE SUMMARIZECOLUMNS(
    'Date'[Month],
    "Fact rows",       COUNTROWS( 'SalesLine' ),
    "Distinct orders", DISTINCTCOUNT( 'SalesLine'[OrderID] )
)
ORDER BY 'Date'[Month]
Red flag: your AI measure for "Orders" matches Fact rows, not Distinct orders.
Faulty AI measure
Orders (AI) = COUNTROWS ( 'SalesLine' )
Corrected
Orders = DISTINCTCOUNT ( 'SalesLine'[OrderID] )
One more thing: DISTINCTCOUNT counts BLANK as a distinct value. If you want to exclude unknown records, use DISTINCTCOUNTNOBLANK.
Q2
Does it respect the intended filter context - and only remove filters on purpose?
Accidental filter removal · Wrong total definition · Measure ignores slicers
AI tends to reach for ALL or REMOVEFILTERS without thinking about scope. The result: a measure that ignores slicers entirely, or a percentage that never sums to 100% within a category.
Quick visual check: put the measure in a matrix, add a slicer on a dimension it should respond to. Does the number change? If not, something is nuking your filter context. For percentage measures, shares inside each category should sum to approximately 1:
Sanity check - shares should sum to ~1 per category
EVALUATE SUMMARIZECOLUMNS(
    'Product'[Category],
    "Sum of shares (should be ~1)",
        SUMX(
            VALUES ( 'Product'[ProductName] ),
            [Sales % (AI)]
        )
)
ORDER BY 'Product'[Category]
Faulty - ALL removes all product filters, denominator is always grand total
Sales % (AI) =
DIVIDE(
    [Sales Amount],
    CALCULATE( [Sales Amount], ALL ( 'Product' ) )
)
Corrected - remove only the product name filter
Sales % within Category =
DIVIDE(
    [Sales Amount],
    CALCULATE( [Sales Amount], REMOVEFILTERS('Product'[ProductName]) )
)
Rule of thumb: REMOVEFILTERS('Product'[ProductName]) is almost always safer than ALL('Product').
Q3
Most confusing symptoms
Are row context and context transition handled correctly inside iterators?
Missing context transition · Every row returns same value · All customers qualify
Every row returns the same number. Or "all customers qualify" for a condition that should filter most of them out. The cause is almost always a missing CALCULATE inside an iterator. Run this - "Without CALCULATE" and "With CALCULATE" should differ per customer:
Diagnostic - columns should differ per customer
EVALUATE TOPN(
    10,
    ADDCOLUMNS(
        VALUES ( 'Customer'[CustomerKey] ),
        "Without CALCULATE", COUNTROWS('Sales'),
        "With CALCULATE",    CALCULATE( COUNTROWS('Sales') )
    ),
    [With CALCULATE], DESC
)
If "Without CALCULATE" is the same number for every customer - that is your bug.
Faulty - COUNTROWS has no context transition, always sees all sales
Repeat Customers (AI) =
COUNTROWS(
    FILTER( 'Customer', COUNTROWS('Sales') > 1 )
)
Corrected - CALCULATE transitions row context to filter context per customer
Repeat Customers =
COUNTROWS(
    FILTER(
        VALUES('Customer'[CustomerKey]),
        CALCULATE( COUNTROWS('Sales') ) > 1
    )
)
Q4
Are the relationships you rely on active, unambiguous, and propagating correctly?
Wrong relationship path · Inactive relationship ignored · Bidirectional ambiguity
AI often patches relationship issues with expensive inline filters instead of using the right relationship function. The measure compiles, looks reasonable, and produces subtly wrong numbers whenever ship dates and order dates diverge.
First check the model visually: active vs inactive relationships, cardinality, cross-filter direction. Then validate with a query - if both columns are identical month-by-month and you know ship lag exists, the AI measure is still using the active OrderDate relationship.
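One way to run that validation, using the table, column, and measure names from this example ('Sales'[ShipDate], 'Date'[Date], [Sales Amount] - substitute your own):

Diagnostic - compare the two date roles month by month
EVALUATE SUMMARIZECOLUMNS(
    'Date'[Month],
    "By order date (active)", [Sales Amount],
    "By ship date",
        CALCULATE(
            [Sales Amount],
            USERELATIONSHIP( 'Sales'[ShipDate], 'Date'[Date] )
        )
)
ORDER BY 'Date'[Month]
If the AI measure tracks the first column instead of the second, it never switched relationships.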
Faulty - intersects order-date filter with ship-date filter
Shipped Sales (AI) =
CALCULATE(
    [Sales Amount],
    FILTER(
        'Sales',
        'Sales'[ShipDate] >= MIN('Date'[Date])
            && 'Sales'[ShipDate] <= MAX('Date'[Date])
    )
)
Corrected - activate the intended relationship for this calculation only
Shipped Sales =
CALCULATE(
    [Sales Amount],
    USERELATIONSHIP( 'Sales'[ShipDate], 'Date'[Date] )
)
Q5
Often overlooked
Are BLANKs, unknown members, and data types handled explicitly?
BLANK counted as real value · Unknown-member rows · Division by zero
DAX BLANK is not SQL NULL. DISTINCTCOUNT counts it as a member. Referential integrity violations introduce phantom rows. AI almost never anticipates any of this. This two-line diagnostic has caught real production bugs:
Two-line diagnostic
EVALUATE ROW(
    "DistinctCount",        DISTINCTCOUNT ( 'Sales'[CustomerKey] ),
    "DistinctCountNoBlank", DISTINCTCOUNTNOBLANK ( 'Sales'[CustomerKey] )
)
If these differ, you have BLANK customer keys being counted. Check for orphan fact rows:
Orphan row check
EVALUATE ROW(
    "Fact rows with blank key",
        CALCULATE( COUNTROWS('Sales'), ISBLANK('Sales'[CustomerKey]) )
)
Faulty
Customers (AI) = DISTINCTCOUNT ( 'Sales'[CustomerKey] )
Corrected
Customers = DISTINCTCOUNTNOBLANK ( 'Sales'[CustomerKey] )
Also: always use DIVIDE instead of the / operator when the denominator can be zero or BLANK. DIVIDE returns BLANK (or an optional alternate result) in those cases; the / operator returns Infinity (or NaN for 0/0), which then leaks into visuals.
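A minimal illustration of the difference ([Profit] here is a hypothetical measure):

Division safety - DIVIDE vs the / operator
Margin % (unsafe) = [Profit] / [Sales Amount]               // Infinity/NaN when sales is 0
Margin %          = DIVIDE ( [Profit], [Sales Amount] )     // BLANK when sales is 0 or BLANK
Margin % (zero)   = DIVIDE ( [Profit], [Sales Amount], 0 )  // third argument: return 0 instead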
Q6
Is time intelligence grounded on a proper date table?
Fact-date column used directly · Auto date/time issues · Missing dates or BLANKs
Classic time intelligence requires: a dedicated date table, continuous date column with no gaps, no BLANKs, full-year coverage, table marked as date table. AI often skips all of this and uses the fact-date column directly. It works in simple scenarios and breaks in complex ones.
Date table validation - BlankDates should be zero, Rows should equal ExpectedDaysInRange
EVALUATE ROW(
    "MinDate",            MIN('Date'[Date]),
    "MaxDate",            MAX('Date'[Date]),
    "Rows",               COUNTROWS('Date'),
    "DistinctDates",      DISTINCTCOUNT('Date'[Date]),
    "BlankDates",         CALCULATE( COUNTROWS('Date'), ISBLANK('Date'[Date]) ),
    "ExpectedDaysInRange",
        DATEDIFF( MIN('Date'[Date]), MAX('Date'[Date]), DAY ) + 1
)
Faulty - uses fact-date column, breaks when a second fact table is introduced
Sales LY (AI) =
CALCULATE(
    [Sales Amount],
    SAMEPERIODLASTYEAR( 'Sales'[OrderDate] )
)
Corrected - uses the dedicated date table
Sales LY =
CALCULATE(
    [Sales Amount],
    SAMEPERIODLASTYEAR( 'Date'[Date] )
)
Note: if your model uses auto date/time and has more than one fact table, disable it and build a single shared date table. Auto date/time creates a hidden table per date column - they don't share a common filter.
Q7
The real test
Do totals reconcile to the intended meaning, and will it hold up at scale?
Non-additive totals misread · Double-counting via forced additivity · Performance at scale
Non-additive totals (distinct counts, ratios, semi-additive balances) are supposed to look different from the sum of row values. That is not a bug - that is correct behavior. AI does not know this and will helpfully "fix" the total by summing rows, changing the meaning entirely.
Additive vs non-additive check - Customer-months will always be larger
EVALUATE SUMMARIZECOLUMNS(
    'Date'[Year],
    "Distinct customers",
        DISTINCTCOUNTNOBLANK('Sales'[CustomerKey]),
    "Customer-months (sum of monthly)",
        SUMX(
            VALUES('Date'[Month]),
            DISTINCTCOUNTNOBLANK('Sales'[CustomerKey])
        )
)
ORDER BY 'Date'[Year]
Faulty fix - changes meaning without telling anyone
Distinct Customers (AI Fix) =
SUMX(
    VALUES('Date'[Month]),
    DISTINCTCOUNT ( 'Sales'[CustomerKey] )
)
Corrected - two honest measures instead of one misleading one
Distinct Customers =
DISTINCTCOUNTNOBLANK ( 'Sales'[CustomerKey] )

Customer-months (additive) =
SUMX(
    VALUES('Date'[Month]),
    DISTINCTCOUNTNOBLANK ( 'Sales'[CustomerKey] )
)
Performance tip
-- Use Performance Analyzer to capture visual query time before shipping.
-- Prefer set-based aggregations (SUM) over row-by-row iterators (SUMX)
-- when you don't need per-row expressions.
-- Use VAR to avoid repeated evaluation of the same expression.
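The VAR point in practice - a sketch that reuses the [Sales Amount] and [Sales LY] measures from Q6:

Sales YoY % =
VAR CurrentSales = [Sales Amount]
VAR PriorSales   = [Sales LY]
RETURN
    -- each measure is evaluated once, then reused
    DIVIDE ( CurrentSales - PriorSales, PriorSales )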
Error class                | Symptom                                                                      | Fix pattern
Grain mismatch             | Counts feel inflated; adding a line-level column changes totals unexpectedly | Use DISTINCTCOUNT on the entity key, or count the header table directly
Filter context nuked       | Measure ignores slicers; % doesn't sum to 100% within scope                  | Remove only specific columns; use KEEPFILTERS for intersection semantics
Missing context transition | Iterator returns same value for every row; all customers qualify             | Add CALCULATE inside the iterator predicate; refactor to explicit measures
Wrong relationship path    | Numbers match wrong date role; slicers don't propagate as expected           | Use USERELATIONSHIP for inactive relationships; TREATAS for virtual ones
BLANK / unknown member     | Distinct count off by 1; unexpected (Blank) category appears                 | Use DISTINCTCOUNTNOBLANK; filter out BLANK explicitly; fix referential integrity
Time intelligence          | YoY/YTD inconsistent across facts; filters don't apply across tables         | Build and mark a proper date table; avoid auto date/time in complex models
Non-additive totals        | Users complain totals don't add up; visuals are slow                         | Keep non-additive totals as-is; create additive companion KPI; optimize with VAR
The principle behind all 7

AI generates DAX for a generic model. You have a real one. These checks aren't about distrust - they're about not shipping embarrassing numbers to your stakeholders. A 10-minute pre-flight check in DAX Studio beats a 3-hour incident debrief every time.

None of this means AI-generated DAX is bad. It is genuinely useful - it accelerates the first draft, handles boilerplate, and often gets you 80% of the way there. The problem is that the remaining 20% fails silently.

Good luck. And yes, DAX will still surprise you. Even after all this.

References: Microsoft Learn (DAX documentation, Power BI guidance), SQLBI.com (Alberto Ferrari and Marco Russo). All DAX queries tested in DAX Studio.
