The goal is to make UK house price data readable quickly: a long-run trend for context, a property-type comparison to show divergence over time, and regional summaries to highlight local differences.
These screenshots show the dashboard under different time windows. The “slider” screenshots demonstrate the filter window itself, and the “boxes” screenshots show how the headline metrics change when you shift to a more recent period.
These are intentionally presented as a compact comparison: same measure definitions, different filter context. This is useful for showing how “growth” and “level” can look different depending on the time window.
The regional visual is used as a “pattern finder” — quickly identifying clusters and outliers. The screenshots below show an example region view (Staffordshire) and a zoomed London view. (For the portfolio version, screenshots are used rather than relying on a live map experience.)
This is the part that makes the portfolio credible: not just charts, but proof of data preparation, modelling discipline, and measure logic.
This view shows the dataset after loading into Power BI, where I validate structure, data types, and analytical usability before building visuals. A key step at this stage was creating a DimDate table. Because the dashboard uses multiple datasets (overall UK metrics and property-type data), relying on raw date columns inside each table would lead to fragmented filtering and inconsistent time behaviour. To resolve this, I created a dedicated DimDate table containing a continuous sequence of dates, added derived fields (Year, Quarter, Month) for flexible time analysis, and linked both datasets to this single date dimension. This approach ensures consistent filtering, accurate time-based aggregations, reliable DAX calculations, and a properly structured model, with dates acting as a shared reference point rather than isolated columns.
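A DimDate table of this kind can be built directly as a DAX calculated table. The sketch below is illustrative rather than the exact table used in the report: the date bounds and column names are assumptions, and in practice the range would cover the full span of both datasets before the table is marked as a date table and related to each fact table on its `Date` column.

```dax
DimDate =
ADDCOLUMNS (
    -- Continuous daily sequence; bounds shown here are placeholders
    CALENDAR ( DATE ( 1995, 1, 1 ), DATE ( 2024, 12, 31 ) ),
    "Year", YEAR ( [Date] ),
    "Quarter", "Q" & FORMAT ( [Date], "Q" ),
    "Month", FORMAT ( [Date], "MMM" ),
    "MonthNumber", MONTH ( [Date] )   -- used to sort the Month label correctly
)
```

With both fact tables related to this single dimension, a slicer on `DimDate` filters the overall UK metrics and the property-type data in one action, which is what keeps time behaviour consistent across visuals.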
The dashboard relies on a set of custom measures written in DAX to control how values are aggregated, filtered, and displayed across visuals. Rather than relying solely on default aggregations, measures were used to ensure that calculations behave consistently under different filter contexts.
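As a sketch of what such measures look like, the pair below defines an explicit base aggregation and a year-over-year comparison built on it. The table and column names (`FactPrices[AveragePrice]`) are assumptions for illustration, not the report's actual schema.

```dax
Average Price =
AVERAGE ( FactPrices[AveragePrice] )

Average Price YoY % =
-- Compare the current filter context against the same period one year earlier
VAR CurrentValue = [Average Price]
VAR PriorValue =
    CALCULATE ( [Average Price], DATEADD ( DimDate[Date], -1, YEAR ) )
RETURN
    DIVIDE ( CurrentValue - PriorValue, PriorValue )
```

Defining the base aggregation as its own measure, rather than dropping a column onto a visual, means every downstream calculation inherits one agreed definition of “average price” and responds predictably to slicers.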
An important refinement in the second iteration of the model was the introduction of “latest period” measures. This adjustment ensures that the headline metrics consistently reflect current market conditions rather than blended historical values. By anchoring calculations to the most recent available month, headline values remain comparable when the date filter changes, and regional comparisons stay stable and analytically meaningful instead of aggregating across the entire date range.
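A minimal sketch of a “latest period” measure is shown below, assuming the hypothetical `[Average Price]` base measure and the `DimDate` dimension described earlier. It finds the most recent date visible under the current slicer selection and evaluates the headline metric for that month only.

```dax
Latest Average Price =
-- Most recent date within the user's slicer selection
VAR LatestDate =
    CALCULATE ( MAX ( DimDate[Date] ), ALLSELECTED ( DimDate ) )
RETURN
    -- Evaluate the base measure at that single point in time
    CALCULATE ( [Average Price], DimDate[Date] = LatestDate )
```

Because the card visuals use this measure instead of the plain average, narrowing or widening the date slider changes *which* month is reported but never blends months together.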
Within this stage, I used Power Query to promote headers, correct data types, and standardise column naming for consistency across tables. Establishing accurate date and numeric formats was particularly important, as these fields drive filtering behaviour and calculation logic throughout the dashboard. Beyond initial preparation, this view also served as an inspection layer: I used queries to get a clearer picture of the underlying data when validating relationships or troubleshooting aggregation issues during later modelling iterations. This step ensured the analytical model was built on correctly interpreted data rather than on assumptions about the source files.
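The shape of those Power Query steps can be sketched in M as follows. The file path, column names, and renames are placeholders for illustration; the real queries operate on the HPI source files described above.

```powerquery
let
    // Load the raw CSV; path and delimiter are illustrative
    Source = Csv.Document ( File.Contents ( "UK-HPI.csv" ), [Delimiter = ","] ),
    // Use the first row as column headers
    Promoted = Table.PromoteHeaders ( Source, [PromoteAllScalars = true] ),
    // Set explicit date/numeric/text types so filtering and DAX behave correctly
    Typed = Table.TransformColumnTypes (
        Promoted,
        { {"Date", type date}, {"AveragePrice", type number}, {"RegionName", type text} }
    ),
    // Standardise column naming across tables
    Renamed = Table.RenameColumns ( Typed, { {"RegionName", "Region"} } )
in
    Renamed
```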