
House price data cleansing and segmentation tool.


Project background

The Land Registry publishes data for every housing sale transaction registered in England & Wales. This data has been used extensively for many analyses, from price evolution over time to the assessment of price differences between areas. The dataset is publicly available under the government licence and dates back to 1995.

The main variables in each dataset are the location of the transaction (with full postcode, used to geolocate each property), the type of asset (Flat, Detached, Bungalow, … and Other, which was discarded) and the price. Unfortunately, there is no qualitative or quantitative information about each asset (e.g. floor area would be very useful).
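As a minimal sketch of this preparation step (the data frame name `transactions` and its column names are illustrative assumptions, since the published CSV ships without a header row; 'O' is the Price Paid Data code for the 'Other' type):

    library(dplyr)

    # Keep the variables used by the tool and drop the 'Other' asset type.
    # 'transactions' is assumed to hold the raw records with user-assigned
    # column names.
    transactions <- transactions %>%
      filter(property_type != "O") %>%
      select(price, date, postcode, property_type, borough)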

The main issue with this dataset, as with any transactional dataset (residential, offices, …), is that the distribution is usually heavily skewed, with values ranging from just a few thousand pounds to many millions. For this reason, we have created a tool with two main objectives:

Cleanse the data and discard outliers

Although it is common practice, it would be wrong to cleanse the data by only applying a minimum and a maximum value to the whole dataset, mainly for the two reasons listed below:

1. The price distribution is heavily skewed, so global thresholds (or filters based on the mean and standard deviation) are dominated by the most extreme values.
2. Price levels vary enormously between Boroughs, so applying the same thresholds everywhere introduces a geographical bias.

To tackle the first issue, we decided to use the MAD (Median Absolute Deviation) instead of the standard deviation, to be less dependent on the variance of the data, and to apply a factor below and above the median to filter the data. So, a factor of 2 will discard values more than 2 MADs above or below the median; the higher the factor, the more extreme the values included in the final sample.

At the same time, to avoid geographical bias, we apply this methodology for each Borough. That is, we calculate the median and MAD for each Borough and then apply the MAD-factor filtering around that median. In other words, a £1 million house in a central Borough won't be considered an outlier, but it might be discarded if it is located in one of the outer Boroughs.
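A minimal sketch of this per-Borough MAD filter, assuming the `transactions` data frame from above and an illustrative `mad_factor` of 2 (in the app this value would come from a user control):

    library(dplyr)

    mad_factor <- 2  # illustrative; supplied by the user in the app

    # Compute the median and raw MAD per Borough, then keep only the prices
    # that fall within mad_factor MADs of that Borough's median.
    cleansed <- transactions %>%
      group_by(borough) %>%
      filter(abs(price - median(price)) <= mad_factor * mad(price, constant = 1)) %>%
      ungroup()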


Market segmentation

Real estate investors market their properties to very specific segments, e.g. the top end of the market. The ability to dynamically select which 'slice' of the market we are interested in is therefore very useful when working with housing prices.

With the percentile slider, the user can easily select the market segment he or she is interested in. For example, analysing the top end of the market is as simple as selecting 0.8 to 1 on the slider.
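In the app this range comes from a slider; outside Shiny, the same segmentation can be sketched as a simple quantile filter (again, `cleansed` and `price` are the illustrative names used above, not the app's own):

    library(dplyr)

    # Keep only the transactions whose price falls between the selected
    # percentiles of the cleansed distribution.
    segment_market <- function(data, pct_range = c(0.8, 1)) {
      bounds <- quantile(data$price, probs = pct_range)
      data %>% filter(price >= bounds[1], price <= bounds[2])
    }

    # e.g. the top end of the market (top 20% of prices)
    top_end <- segment_market(cleansed, c(0.8, 1))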

Download the data

The primary objective of this tool is to apply specific criteria to filter and segment a dataset, so a download capability has also been implemented. We can download either the full dataset, cleansed according to the user's selection, or a Borough summary for convenience.
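In a Shiny app this is typically wired up with downloadHandler; the sketch below is an assumption about how it might look here (the output id, the `segment()` reactive and the file name are all illustrative):

    library(shiny)

    # Server side: stream the currently filtered data as a CSV file.
    output$download_data <- downloadHandler(
      filename = function() paste0("house_prices_cleansed_", Sys.Date(), ".csv"),
      content  = function(file) write.csv(segment(), file, row.names = FALSE)
    )

    # UI side: downloadButton("download_data", "Download cleansed data")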

Next steps

This application should be extended to allow the following:

Access the hosted Shiny Application: https://natxomoreno.shinyapps.io/London_House_Prices_Stat_Explorer/
