Zoopla is one of the UK’s two dominant property portals, alongside Rightmove, with hundreds of thousands of active for-sale and to-rent listings across England, Scotland, Wales, and Northern Ireland. For property analysts, lettings agencies, PropTech founders, and housing researchers, Zoopla’s public listing data is a critical input: asking prices, monthly rents, bedroom and bathroom counts, property types, and full addresses. There is no public API for bulk extraction. Working with Zoopla data at any meaningful scale means collecting it programmatically.
This post walks through why Zoopla data matters, what makes it hard to collect cleanly, and how to extract structured listings without writing or maintaining a scraper.
Zoopla pages render listing summaries server-side, but search filters, map-bounded queries, and listing detail pages depend on JavaScript state and client-side rendering. Reliable bulk collection runs into a familiar set of problems at scale.
Bot detection and request fingerprinting: Zoopla applies behavioural and network-level signals to identify automated traffic. Sequential requests across paginated search results, missing browser-style signals, and consistent network paths over many pages are flagged as non-human and result in throttled or degraded responses. Reliable extraction depends on session continuity, realistic pacing across paginated result sets, and network path diversity over long-running jobs — that infrastructure layer is the hard part, not the HTML parsing.
Result-set behaviour is highly geographic. A search for properties in central London paginates across dozens of pages with thousands of listings; a search for a rural Scottish postcode might return twelve listings with no pagination at all. Collection logic tuned only for high-density urban areas silently produces incomplete datasets in low-density regions, with no obvious error to catch. Handling the full range of UK market densities — from inner-city flats to remote rural cottages — needs logic that adapts to variable result sizes.
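One way to guard against silent truncation is to compare the number of listings collected against the total the portal reports, and treat any shortfall as an error rather than a finished dataset. A minimal sketch, assuming a hypothetical `fetch_page` callable that returns one page of listings plus the reported total (a real portal response will be shaped differently):

```python
def collect_all_pages(fetch_page):
    """Collect listings across however many pages a search returns.

    `fetch_page(page)` is assumed to return (listings, total_count).
    Works for a dense urban search with dozens of pages and for a
    rural search that fits on a single page.
    """
    listings = []
    page = 1
    while True:
        batch, total = fetch_page(page)
        listings.extend(batch)
        # Stop on an empty page, or once the reported total is reached.
        if not batch or len(listings) >= total:
            break
        page += 1
    # Flag silent truncation instead of returning an incomplete dataset.
    if len(listings) < total:
        raise RuntimeError(f"collected {len(listings)} of {total} listings")
    return listings
```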
Field completeness is uneven. A new-build flat listing carries agent name, multiple photos, floor plan, EPC rating, and full description; an older private rental listing might surface only a partial address and a monthly rent. Pipelines that assume all fields are always present either crash on partial listings or quietly drop data. A production-grade collection layer must handle optional fields gracefully across Zoopla’s full diversity of listing types.
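Graceful handling of optional fields usually comes down to a normalisation step that maps whatever a listing exposes onto a fixed schema, keeping partial listings instead of crashing on them or dropping them. A sketch, assuming raw listings arrive as dicts carrying any subset of the schema fields:

```python
def normalise_listing(raw: dict) -> dict:
    """Map a raw listing dict to a fixed schema, tolerating missing fields.

    The field names mirror the actor's output schema; `raw` stands in for
    whatever partial data a sparse listing exposes.
    """
    return {
        "listing_id": raw.get("listing_id"),
        "price": raw.get("price"),                     # may be absent (e.g. POA)
        "property_type": raw.get("property_type", "unknown"),
        "bedrooms": raw.get("bedrooms"),               # None rather than a crash
        "bathrooms": raw.get("bathrooms"),
        "address": raw.get("address", ""),
    }
```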
We maintain a Zoopla UK Property Scraper on Apify that handles JavaScript rendering where required, pagination, session management, and field normalisation across listing types. You give it a location query, choose for sale or to rent, and set how many results you want; it returns clean structured property data ready for analysis or product feeds.
Search for properties for sale in London:
```json
{
  "location": "London",
  "listing_type": "for_sale",
  "max_results": 50
}
```
Search for rentals in Manchester:
```json
{
  "location": "Manchester",
  "listing_type": "to_rent",
  "max_results": 100
}
```
Using the Apify Python client:
```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

run_input = {
    "location": "Edinburgh",
    "listing_type": "for_sale",
    "max_results": 50,
}

# Start the actor run and wait for it to finish
run = client.actor("cryptosignals/zoopla-scraper").call(run_input=run_input)

# Iterate over the structured listings in the run's default dataset
for item in client.dataset(run["defaultDatasetId"]).iterate_items():
    print(item)
```
No selectors to maintain, no proxies to manage, no session state to track on your side.
Each property listing returns a structured object:
```json
{
  "listing_id": "72599145",
  "price": "£725,000",
  "property_type": "house",
  "bedrooms": 4,
  "bathrooms": 3,
  "address": "Wimbledon Road, London SW17",
  "listing_type": "for_sale",
  "listing_url": "https://www.zoopla.co.uk/for-sale/details/72599145/",
  "scraped_at": "2026-05-01T06:00:00.000Z"
}
```
| Field | Type | Description |
|---|---|---|
| listing_id | string | Zoopla unique listing identifier |
| price | string | Asking price (sale) or monthly rent, GBP-formatted |
| property_type | string | house, flat, bungalow, terraced, etc. |
| bedrooms | integer | Number of bedrooms |
| bathrooms | integer | Number of bathrooms, where listed |
| address | string | Street and area, including outward postcode where available |
| listing_type | string | for_sale or to_rent |
| listing_url | string | Direct Zoopla property URL |
| scraped_at | string | ISO 8601 collection timestamp |
Output is available as JSON, CSV, or XLSX. CSV loads directly into pandas or Excel for price and rent distribution analysis. JSON feeds cleanly into rental yield calculators or PropTech APIs. Apify’s scheduling lets you run weekly or monthly refreshes across target postcodes without managing infrastructure.
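One small step sits between the raw output and any distribution analysis: the price field is a GBP-formatted string, so '£725,000' needs converting to a number first. A stdlib-only sketch (the '£1,950 pcm' rent format is an assumption about how rental prices may appear):

```python
import re

def price_to_number(price):
    """Parse a GBP-formatted string such as '£725,000' into a float.

    Returns None for missing or unparseable values so partial
    listings survive the conversion.
    """
    digits = re.sub(r"[^\d.]", "", price or "")
    return float(digits) if digits else None

prices = ["£725,000", "£1,950 pcm", None]
numeric = [price_to_number(p) for p in prices]
```

The resulting column drops straight into a pandas DataFrame or spreadsheet for describing price and rent distributions.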
For comprehensive UK housing analysis, combine Zoopla data with other major portals. We maintain a parallel Rightmove Property Scraper with the same input/output structure, so you can run both feeds against the same locations and merge them for deduplicated, multi-source coverage. Triangulating across Zoopla and Rightmove catches listings that appear on only one portal and gives you a more complete view of any local market.
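Because listing_id is portal-specific, deduplicating across Zoopla and Rightmove needs a composite key built from the fields both feeds share. A heuristic sketch (the key choice is an assumption; listings with reworded addresses will slip through):

```python
def merge_portals(zoopla, rightmove):
    """Merge two portal feeds, deduplicating on (address, bedrooms, price).

    Keeps the first occurrence of each key, so listings unique to
    either portal survive the merge.
    """
    seen = set()
    merged = []
    for item in zoopla + rightmove:
        key = (item.get("address", "").lower(),
               item.get("bedrooms"),
               item.get("price"))
        if key not in seen:
            seen.add(key)
            merged.append(item)
    return merged
```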
The actor uses Pay Per Result pricing at $0.005 per property listing (effective May 17, 2026). The first 5 results per run are free, so you can test the actor before committing to a full collection job.
| Volume | Cost |
|---|---|
| 500 listings (single region snapshot) | ~$2.50 |
| 5,000 listings (multi-region pull) | ~$25.00 |
| Weekly 1,000-listing refresh | ~$20.00/month |
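These figures follow directly from the per-listing rate; a quick sketch that also folds in the 5 free results per run (assuming roughly four refreshes per month):

```python
RATE_USD = 0.005      # Pay Per Result price per listing
FREE_PER_RUN = 5      # first 5 results of every run are free

def run_cost(listings: int) -> float:
    """Estimated cost in USD of a single run returning `listings` results."""
    return max(listings - FREE_PER_RUN, 0) * RATE_USD

snapshot = run_cost(500)              # single-region snapshot
monthly_refresh = 4 * run_cost(1000)  # four weekly 1,000-listing runs
```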
Try the Zoopla UK Property Scraper free on Apify Store →
Apify’s free tier covers initial testing — the first 5 results per run cost nothing. Sign up here if you do not have an account. The actor connects to Apify’s scheduling, webhook, and dataset APIs so you can run automated UK property data pipelines without building scraping infrastructure yourself.