• Description

Reliable data about the socio-economic conditions of individuals, such as health indexes, consumption expenditures and wealth assets, remain scarce for most countries. Traditional methods to collect such data include on-site surveys, which can be expensive and labour-intensive. On the other hand, remote sensing data, such as high-resolution satellite imagery, are becoming increasingly available. To circumvent the lack of socio-economic data at high granularity, computer vision has already been applied successfully to raw satellite imagery sampled from resource-poor countries.

In this work we apply a similar approach to the metropolitan areas of five different cities in North and South America, starting from pre-trained convolutional models used for poverty mapping in developing regions. Through a transfer learning process, we estimate household income from visual satellite features. The urban environment we consider differs from the resource-poor training environment in several respects, such as its high heterogeneity in population density. By leveraging both official and crowd-sourced data at city scale, we show the feasibility of estimating the socio-economic conditions of different neighborhoods from satellite data.
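The transfer-learning step described above can be sketched as follows: visual features extracted from satellite tiles by a frozen pre-trained CNN are fed to a simple regressor that predicts household income. This is a minimal illustration, not the paper's actual pipeline; the feature matrix is simulated here (in practice it would come from a network's penultimate-layer activations), and the linear income model and all parameter values are assumptions for demonstration.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical stand-in: these would be visual features extracted from
# satellite image tiles by a pre-trained CNN; here we simulate them.
rng = np.random.default_rng(0)
n_tiles, n_features = 500, 64
features = rng.normal(size=(n_tiles, n_features))

# Simulated household income, linearly related to a few features plus noise
# (an assumption made only so the example has a learnable signal).
true_w = np.zeros(n_features)
true_w[:5] = [2.0, -1.5, 1.0, 0.5, -0.7]
income = features @ true_w + rng.normal(scale=0.5, size=n_tiles)

X_train, X_test, y_train, y_test = train_test_split(
    features, income, test_size=0.2, random_state=0)

# Transfer-learning step: regress income on the frozen CNN features.
model = Ridge(alpha=1.0).fit(X_train, y_train)
r2 = model.score(X_test, y_test)
print(f"held-out R^2: {r2:.2f}")
```

Keeping the CNN frozen and fitting only a lightweight regressor on top is what makes the approach practical when labelled income data are scarce relative to the imagery.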

Predicting City Poverty Using Satellite Imagery