University of Maryland researchers Dr. Yiqun Xie and Dr. Sergii Skakun are co-leading a new project titled "Advancing Deep Learning Towards Spatial Fairness," funded by the National Science Foundation (NSF) and Amazon through the Fairness in Artificial Intelligence (FAI) program. The project is a research collaboration with Dr. Xiaowei Jia from the University of Pittsburgh.
As innovations in artificial intelligence (AI) continue to generate excitement and create new opportunities for growth across a variety of domains, bias in AI techniques has been recognized as a major risk factor and a bottleneck impeding AI's deployment in critical societal applications. Indeed, clear biases have been identified in many popular use cases, including face recognition. NSF has partnered with Amazon to "jointly support computational research focused on fairness in AI, with the goal of contributing to trustworthy AI systems that are readily accepted and deployed to tackle grand challenges facing society."
This new project is a first-of-its-kind effort to understand and address fairness issues in AI techniques, primarily deep learning, with respect to locations (also known as spatial fairness). The goal is to reduce biases that are significantly linked to the locations or geographic areas of data samples. Such biases, if left unattended, can cause or exacerbate unfair distribution of resources, social division, spatial disparity, and weaknesses in resilience or sustainability. Spatial fairness is urgently needed for the use of AI in a wide variety of real-world problems such as agricultural monitoring and disaster management. Agricultural products, including crop maps and acreage estimates, inform important decisions such as the distribution of subsidies and the provision of farm insurance. Inaccuracies and inequities produced by spatial biases adversely affect these decisions. Similarly, effective and fair mapping of natural disasters such as floods or fires is critical for informing life-saving actions and for quantifying damages and risks to public infrastructure, which also feeds into insurance estimation. Machine learning, in particular deep learning, has been widely adopted for spatial datasets with promising results. However, straightforward applications of machine learning have had limited success in preserving spatial fairness because data distribution, quantity, and quality all vary across locations.
The consideration of location-based fairness brings unique challenges. For example, locations are represented by numerous continuous coordinates in space, whereas most existing fairness-aware formulations rely on predefined groups such as race and gender. The goal of this project is to develop a new generation of machine learning frameworks that explicitly preserve location-based fairness through new statistical formulations and associated training strategies.
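To make the contrast with group-based fairness concrete, here is a minimal, purely illustrative sketch (not the project's actual method): one simple way to audit spatial fairness is to partition continuous coordinates into grid cells and compare a model's error rate across cells. All data, the grid size, and the `per_cell_error` helper below are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 2-D coordinates in the unit square, binary labels,
# and predictions with an artificially injected location-dependent bias
# (more errors in the lower-left quadrant).
coords = rng.random((1000, 2))
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()
lower_left = (coords[:, 0] < 0.5) & (coords[:, 1] < 0.5)
flip = lower_left & (rng.random(1000) < 0.3)
y_pred[flip] = 1 - y_pred[flip]

def per_cell_error(coords, y_true, y_pred, n=2):
    """Mean error rate in each cell of an n x n grid over the unit square."""
    cells = np.minimum((coords * n).astype(int), n - 1)  # per-axis cell index
    ids = cells[:, 0] * n + cells[:, 1]                  # flatten to one id
    errs = (y_true != y_pred).astype(float)
    return {int(i): errs[ids == i].mean() for i in np.unique(ids)}

rates = per_cell_error(coords, y_true, y_pred)
# A simple spatial-disparity measure: the gap between the worst- and
# best-served cells. A spatially fair model would keep this gap small.
gap = max(rates.values()) - min(rates.values())
print(rates, "disparity gap:", round(gap, 3))
```

Note that the grid here plays the role that predefined groups (e.g., race or gender) play in conventional fairness formulations; because space is continuous, the choice of partition is itself a modeling decision, which is part of what makes spatial fairness formulations challenging.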