WSDM 2021

Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information

Enyan Dai, Suhang Wang
The Pennsylvania State University, USA

Graph neural networks (GNNs) have achieved state-of-the-art performance in modeling graphs.

Despite their great success, as with many other models, GNNs risk inheriting bias from the training data. Moreover, this bias can be magnified by the graph structure and the message-passing mechanism of GNNs.

The risk of discrimination limits the adoption of GNNs in sensitive domains such as credit score estimation.

Though there are extensive studies on fair machine learning, most of them focus on i.i.d. data and rely on a large number of annotated sensitive attributes for debiasing. This is in contrast to the practical scenario of graph data, which is non-i.i.d. and typically offers only sparse sensitive attribute annotations. No existing work addresses fair GNNs, let alone fair GNNs with limited sensitive attribute information. Therefore, we study the novel and important problem of learning fair GNNs with limited sensitive attribute information. We propose a novel framework, FairGNN, which reduces the bias of GNNs while maintaining high node classification accuracy by leveraging the graph structure and the limited sensitive attribute information.
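At a high level, FairGNN couples a GNN node classifier with a sensitive attribute estimator, which predicts the missing sensitive attributes, and an adversary, which tries to recover the sensitive attribute from the node representations while the classifier learns to make it fail. The PyTorch sketch below illustrates this adversarial setup under simplifying assumptions; the module layout, the dense normalized adjacency `a_hat`, and the weights `alpha` and `beta` are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch of adversarial debiasing with an estimated sensitive
# attribute, in the spirit of FairGNN. Names and hyperparameters are
# illustrative assumptions, not the paper's exact architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCNLayer(nn.Module):
    """One graph convolution: H' = A_hat @ H @ W, with A_hat a normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, h):
        return self.lin(a_hat @ h)

class FairGNNSketch(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.estimator = GCNLayer(in_dim, 1)      # fills in missing sensitive attributes
        self.encoder = GCNLayer(in_dim, hid_dim)  # GNN backbone for node representations
        self.classifier = nn.Linear(hid_dim, 1)   # node-label head
        self.adversary = nn.Linear(hid_dim, 1)    # tries to recover the sensitive attribute

    def forward(self, a_hat, x):
        s_logit = self.estimator(a_hat, x).squeeze(-1)
        h = F.relu(self.encoder(a_hat, x))
        return self.classifier(h).squeeze(-1), self.adversary(h).squeeze(-1), s_logit

def train_step(model, a_hat, x, y, s, s_mask, opt_main, opt_adv, alpha=1.0, beta=1.0):
    """One alternating update. y, s are float tensors; s_mask marks nodes with known s."""
    # Step 1: train the adversary to predict the (known or estimated) sensitive attribute.
    _, adv_logit, s_logit = model(a_hat, x)
    s_target = torch.where(s_mask, s, torch.sigmoid(s_logit).detach())
    adv_loss = F.binary_cross_entropy_with_logits(adv_logit, s_target)
    opt_adv.zero_grad(); adv_loss.backward(); opt_adv.step()

    # Step 2: train estimator + encoder + classifier to classify nodes well,
    # fit the estimator on nodes with known s, and fool the adversary.
    y_logit, adv_logit, s_logit = model(a_hat, x)
    cls_loss = F.binary_cross_entropy_with_logits(y_logit, y)
    est_loss = F.binary_cross_entropy_with_logits(s_logit[s_mask], s[s_mask])
    s_target = torch.where(s_mask, s, torch.sigmoid(s_logit).detach())
    fool_loss = -F.binary_cross_entropy_with_logits(adv_logit, s_target)  # maximize adversary error
    loss = cls_loss + beta * est_loss + alpha * fool_loss
    opt_main.zero_grad(); loss.backward(); opt_main.step()
    return loss.item()
```

In such a setup, `opt_adv` would optimize only `model.adversary.parameters()` and `opt_main` the remaining parameters, so each step updates the intended component; `alpha` trades classification accuracy against fairness.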

Theoretical analysis shows that FairGNN can ensure fairness under mild conditions given a limited number of nodes with known sensitive attributes. Experiments on real-world datasets demonstrate the effectiveness of the proposed framework in eliminating discrimination while maintaining high node classification accuracy.
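Concretely, discrimination in this setting is usually measured with standard group-fairness criteria such as statistical parity and equal opportunity, with $s$ the binary sensitive attribute, $y$ the true label, and $\hat{y}$ the model prediction:

```latex
% Group-fairness criteria; smaller is fairer.
\Delta_{\mathrm{SP}} = \bigl| P(\hat{y}=1 \mid s=0) - P(\hat{y}=1 \mid s=1) \bigr|,
\qquad
\Delta_{\mathrm{EO}} = \bigl| P(\hat{y}=1 \mid y=1, s=0) - P(\hat{y}=1 \mid y=1, s=1) \bigr|
```

A model satisfying statistical parity makes positive predictions at equal rates across sensitive groups; equal opportunity additionally conditions on the true label, so qualified members of each group are treated alike.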