
Constraining Deep Representations with a Noise Module for Fair Classification

Mattia Cerrato
mattia.cerrato@unito.it
Università di Torino, Computer Science Department, Torino, Italy

Roberto Esposito
roberto.esposito@unito.it
Università di Torino, Computer Science Department, Torino, Italy

Laura Li Puma
laura.lipuma@intesasanpaolo.com
Intesa Sanpaolo Innovation Center, Torino, Italy

ABSTRACT

The recent surge in interest for Deep Learning (motivated by its exceptional performance on longstanding problems) made Neural Networks a very appealing tool for many actors in our society. One issue in this shift of interest is that Neural Networks are very opaque objects and it is often hard to make sense of their predictions. In this context, research efforts have focused on building fair representations of data which display little to no correlation with regard to a sensitive attribute $s$. In this paper we build onto a domain adaptation neural model by augmenting it with a "noise conditioning" mechanism which we show is instrumental in obtaining fair (i.e., non-correlated with $s$) representations. We provide experiments on standard datasets showing the effectiveness of the noise conditioning mechanism in helping the networks to ignore the sensitive attribute.

ACM Reference Format:

Mattia Cerrato, Roberto Esposito, and Laura Li Puma. 2020. Constraining Deep Representations with a Noise Module for Fair Classification. In The 35th ACM/SIGAPP Symposium on Applied Computing (SAC '20), March 30-April 3, 2020, Brno, Czech Republic. ACM, New York, NY, USA, 3 pages. https://doi.org/10.1145/3341105.3374090

1 INTRODUCTION

Artificial intelligence in general, and machine learning in particular, is becoming a pervasive technology adopted by large corporations as well as by small businesses [7]. This growing interest in these technologies is raising increasing concerns about their fairness, motivated by the fact that they could possibly be leveraged to perpetrate and justify discriminatory behavior.

In this context, deep learning techniques appear to be at odds with the aforementioned trend. Deep learning models have shown impressive results on many pattern matching and machine perception problems [4] and, as a result, businesses and institutions have shown a growing interest in employing neural networks to further automate and innovate their decision-making processes; however, a longstanding problem with deep neural architectures is their opacity, owing to their sheer number of trainable parameters.

One possible way to cope with this issue, in the absence of full explainability, is to ensure fairness in the model. Defining a classifier's fairness in a precise, formal way has received considerable attention [5, 8-10]. In this work we leverage neural networks to learn a non-linear mapping from the original data space into a feature space in which information about the sensitive feature $s$ is absent. We argue that such a representation is inherently fair.

Instead of explicitly optimizing for a given discrimination measure as in [9], we follow Xie et al. [8] and employ two sub-networks, $Y$ and $S$, which are optimized to predict the target and sensitive variables respectively (see Figure 1). The parameters of the model are optimized so that the sensitive attribute is predicted badly, while still allowing for accurate predictions of the target variable.

2 MOTIVATION

Building fair representations through neural networks has been recently proposed in [2, 5]. In these works the neural network is fed with some input data and is trained to learn a representation that discards all information regarding some attribute. Our model is based on Ganin et al.'s work [1, 2] and tries to achieve decorrelated representations via a loss function $L = L_y - \lambda L_s$, built as the combination of a term $L_y$ that penalizes errors over the target variable $y$, and a negative term $L_s$ that penalizes accuracy over $s$. Ganin et al. apply their Domain Adversarial training framework to various domain adaptation datasets, showing state-of-the-art performance.

In domain adaptation, learning algorithms are evaluated on their ability to learn features that adapt to other similar, related datasets; on the other hand, state-of-the-art fair classification models produce representations which have little to no correlation with the sensitive attribute $s$. Therefore, we argue that simply applying domain adaptation algorithms in the fair classification context may not result in optimal performance w.r.t. the fairness of the obtained representations. Our experiments in Section 4 indeed show that the standard Domain Adversarial learning algorithm does not guarantee the removal of all information about the sensitive attribute. In the following section we introduce our noise conditioning module, which is instrumental in obtaining truly fair representations.

3 PROPOSED MODEL

Let us consider a dataset $D = \{(x_i, s_i, y_i),\ i \in \{1 \dots N\}\}$, where the $x_i \in \mathbb{R}^n$ are vectors describing the non-sensitive attributes of our problem, the $s_i \in \mathbb{R}^m$ are vectors describing the sensitive attributes, and the $y_i$ are the target values.

We aim at training a neural network model $R$ so that the output of $R(x)$ can be used as a useful and fair representation for building classifiers predicting $y$ values. We say that a representation $r$ is fair iff $s$ cannot be predicted using only $r$, and we say that it is useful iff $y$ can be accurately predicted.


To achieve this goal, as proposed in Ganin et al. [1, 2], we leverage an auxiliary neural network $Y$ to predict the $y$ variable, and another auxiliary network $S$ to predict the $s$ variable. In the following, $\theta_Y$, $\theta_S$, and $\theta_R$ are the parameters of the $Y$, $S$, and $R$ models respectively.

The network $R$ is trained so as to maximize the performance of network $Y$ while minimizing the performance of $S$. Each iteration step optimizes $S$ so as to improve its predictions of the sensitive variable, optimizes $Y$ so as to improve its predictions of the target variable, and optimizes $R$ so as to build the best possible representation for $Y$ and the worst possible one for $S$.

The overall training objective can therefore be written as follows:

$$\hat{\theta}_R, \hat{\theta}_Y, \hat{\theta}_S = \operatorname*{arg\,min}_{\theta_R, \theta_Y} \left[ L\big(y, Y(R(x))\big) - \lambda \min_{\theta_S} L\big(s, S(R(x))\big) \right]$$

where $\lambda$ is a "fairness importance" parameter employed to balance the importance of decorrelating with respect to $s$ against accurately predicting $y$. As should be apparent from the formulation, lower $\lambda$ values make the system more tolerant of unfair behavior (setting it to $0$ makes the system behave as a regular neural classifier predicting $y$).

We note that the optimization problem is complicated by the interaction between the minimization over the parameters $\theta_R$ and the inner optimization over $\theta_S$, which uses the $R$ network based on those parameters. As suggested in [1, 2], the backpropagation algorithm can be easily modified to cope with this objective by changing the sign of the gradients on $\theta_S$ after updating the weights of network $S$ (so that the network $R$ is updated using the inverted gradient, thus worsening its performance over $S$). Figure 1 gives a graphical representation of the model.

Figure 1: Fairness model. Adjacent to each component of the network we report the gradients used to update the parameters of the component. Note that the direction of $\nabla L(s, S(R(x)))$ is reversed when applied to $R$.
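
To make the reversed-gradient trick concrete, the sketch below shows one way to implement it as a gradient reversal layer. This is a minimal illustration assuming a PyTorch implementation; the names `GradientReversal` and `grad_reverse` are illustrative and do not necessarily match the released code.

```python
import torch


class GradientReversal(torch.autograd.Function):
    """Identity in the forward pass; multiplies the gradient by -lambda
    in the backward pass, so that R is updated to worsen S."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # The sign flip happens only here: S itself receives ordinary
        # gradients and keeps learning to predict s.
        return -ctx.lambda_ * grad_output, None


def grad_reverse(x, lambda_=1.0):
    return GradientReversal.apply(x, lambda_)
```

Feeding $S$ with `grad_reverse(R(x))` while $Y$ receives $R(x)$ directly lets a single ordinary backward pass realize the adversarial objective above.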

The model presented so far is equivalent to the one introduced by Ganin et al. [1, 2]. However, as discussed in Section 2, decorrelation with respect to the sensitive variable can prove very difficult, since it requires combining many possibly meaningful features to mask and dampen signals correlated with $s$. To overcome this difficulty, we propose a new noise layer that simplifies the task of decorrelating with respect to $s$.

Figure 2: Component $k$ of a noise module inserted after layer $i$. The number of components $k$ is equal to the number of neurons in layer $i$. Each neuron's activation $a^{(i)}_k$ is multiplied by a weight $w_{1,k}$ which can be set to dampen or augment the feature's value; the amount of noise added to the same neuron's activation is learned via a separate parameter $w_{2,k}$.

Given a layer $i$ with outputs $(a^{(i)}_k)_{k=1}^{n}$, a noise layer at level $i+1$ computes the outputs $a^{(i+1)}$ as follows:

$$a^{(i+1)} = a^{(i)} \odot w_1 + \eta \odot w_2$$

where $\odot$ is the Hadamard product, $w_1$ and $w_2$ are vectors of learnable weights, and $\eta$ is a source of noise providing random vectors in $\mathbb{R}^n$. Figure 2 shows the neural module responsible for computing the $k$-th component of the $a^{(i+1)}$ output. This parametrization provides the network with an efficient feature-wise mechanism for dampening problematic features which are correlated with $s$ without resorting to searching for combinations with other attributes; at the same time, the addition of an adjustable amount of additive noise can help lower the amount of mutual information between the learned representations $\hat{x}$ and the sensitive attribute $s$.

An issue with this approach is that a fundamental assumption of the gradient descent algorithm is that the employed loss function $L$ is a function of the input data: this assumption is violated if one samples from a distribution at each forward pass of the network, as $L$ would not be functional (univalent), i.e., it would no longer be a function, associating different outputs to the same input. We circumvent this problem by only sampling once, at the start of the learning process; while the obtained samples have no correlation with $s$ and can thus be employed to partially mask the value of a feature, they are fixed, and therefore the functional property of the loss function still holds.
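
A possible realization of the noise module, sketched under the same PyTorch assumption as above. Sampling $\eta$ once per feature and freezing it in a buffer is one reading of the sampling-once strategy; the exact shape of the noise (per feature versus per example) is an implementation choice not fixed by the text.

```python
import torch
import torch.nn as nn


class NoiseModule(nn.Module):
    """Feature-wise rescaling plus fixed additive noise:
    a^{(i+1)} = a^{(i)} * w1 + eta * w2 (element-wise products)."""

    def __init__(self, n_features):
        super().__init__()
        self.w1 = nn.Parameter(torch.ones(n_features))   # dampen or augment each feature
        self.w2 = nn.Parameter(torch.zeros(n_features))  # learned amount of noise
        # eta is sampled once, at construction time, and never resampled,
        # so the loss remains a deterministic function of the input.
        self.register_buffer("eta", torch.randn(n_features))

    def forward(self, a):
        return a * self.w1 + self.eta * self.w2
```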

4 EXPERIMENTS

We evaluated our model by running experiments on "fair" classification datasets widely employed in the literature, namely the Adult, German, and Bank datasets (from the UCI ML repository [6]) and the COMPAS dataset [3]. An archive containing the results of all experiments, as well as the software needed to replicate the experimentation (or to run new experiments), can be downloaded from https://github.com/ml-unito/fair-networks.

The choices of network architectures follow the encoder network sizes employed by Louizos et al. [5], which were motivated by the sizes of the datasets. Specifically, for the Adult and Bank datasets the networks $R$, $Y$, and $S$ each consist of a single hidden layer with 100 neurons. For the German and COMPAS datasets, $R$, $Y$, and $S$ have a single hidden layer with 60 neurons.
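
Putting the pieces together, one training step for the Adult/Bank configuration could look as follows. The sketch reuses `grad_reverse` and `NoiseModule` from above; the input width, the number of classes, and the choice of optimizer are hypothetical placeholders, not the exact experimental settings.

```python
import torch
import torch.nn as nn

n_in, n_hidden = 108, 100  # hypothetical input width; hidden size from the text

# R produces the noise-conditioned representation; Y and S are the two heads.
R = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU(), NoiseModule(n_hidden))
Y = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 2))
S = nn.Sequential(nn.Linear(n_hidden, n_hidden), nn.ReLU(), nn.Linear(n_hidden, 2))

opt = torch.optim.Adam([*R.parameters(), *Y.parameters(), *S.parameters()])
ce = nn.CrossEntropyLoss()

def train_step(x, y, s, lambda_=1.0):
    rep = R(x)
    # S sees the representation through the reversal layer, so a single
    # backward pass trains S to predict s while pushing R to prevent it.
    loss = ce(Y(rep), y) + ce(S(grad_reverse(rep, lambda_)), s)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```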

In order to pick the best $\lambda$ value, we repeated the learning process using a few different values (namely, we experimented with $\lambda \in \{0, 0.5, 1, 2\}$) and picked the most promising one using a simple fairness/predictiveness tradeoff metric, evaluated on a holdout set, that takes into account both the loss in predictivity and the loss in fairness. Lastly, we used the representations to train four general-purpose classifiers on the obtained decorrelated representations $\hat{X}$: logistic regression, random forests, support vector machines with Gaussian kernels, and decision trees. The resulting classifiers have been used to predict both the $s$ and $y$ variables over an independent test set. In the following we focus on the results from logistic regression and random forests, since the other results provide similar insights. We set the hyperparameters for the Variational Fair Autoencoder following Louizos et al.'s choices [5].
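
The evaluation step can be sketched as follows with scikit-learn classifiers; the function name `audit` and the variable names are illustrative, and the decorrelated representations $\hat{X}$ are assumed to have been exported from the trained $R$ network.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def audit(X_hat_tr, X_hat_te, y_tr, y_te, s_tr, s_te):
    """Train off-the-shelf classifiers on the decorrelated representations
    and report test accuracy on both the target y and the sensitive s."""
    results = {}
    for name, make_clf in [("logreg", lambda: LogisticRegression(max_iter=1000)),
                           ("rf", lambda: RandomForestClassifier())]:
        results[name] = {
            "acc_y": make_clf().fit(X_hat_tr, y_tr).score(X_hat_te, y_te),
            # For a fair representation this should stay close to the
            # majority-class rate of s (the dotted baseline in the figures).
            "acc_s": make_clf().fit(X_hat_tr, s_tr).score(X_hat_te, s_te),
        }
    return results
```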

We observe that our methodology leads to representations which are both fair and discriminative on all datasets, as visible in Figures 3 through 6. In these figures, the solid line refers to the accuracy achievable on the original representation of the data when predicting $y$, while the dotted line indicates the random-chance accuracy for $s$. Thus, a classifier trained to predict $s$ from a perfectly fair representation would display performance matching the dotted line, while the performance of a classifier trained to predict $y$ from a representation which has not lost any of its discriminative information would match the solid line.

On the Adult dataset (Figure 3), the fair representations have achieved true invariance w.r.t. $s$, being close to or slightly below the random-chance baseline. On the other hand, the standard adversarial strategy is unable to account for all the correlations between the attributes and the sensitive attribute. The Variational Fair Autoencoder achieves representations with a similar level of fairness compared to our methodology, at the cost of reduced performance when predicting $y$. Experiments on the COMPAS data provide similar insights. The German dataset displays high levels of class imbalance, which lead to non-discriminative representations for all the strategies we tested; however, fairness was still preserved. As for the Bank dataset, all of the methodologies employed are able to learn consistently fair and discriminative representations.

Figure 3: Results on the Adult dataset. In this figure and all the following ones, the solid line refers to the accuracy attained on the original representation when predicting $y$, whereas the dotted line represents the majority-class rate for $s$. Under our definition, a perfectly fair representation does not allow for performance higher than the dotted-line baseline when predicting $s$.

5 CONCLUSIONS

In this work we have shown how to augment the Domain Adversarial Learning algorithm by Ganin et al. [2] with a noise conditioning layer. Experimental results show that our contribution is instrumental in achieving truly fair representations.

Figure 4: Results on the COMPAS dataset.

Figure 5: Results on the German dataset.

Figure 6: Results on the Bank dataset.

REFERENCES

[1] Ganin, Y., Lempitsky, V.: Unsupervised Domain Adaptation by Backpropagation. In: Proceedings of the 32nd International Conference on Machine Learning (2015)
[2] Ganin, Y., Ustinova, E., Ajakan, H., Germain, P., Larochelle, H., Laviolette, F., Marchand, M., Lempitsky, V.: Domain-adversarial training of neural networks. The Journal of Machine Learning Research 17(1), 2096-2030 (2016)
[3] Angwin, J., Larson, J., Mattu, S., Kirchner, L.: Machine Bias. ProPublica (2016)
[4] LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436-444 (2015)
[5] Louizos, C., Swersky, K., Li, Y., Welling, M., Zemel, R.S.: The Variational Fair Autoencoder. CoRR abs/1511.00830 (2015)
[6] Newman, C.B.D., Merz, C.: UCI repository of machine learning databases (1998), http://archive.ics.uci.edu/ml/index.php
[7] Mittal, N., Kuder, D., S.H.: AI-fueled organizations: Reaching AI's full potential in the enterprise. Deloitte Insights (January 2019)
[8] Xie, Q., Dai, Z., Du, Y., Hovy, E., Neubig, G.: Controllable invariance through adversarial feature learning. In: Advances in Neural Information Processing Systems 30, pp. 585-596 (2017)
[9] Zafar, M.B., Valera, I., Gomez Rodriguez, M., Gummadi, K.P.: Fairness beyond disparate treatment & disparate impact: Learning classification without disparate mistreatment. In: Proceedings of the 26th International Conference on World Wide Web (2017)
[10] Zemel, R., Wu, Y., Swersky, K., Pitassi, T., Dwork, C.: Learning fair representations. In: International Conference on Machine Learning, pp. 325-333 (2013)
