Fig. 1: An example of a typical backdoor attack (adapted from Wang et al. (2019)). (IMAGE)
Caption
Figure 1(a) shows the visible distributed trigger; the target label is seven (7). Figure 1(b) shows the training data modified with this trigger, on which the model is then trained. At inference time, inputs without the trigger are classified correctly, while inputs containing the trigger are misclassified as the target label, as shown in Figure 1(c).
Credit: SUTD
Usage Restrictions: SUTD
License: Original content
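To make the poisoning mechanism in the caption concrete, the following is a minimal sketch, not the authors' code: it assumes MNIST-style images as float arrays in [0, 1] with shape (N, H, W), and a 4x4 white corner patch stands in for the distributed trigger shown in the figure. The function names (poison_dataset, apply_trigger), patch size, and poisoning fraction are illustrative assumptions.

import numpy as np

def poison_dataset(images, labels, target_label=7, poison_frac=0.1, seed=0):
    # Illustrative poisoning step: stamp a visible trigger onto a random
    # subset of the training images and relabel them to the target class.
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_frac)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # A 4x4 white patch in the bottom-right corner serves as the trigger
    # (an assumption; the actual trigger pattern varies by attack).
    images[idx, -4:, -4:] = 1.0
    labels[idx] = target_label
    return images, labels

def apply_trigger(image):
    # At inference, stamping the same trigger activates the backdoor,
    # causing the model to output the target label.
    triggered = image.copy()
    triggered[-4:, -4:] = 1.0
    return triggered

Training a classifier on the poisoned set then reproduces the behavior in Figure 1(c): clean inputs are classified normally, while any input passed through apply_trigger is steered toward the target label.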