Synthesizing Images on Perceptual Boundaries of ANNs for Uncovering and Modulating Individual Human Percepts

2024-10-02

Authors: Chen Wei*, Chi Zhang*, Jiachen Zou, Haotian Deng, Dietmar Heinke, Quanying Liu
*Chen Wei and Chi Zhang contributed equally to this work.
Supervisors: Quanying Liu, Dietmar Heinke


Introduction

Motivation:
Human perception varies with ambiguous stimuli, and ANNs struggle to capture this variability. Understanding it is key to predicting decision-making in uncertain situations. This study explores whether generating images based on ANN perceptual boundaries can reveal differences in human perception.

Method:
We developed a model that samples ANN boundaries to create images that induce varied human responses. These include stimuli that highlight perceptual differences and adversarial perturbations that subtly influence decisions. We tested these in a large-scale experiment, creating the varMNIST dataset.

Contribution:

  • Created a model generating images that provoke diverse human perceptions, forming the varMNIST dataset.
  • Improved predictions of human decision-making by aligning ANN variability with human data.
  • Uncovered individual differences in perception, supporting personalized AI development.

Method

Perceptual Boundaries of ANNs:
We sampled images along ANN decision boundaries to induce high perceptual variability, using two guidance methods (a minimal sketch follows the list):

  • Uncertainty Guidance: Maximizes entropy in an ANN’s classification to generate ambiguous images near its decision boundary.
  • Controversial Guidance: Maximizes the KL divergence between two ANN models, producing images that highlight differences in their classifications.
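
A minimal sketch of the two guidance objectives, assuming `model_a` and `model_b` are pretrained digit classifiers that return logits; the function names and details are illustrative, not the authors' code:

```python
import torch
import torch.nn.functional as F

def uncertainty_loss(model, x):
    """Negative entropy of the prediction: minimizing this pushes x
    toward the model's decision boundary (maximum ambiguity)."""
    log_p = F.log_softmax(model(x), dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1)
    return -entropy.mean()

def controversial_loss(model_a, model_b, x):
    """Negative KL divergence between two models' predictions: minimizing
    this yields images the two classifiers disagree on."""
    log_p_a = F.log_softmax(model_a(x), dim=-1)
    log_p_b = F.log_softmax(model_b(x), dim=-1)
    # KL(p_a || p_b); F.kl_div expects log-probs first, probs second.
    return -F.kl_div(log_p_b, log_p_a.exp(), reduction="batchmean")
```

Either loss can be minimized by gradient descent on the image itself or, as in the next subsection, used to guide a diffusion sampler.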

Diffusion Model Regularization:
To ensure natural-looking images, we used a classifier-free diffusion model that introduces prior information from the MNIST dataset. A reference constraint, enforced with an MSE loss, prevents mode collapse and ensures diverse, recognizable images.
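
A hedged sketch of one guided denoising step under this scheme, assuming a diffusion model exposing a `denoise(x_t, t)` estimate of the clean image and a reference MNIST image `x_ref`; the weights and step size are placeholders, not the paper's values:

```python
import torch

def guided_step(diffusion, x_t, t, guidance_loss_fn, x_ref,
                w_guide=1.0, w_ref=0.1, step=0.1):
    x_t = x_t.detach().requires_grad_(True)
    # Predicted clean image from the MNIST-trained diffusion prior
    # (this is what keeps the samples natural-looking).
    x0_pred = diffusion.denoise(x_t, t)
    # Guidance term: push the sample toward the ANN's perceptual boundary.
    loss = w_guide * guidance_loss_fn(x0_pred)
    # Reference constraint: MSE to a real digit prevents mode collapse.
    loss = loss + w_ref * torch.mean((x0_pred - x_ref) ** 2)
    grad = torch.autograd.grad(loss, x_t)[0]
    return (x_t - step * grad).detach()
```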

Human Experiments & Dataset Construction:
We filtered generated images based on entropy and KL divergence, then tested them in a digit recognition task with human participants. Based on response times and entropy thresholds, we finalized 4,741 images that form the varMNIST dataset.
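
A simplified version of the human-side filtering pass, assuming per-image human response counts and median response times are available; the thresholds are placeholders, not the criteria used to select the 4,741 images:

```python
import numpy as np

def response_entropy(counts):
    """Entropy of the human response distribution for one image."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def keep_image(counts, median_rt, ent_min=0.5, rt_max=2.0):
    # Keep images that evoke variable responses (high entropy) but are
    # still answered quickly enough to be recognizable rather than noise.
    return response_entropy(counts) >= ent_min and median_rt <= rt_max
```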

Figure 1: The framework for constructing the dataset.

Human Experiments and Behavioral Analysis

Entropy Distribution and Image Variability:

  • We used images generated by the uncertainty and controversial guidance methods in digit recognition tasks. The example images in Fig. 3 indicate that the generated images evoke varying degrees of perceptual variability, making entropy a useful metric for image filtering.
Figure 2: Entropy distribution of digit recognition by human subjects.

Human vs. Model Entropy Comparison:

  • Human response entropy correlated positively with model-predicted entropy, though the model’s predictions initially clustered near zero. After fine-tuning with human behavioral data, the correlation improved significantly, aligning model predictions more closely with human behavior.
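
The comparison can be reproduced in a few lines; this sketch assumes `human_probs` and `model_probs` are (n_images, 10) arrays of per-image response and predicted distributions (random placeholders here):

```python
import numpy as np
from scipy.stats import entropy, pearsonr

human_probs = np.random.dirichlet(np.ones(10), size=100)  # placeholder data
model_probs = np.random.dirichlet(np.ones(10), size=100)  # placeholder data

human_ent = entropy(human_probs, axis=1)  # behavior-calculated entropy
model_ent = entropy(model_probs, axis=1)  # model-predicted entropy
r, p = pearsonr(human_ent, model_ent)
print(f"Pearson r = {r:.3f}, p = {p:.2e}")
```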

Response Time (RT) Analysis:

  • A positive correlation was found between entropy and RT (Pearson correlation, p = 1.17e-17), suggesting that higher perceptual variability leads to longer decision times. This makes RT a valuable supplementary measure when filtering images for perceptual variability.
Figure 3: Quantitative analysis of behavioral measurements. (a) Positive correlation between behavior-calculated and model-predicted entropy. (b) Improvement in the correlation between behavior-calculated and model-predicted entropy after alignment. (c) Distribution of RT. (d) Positive correlation between RT and entropy.

Variability alignment between humans and networks

Fine-Tuning for Perceptual Alignment:

  • ANN models had low accuracy (0.2802) on varMNIST, revealing a gap in perceptual variability between humans and networks. Fine-tuning narrowed this gap (a training sketch follows the list):
    Population-level fine-tuning with data from all participants improved accuracy to 0.5569.
    Subject-level fine-tuning with individual data raised accuracy further to 0.6230, highlighting individual perceptual differences.
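
A hedged sketch of the fine-tuning step, assuming each batch pairs a varMNIST image with the corresponding human response distribution as a soft label; optimizer settings are illustrative:

```python
import torch
import torch.nn.functional as F

def finetune(model, loader, epochs=5, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, human_dist in loader:  # human_dist: (B, 10) soft labels
            log_p = F.log_softmax(model(images), dim=-1)
            # Cross-entropy against the human response distribution aligns
            # the model's output variability with observed behavior.
            loss = -(human_dist * log_p).sum(dim=-1).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

Population-level fine-tuning would pool responses across participants into `human_dist`; subject-level fine-tuning would use one participant's responses.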
Figure 4: Alignment of model and human behavior on the generated dataset. X-axis labels follow the format "Training dataset (Test dataset)": each model was trained on the training dataset and its accuracy computed on the test dataset.

Subject Clustering Analysis:

  • Participants were grouped into eight clusters based on behavior similarity. Models predicted in-cluster behavior more accurately than out-cluster, confirming significant perceptual differences across participants.
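
A minimal sketch of such a clustering, assuming `subject_resps` holds each subject's response distributions over a shared image set; the correlation-distance metric and hierarchical linkage are assumptions, only k = 8 comes from the text:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

n_subjects, n_images = 50, 200  # placeholder sizes
subject_resps = np.random.dirichlet(np.ones(10), size=(n_subjects, n_images))

flat = subject_resps.reshape(n_subjects, -1)
dist = pdist(flat, metric="correlation")  # behavior-similarity distance
labels = fcluster(linkage(dist, method="average"), t=8, criterion="maxclust")
print(np.bincount(labels)[1:])  # sizes of the eight clusters
```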
Figure 5: Subject clustering analysis. (a) Subject similarity matrix and clustering results. (b) Performance of the subject-finetuned model in predicting data from different groups.

Controversial stimuli between subjects unveil individual differences in perceptual variability

Generating Controversial Stimuli:

  • Using fine-tuned models, we generated controversial stimuli to explore individual perceptual differences, comparing two settings:
    Subject-subject testing: adversarial classifiers created stimuli that led two individuals to make different decisions, highlighting their perceptual differences.
    Subject-group testing: stimuli generated between an individual's fine-tuned model and a group model revealed differences between the individual and the collective.

Disagreement Patterns:

  • We identified typical patterns in controversial stimuli, called disagreement patterns. For instance, some subjects consistently disagreed on digits like 3 and 7, or 3 and 8. These patterns show that even in seemingly objective tasks like digit recognition, individual differences and preferences exist.
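
One simple way to surface such patterns is to tabulate pairwise disagreements; this sketch assumes `preds_a` and `preds_b` are two subjects' (or their models') predicted digits on the same controversial stimuli:

```python
import numpy as np

preds_a = np.random.randint(0, 10, size=500)  # placeholder predictions
preds_b = np.random.randint(0, 10, size=500)  # placeholder predictions

# Entry (i, j) counts stimuli read as digit i by A and digit j by B.
disagreement = np.zeros((10, 10), dtype=int)
np.add.at(disagreement, (preds_a, preds_b), 1)

# Off-diagonal peaks (e.g., 3 vs 7, or 3 vs 8) are disagreement patterns.
off_diag = disagreement.copy()
np.fill_diagonal(off_diag, 0)
i, j = np.unravel_index(off_diag.argmax(), off_diag.shape)
print(f"Most common disagreement: A reads {i}, B reads {j} ({off_diag[i, j]} stimuli)")
```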
Figure 6: Overview of patterns among different models in digit classification. (a) Example of disagreement patterns between different models. (b) Distribution of KL divergences between MNIST, varMNIST, and controversial samples. (c) Conceptual diagram of disagreement patterns.

Discussion

Key Findings:

  • We generated stimuli from ANN perceptual boundaries, effectively evoking diverse human perceptual experiences.
  • Controversial stimuli revealed individual differences in human perception, advancing understanding of variability between humans and AI.

Advantages Over Prior Work:

  • Unlike previous studies that focused on either humans or ANNs, our method influences both, allowing for a more comprehensive analysis of perceptual boundaries.
  • The use of diffusion models increases image naturalness and sampling flexibility, enhancing human-AI perceptual alignment.

Broader Impact:

  • Our framework enables personalized perceptual modulation, making studies on human perception more efficient.
  • Combining controversial and adversarial stimuli with diffusion models significantly improves the generation of impactful stimuli.

Limitations and Future Work:

  • Expanding the dataset to natural images and diverse participants will better capture human variability.
  • AI-human alignment can be further improved, for example through optimal experimental design.

This paper has been submitted to ICLR 2025.