
Face to Ball – Lighting Transfer Using Deep Learning (With a Synthetic Dataset)

View Code

What

We built a deep learning pipeline that captures lighting effects from human facial images and transfers them onto gray ball images, mimicking how light interacts with a human face. The goal was to generate a synthetic dataset and train a model that learns lighting patterns well enough to realistically recreate them on a gray ball.


How

Dataset Creation
  • Generated a synthetic dataset of 1,600 image sets (a face and its corresponding ball) using tools such as FaceBuilder, Sketchfab, and TRELLIS.
  • Covered 16 identities under 100 unique lighting and angle conditions.
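As a sketch of how the 1,600 face–ball pairs could be indexed, here is a minimal Python example. The directory layout and filename pattern are assumptions for illustration, not the project's actual structure:

```python
from itertools import product

# Hypothetical naming scheme: one face render and one ball render
# per (identity, lighting condition) combination.
IDENTITIES = [f"id_{i:02d}" for i in range(16)]       # 16 synthetic identities
CONDITIONS = [f"light_{c:03d}" for c in range(100)]   # 100 lighting/angle setups

def build_pairs(identities, conditions):
    """Return (face_path, ball_path) tuples for every combination."""
    return [
        (f"faces/{ident}_{cond}.png", f"balls/{ident}_{cond}.png")
        for ident, cond in product(identities, conditions)
    ]

pairs = build_pairs(IDENTITIES, CONDITIONS)
print(len(pairs))  # 16 identities x 100 conditions = 1600 image sets
```

Pairing by a shared (identity, condition) key keeps each face aligned with the ball rendered under exactly the same light.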
Model Architecture
  • Explored U-Net and GANs; implemented a ResNet34-based U-Net.
  • Experimented with two training regimes:
    • Freezing encoder + training decoder.
    • Training both encoder and decoder.
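The two regimes differ only in which parameters receive gradients. A minimal PyTorch sketch, using a toy encoder/decoder as a stand-in for the real ResNet34-based U-Net:

```python
import torch.nn as nn

# Toy stand-in for the ResNet34-based U-Net, just to illustrate
# the two training regimes (not the actual project model).
class ToyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv2d(8, 3, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def freeze_encoder(model):
    """Regime 1: freeze the encoder; only the decoder is trained."""
    for p in model.encoder.parameters():
        p.requires_grad = False

model = ToyUNet()
freeze_encoder(model)  # regime 2 simply skips this call
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only decoder parameters remain trainable
```

Freezing the pretrained encoder keeps its learned features intact and trains faster; unfreezing everything lets the encoder adapt to the ball-relighting domain at the cost of more compute.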
Training
  • Used a hybrid loss: 0.5 × MSE + 0.5 × (1 − SSIM); also experimented with an L1 loss.
  • Trained for 500 epochs with batch size 50 and a learning rate between 1e-4 and 1e-5.
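The hybrid loss above can be sketched in NumPy. For brevity this uses a simplified single-window SSIM over the whole image, whereas a real pipeline would typically use a sliding-window SSIM:

```python
import numpy as np

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified whole-image SSIM for images scaled to [0, 1]."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

def hybrid_loss(pred, target):
    """0.5 * MSE + 0.5 * (1 - SSIM): pixel accuracy plus structure."""
    mse = ((pred - target) ** 2).mean()
    return 0.5 * mse + 0.5 * (1.0 - ssim_global(pred, target))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(hybrid_loss(img, img))        # identical images -> loss of 0
print(hybrid_loss(img, 1.0 - img))  # inverted image -> much larger loss
```

Mixing MSE with (1 − SSIM) penalizes both per-pixel error and loss of structural similarity, which tends to produce sharper, more perceptually faithful outputs than MSE alone.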

Results

  • The model learned to realistically transfer lighting from a face image to a gray ball image.
  • Even though it was trained only on synthetic data, the model successfully transferred lighting from real face images as well.
  • Demonstrated potential for applications in relighting, synthetic data generation, and 3D rendering.

Contributors
