MemoryBody: Wearing Digital Memories

Category: Prototype
Tags: #Interactive Art #PoseNet
Published: 2019-08-06

Data from past SNS posts quietly binds a person's identity, behavior, and future posts. By literally "dressing" users in their own past SNS post data, this interactive artwork lets them physically experience the relationship between digital memory and the self.


The core technologies behind this work are DensePose (Dense Human Pose Estimation In The Wild, Facebook AI Research, CVPR 2018) and the texture transfer technique built on top of it.

IUV Mapping with DensePose
While conventional pose estimation predicts keypoints (joint coordinates), DensePose maps every human pixel in an image to a coordinate on a 3D body surface model (the SMPL model). This correspondence is represented as an IUV image — a 3-channel image analogous to RGB.

| Channel | Meaning |
| --- | --- |
| I (part index) | Which of the 24 body parts the pixel belongs to (e.g. front/back torso, upper/lower arm, thigh/calf) |
| U | Horizontal coordinate on that part's surface (scaled to 0–255) |
| V | Vertical coordinate on that part's surface (scaled to 0–255) |

The model uses DensePose-RCNN, built on a ResNet-101 + FPN (Feature Pyramid Network) backbone.
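To make the channel layout concrete, here is a minimal sketch (the names are illustrative, not from the original code) that splits an IUV map into its I, U, V planes and derives a human-pixel mask, assuming the H×W×3 uint8 layout described above:

```python
import numpy as np

def split_iuv(iuv):
    """Split a DensePose IUV map (H, W, 3) uint8 into its three planes."""
    I = iuv[:, :, 0]            # part index: 0 = background, 1..24 = body parts
    U = iuv[:, :, 1] / 255.0    # horizontal surface coordinate, rescaled to [0, 1]
    V = iuv[:, :, 2] / 255.0    # vertical surface coordinate, rescaled to [0, 1]
    return I, U, V

# Toy example: a 2x2 "image" where a single pixel lies on part 1
iuv = np.zeros((2, 2, 3), dtype=np.uint8)
iuv[0, 0] = (1, 128, 64)        # part 1, U ~ 0.5, V ~ 0.25
I, U, V = split_iuv(iuv)
mask = I > 0                    # True only where a human part was detected
```

The mask is what lets the later transfer step leave background pixels untouched.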

Mapping to the Texture Atlas

As the texture source, we use the texture atlas (texture_from_SURREAL.png) bundled with the SURREAL dataset: a single 800×1200 px image holding a 200×200 px texture patch for each of the 24 body parts, arranged in a 4-column × 6-row grid. The atlas is decomposed into 24 individual part textures as follows:

```python
import numpy as np

# Tex_Atlas: the atlas image (texture_from_SURREAL.png), loaded beforehand.
# Column i (of 4) and row j (of 6) hold the 200x200 patch for part index 6*i + j.
TextureIm = np.zeros([24, 200, 200, 3])
for i in range(4):
    for j in range(6):
        TextureIm[6*i+j, :, :, :] = Tex_Atlas[(200*j):(200*j+200), (200*i):(200*i+200), :]
```

The TransferTexture function reads the (I, U, V) coordinate indicated by each pixel of the IUV map and writes the corresponding pixel from the texture atlas back onto the original image. By converting SNS post text or images into this atlas format, we can project "memories of the past" directly onto the user's body surface.
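A vectorized sketch of this per-part lookup follows. The function name and shapes mirror the original, but the body is illustrative: the actual DensePose notebook iterates similarly, and vertical-axis orientation conventions may differ.

```python
import numpy as np

def transfer_texture(texture_im, im, iuv):
    """Write per-part texture pixels onto `im` using DensePose IUV coordinates.

    texture_im : (24, 200, 200, 3) uint8, one 200x200 texture per body part
    im         : (H, W, 3) uint8 original image
    iuv        : (H, W, 3) uint8 DensePose output, channels (I, U, V) in 0..255
    """
    out = im.copy()
    I, U, V = iuv[:, :, 0], iuv[:, :, 1], iuv[:, :, 2]
    for part in range(1, 25):                       # part indices are 1..24
        ys, xs = np.where(I == part)
        if ys.size == 0:
            continue
        # rescale U, V from 0..255 to texel indices 0..199
        u_idx = (U[ys, xs].astype(np.float64) * 199.0 / 255.0).astype(int)
        v_idx = (V[ys, xs].astype(np.float64) * 199.0 / 255.0).astype(int)
        out[ys, xs] = texture_im[part - 1, v_idx, u_idx]
    return out

# Toy usage: a single pixel on part 1, textured with a constant white patch
tex = np.zeros((24, 200, 200, 3), dtype=np.uint8)
tex[0] = 255
img = np.zeros((2, 2, 3), dtype=np.uint8)
iuv = np.zeros((2, 2, 3), dtype=np.uint8)
iuv[0, 0] = (1, 100, 100)
result = transfer_texture(tex, img, iuv)
```

Because the lookup is gated on the part index, background pixels (I = 0) pass through unchanged.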

Processing Pipeline

  1. Capture photo (Web camera input via Google Colab + JavaScript)
  2. DensePose inference (generate IUV and INDS images with the ResNet-101 FPN model)
  3. Texture preparation (SNS post data → texture atlas for 24 body parts)
  4. Texture transfer (project texture onto the body surface using IUV coordinates)
  5. Output and display the composited image
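Step 3 is the part specific to this work. A minimal Pillow-based sketch of it (build_text_atlas and its layout choices are illustrative assumptions, not the original code) renders post snippets into 200×200 patches arranged to match the decomposition loop above:

```python
import numpy as np
from PIL import Image, ImageDraw

def build_text_atlas(posts, patch=200, cols=4, rows=6):
    """Rasterize SNS post snippets into a SURREAL-style 800x1200 atlas.

    Layout mirrors the decomposition loop above: column i, row j -> part 6*i + j.
    `posts` is cycled so every one of the 24 patches receives some text.
    """
    atlas = Image.new("RGB", (cols * patch, rows * patch), "black")
    draw = ImageDraw.Draw(atlas)
    for i in range(cols):
        for j in range(rows):
            part = 6 * i + j
            # default PIL bitmap font; a real build would wrap and style the text
            draw.text((i * patch + 10, j * patch + 10),
                      posts[part % len(posts)], fill="white")
    return np.asarray(atlas)

atlas = build_text_atlas(["first post", "another memory"])
```

The resulting array can be fed straight into the decomposition loop in place of Tex_Atlas.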

Tech Stack

  • DensePose (Facebook AI Research, CVPR 2018) — dense human pose estimation & IUV mapping
  • ResNet-101 + FPN — backbone network for DensePose
  • Detectron / Caffe2 — inference framework for DensePose
  • SURREAL texture atlas — texture mapping foundation for 24 body parts
  • OpenCV — image I/O and processing
  • Google Colaboratory — GPU execution environment

Source code is available on Google Colab.