In [ ]
# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved.

Render DensePose

DensePose refers to dense human pose representation: https://github.com/facebookresearch/DensePose. In this tutorial, we provide an example of using DensePose data in PyTorch3D.

This tutorial shows how to:

  • load a mesh and textures from densepose .mat and .pkl files
  • set up a renderer
  • render the mesh
  • vary the rendering settings such as lighting and camera position

Import modules

Ensure torch and torchvision are installed. If pytorch3d is not installed, install it using the following cell:

In [ ]
import os
import sys
import torch
need_pytorch3d=False
try:
    import pytorch3d
except ModuleNotFoundError:
    need_pytorch3d=True
if need_pytorch3d:
    if torch.__version__.startswith("2.2.") and sys.platform.startswith("linux"):
        # We try to install PyTorch3D via a released wheel.
        pyt_version_str=torch.__version__.split("+")[0].replace(".", "")
        version_str="".join([
            f"py3{sys.version_info.minor}_cu",
            torch.version.cuda.replace(".",""),
            f"_pyt{pyt_version_str}"
        ])
        !pip install fvcore iopath
        !pip install --no-index --no-cache-dir pytorch3d -f https://dl.fbaipublicfiles.com/pytorch3d/packaging/wheels/{version_str}/download.html
    else:
        # We try to install PyTorch3D from source.
        !pip install 'git+https://github.com/facebookresearch/pytorch3d.git@stable'
In [ ]
# We also install chumpy as it is needed to load the SMPL model pickle file.
!pip install chumpy
In [ ]
import os
import torch
import matplotlib.pyplot as plt
import numpy as np

# libraries for reading data from files
from scipy.io import loadmat
from PIL import Image
import pickle

# Data structures and functions for rendering
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    look_at_view_transform,
    FoVPerspectiveCameras, 
    PointLights, 
    DirectionalLights, 
    Materials, 
    RasterizationSettings, 
    MeshRenderer, 
    MeshRasterizer,  
    SoftPhongShader,
    TexturesUV
)

# add path for demo utils functions 
import sys
import os
sys.path.append(os.path.abspath(''))

Load the SMPL model

Download the SMPL model

  • Go to https://smpl.is.tue.mpg.de/download.php and sign up.
  • Download SMPL for Python Users and unzip.
  • Copy the male template file 'models/basicModel_m_lbs_10_207_0_v1.0.0.pkl' to the data/DensePose/ folder.
    • Rename the file to 'smpl_model.pkl', or change the string where it is commented in the code below (a small copy/rename sketch also follows this list).
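
If you prefer to script this step, here is a minimal sketch; the source path is an assumption about where the SMPL archive was unzipped and will likely differ on your machine.

import os
import shutil

# A sketch, not part of the original tutorial: copy the SMPL male template
# into place under data/DensePose/ and rename it to smpl_model.pkl.
src = "SMPL_python_v.1.0.0/smpl/models/basicModel_m_lbs_10_207_0_v1.0.0.pkl"  # assumed extraction path
dst = "data/DensePose/smpl_model.pkl"
os.makedirs(os.path.dirname(dst), exist_ok=True)
shutil.copyfile(src, dst)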

If running this notebook using Google Colab, run the following cell to fetch the texture and UV values and save them at the correct path.

In [ ]
# Texture image
!wget -P data/DensePose https://raw.githubusercontent.com/facebookresearch/DensePose/master/DensePoseData/demo_data/texture_from_SURREAL.png

# UV_processed.mat
!wget https://dl.fbaipublicfiles.com/densepose/densepose_uv_data.tar.gz
!tar xvf densepose_uv_data.tar.gz -C data/DensePose
!rm densepose_uv_data.tar.gz
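
If the downloads succeeded, both files should now be on disk; here is a quick existence check (not part of the original tutorial):

import os

# A sanity-check sketch: confirm the texture image and the UV data were
# saved to the paths that the cells below expect.
assert os.path.exists("data/DensePose/texture_from_SURREAL.png")
assert os.path.exists("data/DensePose/UV_Processed.mat")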

Load our texture UV data and our SMPL data, with some processing to correct the data values and format.

In [ ]
# Setup
if torch.cuda.is_available():
    device = torch.device("cuda:0")
    torch.cuda.set_device(device)
else:
    device = torch.device("cpu")
    
# Set paths
DATA_DIR = "./data"
data_filename = os.path.join(DATA_DIR, "DensePose/UV_Processed.mat")
tex_filename = os.path.join(DATA_DIR,"DensePose/texture_from_SURREAL.png")
# rename your .pkl file or change this string
verts_filename = os.path.join(DATA_DIR, "DensePose/smpl_model.pkl")


# Load SMPL and texture data
with open(verts_filename, 'rb') as f:
    data = pickle.load(f, encoding='latin1') 
    v_template = torch.Tensor(data['v_template']).to(device) # (6890, 3)
ALP_UV = loadmat(data_filename)
with Image.open(tex_filename) as image:
    np_image = np.asarray(image.convert("RGB")).astype(np.float32)
tex = torch.from_numpy(np_image / 255.)[None].to(device)

verts = torch.from_numpy((ALP_UV["All_vertices"]).astype(int)).squeeze().to(device) # (7829,)
U = torch.Tensor(ALP_UV['All_U_norm']).to(device) # (7829, 1)
V = torch.Tensor(ALP_UV['All_V_norm']).to(device) # (7829, 1)
faces = torch.from_numpy((ALP_UV['All_Faces'] - 1).astype(int)).to(device)  # (13774, 3)
face_indices = torch.Tensor(ALP_UV['All_FaceIndices']).squeeze()  # (13774,)
In [ ]
# Display the texture image
plt.figure(figsize=(10, 10))
plt.imshow(tex.squeeze(0).cpu())
plt.axis("off");

In DensePose, the body mesh is split into 24 parts. In the texture image, we can see the 24 parts separated out into individual (200, 200) images, one per body part. The convention in DensePose is that each face in the mesh is associated with a body part (given by the face_indices tensor above). The vertex UV values (in the range [0, 1]) for each face are specific to the (200, 200) texture tile of the body part that the face corresponds to, so we cannot use them directly with the entire texture map. We have to offset the vertex UV values depending on which body part the associated face belongs to. For example, in the code below part 7 sits in the second column and first row of the 4×6 atlas, so its vertices get a (u, v) offset of (0.25, 0.0).

In [ ]
# Map each face to a (u, v) offset
offset_per_part = {}
already_offset = set()
cols, rows = 4, 6
for i, u in enumerate(np.linspace(0, 1, cols, endpoint=False)):
    for j, v in enumerate(np.linspace(0, 1, rows, endpoint=False)):
        part = rows * i + j + 1  # parts are 1-indexed in face_indices
        offset_per_part[part] = (u, v)

U_norm = U.clone()
V_norm = V.clone()

# iterate over faces and offset the corresponding vertex u and v values
for i in range(len(faces)):
    face_vert_idxs = faces[i]
    part = face_indices[i]
    offset_u, offset_v = offset_per_part[int(part.item())]
    
    for vert_idx in face_vert_idxs:   
        # vertices are reused, but we don't want to offset multiple times
        if vert_idx.item() not in already_offset:
            # offset u value
            U_norm[vert_idx] = U[vert_idx] / cols + offset_u
            # offset v value
            # this also flips each part locally, as each part is upside down
            V_norm[vert_idx] = (1 - V[vert_idx]) / rows + offset_v
            # add vertex to our set tracking offsetted vertices
            already_offset.add(vert_idx.item())

# invert V values
V_norm = 1 - V_norm
In [ ]
# create our verts_uv values
verts_uv = torch.cat([U_norm[None],V_norm[None]], dim=2) # (1, 7829, 2)

# There are 6890 xyz vertex coordinates but 7829 vertex uv coordinates. 
# This is because the same vertex can be shared by multiple faces where each face may correspond to a different body part.  
# Therefore when initializing the Meshes class,
# we need to map each of the vertices referenced by the DensePose faces (in verts, which is the "All_vertices" field)
# to the correct xyz coordinate in the SMPL template mesh.
v_template_extended = v_template[verts-1][None] # (1, 7829, 3)
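
As a quick check (not part of the original tutorial), the offset UVs should all lie inside [0, 1], and the extended vertex tensor should pair one-to-one with the UV coordinates:

# A sanity-check sketch for the offsetting above.
assert 0.0 <= U_norm.min() and U_norm.max() <= 1.0
assert 0.0 <= V_norm.min() and V_norm.max() <= 1.0
assert v_template_extended.shape[1] == verts_uv.shape[1]  # both 7829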

Create our textured mesh

Meshes is a unique datastructure provided in PyTorch3D for working with batches of meshes of different sizes.

TexturesUV is an auxiliary datastructure for storing vertex uv and texture maps for meshes.

In [ ]
texture = TexturesUV(maps=tex, faces_uvs=faces[None], verts_uvs=verts_uv)
mesh = Meshes(v_template_extended, faces[None], texture)
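
To confirm the batched mesh was assembled as expected, here is a quick inspection sketch (not part of the original tutorial; both shapes follow from the tensors built above):

# The batch holds a single mesh: 7829 vertices and 13774 faces.
print(mesh.verts_padded().shape)  # expected: torch.Size([1, 7829, 3])
print(mesh.faces_padded().shape)  # expected: torch.Size([1, 13774, 3])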

Create a renderer

In [ ]
# Initialize a camera.
# World coordinates +Y up, +X left and +Z in.
R, T = look_at_view_transform(2.7, 0, 0) 
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)

# Define the settings for rasterization and shading. Here we set the output image to be of size
# 512x512. As we are rendering images for visualization purposes only we will set faces_per_pixel=1
# and blur_radius=0.0. 
raster_settings = RasterizationSettings(
    image_size=512, 
    blur_radius=0.0, 
    faces_per_pixel=1, 
)

# Place a point light in front of the person. 
lights = PointLights(device=device, location=[[0.0, 0.0, 2.0]])

# Create a Phong renderer by composing a rasterizer and a shader. The textured Phong shader will 
# interpolate the texture uv coordinates for each vertex, sample from a texture image and 
# apply the Phong lighting model
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(
        cameras=cameras, 
        raster_settings=raster_settings
    ),
    shader=SoftPhongShader(
        device=device, 
        cameras=cameras,
        lights=lights
    )
)

Render the textured mesh we created from the SMPL model and texture map.

In [ ]
images = renderer(mesh)
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");

Different view and lighting of the body

We can also change many other settings in the rendering pipeline. Here we:

  • change the viewing angle of the camera
  • change the position of the point light

In [ ]
# Rotate the person by increasing the elevation and azimuth angles to view the back of the person from above. 
R, T = look_at_view_transform(2.7, 10, 180)
cameras = FoVPerspectiveCameras(device=device, R=R, T=T)

# Move the light location so the light is shining on the person's back.  
lights.location = torch.tensor([[2.0, 2.0, -2.0]], device=device)

# Re render the mesh, passing in keyword arguments for the modified components.
images = renderer(mesh, lights=lights, cameras=cameras)
In [ ]
plt.figure(figsize=(10, 10))
plt.imshow(images[0, ..., :3].cpu().numpy())
plt.axis("off");

Conclusion

In this tutorial, we learned how to construct a textured mesh from DensePose model and UV data, as well as how to initialize a renderer and change the viewing angle and lighting of the rendered mesh.
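
As a small follow-up, you can also write the final render to disk. Here is a sketch using PIL and NumPy, both of which were imported earlier; the output filename is arbitrary.

# A sketch: clamp the rendered RGB values to [0, 1] and save the image.
out = images[0, ..., :3].cpu().numpy()
out = (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
Image.fromarray(out).save("densepose_render.png")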
