Light and shadow are a fundamental part of an image's visual language. They shape a picture's atmosphere, emotion, and structure, making them an essential tool for expressing ideas and feelings.
Hard light is light that strikes the subject directly, without any scattering or diffusion. It casts pronounced, sharp-edged shadows, producing high contrast with a clear boundary between lit and shaded areas, which emphasizes the subject's texture and three-dimensionality. Direct midday sunlight is a typical example of hard light.
Soft light is light that has been scattered or diffused before reaching the subject. It casts soft-edged shadows with smooth transitions, illuminates the scene evenly, and reduces harsh contrast, giving the photo a gentler look. Overcast daylight and light bounced off a diffuser panel are typical examples of soft light.
Original image:
still life,perfume,simple background --seed 0909
hard lighting:
still life,perfume,simple background,hard lighting --seed 0909
soft lighting:
still life,perfume,simple background,soft lighting --seed 0909
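The physical reason soft light produces blurred shadow edges is the size of the (effective) light source: a large or diffused source is partially visible from points near a shadow's edge, creating a penumbra. The toy 1-D model below (my own illustration, not part of the Midjourney workflow; all names such as `shadow_profile` are made up for this sketch) places an opaque edge between a light source and the ground and measures how wide the half-lit transition zone is for a tiny source versus a wide one.

```python
import numpy as np

# Geometry (illustrative assumption): a light source of width `source_w`
# at height 2.0, an opaque occluder occupying x < 0 at height 1.0, and
# illumination sampled on the ground (height 0).
def shadow_profile(source_w, n_samples=201, n_light=101):
    xs = np.linspace(-1.0, 1.0, n_samples)                      # ground positions
    lights = np.linspace(-source_w / 2, source_w / 2, n_light)  # points on the source
    profile = np.zeros_like(xs)
    for i, x in enumerate(xs):
        # a ray from (lx, 2) to (x, 0) crosses height y = 1 at (lx + x) / 2;
        # it is blocked there whenever that crossing lies in the occluder (x < 0)
        cross_x = (lights + x) / 2.0
        profile[i] = (cross_x >= 0.0).mean()  # fraction of the source still visible
    return xs, profile

def penumbra_width(xs, p):
    # width of the region that is neither fully lit nor fully dark
    mask = (p > 0.01) & (p < 0.99)
    return xs[mask].max() - xs[mask].min() if mask.any() else 0.0

xs, hard = shadow_profile(source_w=0.01)  # point-like source ~ hard light
xs, soft = shadow_profile(source_w=0.8)   # wide/diffused source ~ soft light
print(penumbra_width(xs, hard), penumbra_width(xs, soft))
```

The wide source yields a far broader penumbra, which is exactly the soft, gradual shadow edge that `soft lighting` asks the model to emulate, while the near-point source gives the crisp edge of `hard lighting`.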
high contrast, light and dark:
high contrast,light and dark,portrait of 1 elegant girl,indoor,simple background
bright high lighting:
bright high lighting, still life of fruit and vase, simple background, studio lighting, vibrant colors
dramatic light:
dramatic light,still life of fruit and vase, simple background, studio lighting, vibrant colors
shadow effect:
shadow effect,still life of fruit and vase, simple background, studio lighting, vibrant colors
stack shadow (stacked shadows):
stack shadow,still life of fruit and vase, simple background, studio lighting, vibrant colors
moody darkness:
moody darkness,portrait of 1 elegant girl,indoor,simple background
Misty foggy:
Misty foggy,portrait of 1 elegant girl,indoor,simple background
cold light:
cold light,portrait of 1 elegant girl,indoor,simple background
Sun light:
Sun light,portrait of 1 elegant girl,indoor,simple background,8k,high quality
Morning light:
Morning light,portrait of 1 elegant girl,indoor,simple background,8k,high quality
Golden hour light:
Golden hour light,portrait of 1 elegant girl,indoor,simple background,8k,high quality
Rays of shimmering light:
Rays of shimmering light,portrait of 1 elegant girl,indoor,simple background,8k,high quality
warm light:
warm light,portrait of 1 elegant girl,indoor,simple background,8k,high quality
Rembrandt Lighting:
The shadowed side of the subject's face is turned toward the camera, with the light illuminating three quarters of the face. Portraits lit this way closely resemble the figures in Rembrandt's paintings.
Rembrandt Lighting,portrait of 1 elegant girl,simple background,8k,high quality
Volumetric lighting:
Volumetric light, also known as light shafts or light beams, appears under specific conditions: when light passes through air containing suspended particles (such as dust, smoke, or fog), the light is scattered and forms visible beams. It typically occurs where a strong light source meets particle-laden air, for example sunlight filtering through forest foliage into mist or dust, or stage lights shining through smoke.
Volumetric lighting,forest,river,8k,ue5
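Renderers commonly approximate this scattering with a single-scattering ray march: stepping along the eye ray, each segment of the medium scatters some light toward the camera, attenuated by Beer-Lambert extinction over the distance already traveled. The sketch below is my own toy model of that idea (names like `beam_brightness` and the coefficient `sigma_s` are illustrative assumptions, not from the article or any renderer):

```python
import math

# Toy single-scattering march through a uniform medium. With zero particle
# density nothing scatters, so no beam is visible -- matching the description
# that volumetric light needs dust, smoke, or fog in the air.
def beam_brightness(density, length=10.0, steps=100, light=1.0, sigma_s=0.5):
    dt = length / steps
    transmittance = 1.0   # fraction of light surviving the path so far
    scattered = 0.0
    for _ in range(steps):
        # light scattered toward the eye at this step, dimmed by the medium
        # the eye ray has already crossed
        scattered += transmittance * sigma_s * density * light * dt
        # Beer-Lambert extinction over the step
        transmittance *= math.exp(-density * sigma_s * dt)
    return scattered

print(beam_brightness(0.0))  # clear air: the beam is invisible
print(beam_brightness(0.3))  # foggy air: the beam becomes visible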
Crepuscular Ray (god rays):
Used most often in religious subject matter.
Crepuscular Ray,8k
Bioluminescence:
Light emitted by living organisms; useful for evoking a mysterious, tranquil mood.
Bioluminescence,sea,forest,simple background,8k,high quality
Cinematic lighting:
Refers to the specialized lighting equipment and techniques used in film and television production, aimed at creating light that suits a film's story and visual style.
Cinematic lighting ,portrait of 1 elegant girl,simple background,8k,high quality
front lighting:
Front lighting means the light source points in roughly the same direction as the camera. It illuminates the subject fully and minimizes shadows, so details and colors are rendered clearly. It is common in news, commercial, and portrait photography, where it reveals facial detail and makes a product's color and texture look vivid. However, front lighting can leave a photo flat and lacking depth, because the absence of shadows makes the subject's shape and texture harder to convey.
front lighting,portrait of 1 elegant girl,simple background,8k,high quality
back lighting:
Backlighting means the light source shines toward the camera lens. The subject is then typically outlined by edge light (also called rim light or a backlit contour). Shooting against the light can create strong contrast, a dreamlike atmosphere, and added depth and dimensionality.
back lighting,portrait of 1 elegant girl,simple background,8k,high quality
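The flat look of front lighting and the rim outline of backlighting both fall out of simple Lambert shading (diffuse brightness proportional to N·L). The sketch below is my own illustration under that assumption (the function `lit_fraction` and the 0.05 brightness threshold are invented for this example): it shades a sphere viewed along -z and measures how much of the camera-facing surface catches light in each setup.

```python
import numpy as np

# Lambert-shading comparison on a sphere viewed along -z.
# Front light: light direction equals the view direction, so nearly the whole
# visible hemisphere is lit and shadows (depth cues) vanish.
# Back light: light comes from behind (slightly above), so only grazing
# normals near the silhouette catch it -- the bright rim outline.
def lit_fraction(light_dir, n=200):
    light_dir = np.asarray(light_dir, float)
    light_dir = light_dir / np.linalg.norm(light_dir)
    # sample surface normals of the camera-facing hemisphere (z > 0)
    u = np.linspace(-1, 1, n)
    xx, yy = np.meshgrid(u, u)
    mask = xx**2 + yy**2 < 1.0
    zz = np.sqrt(np.clip(1 - xx**2 - yy**2, 0, None))
    normals = np.stack([xx[mask], yy[mask], zz[mask]], axis=1)
    diffuse = np.clip(normals @ light_dir, 0, None)   # Lambert N.L
    return (diffuse > 0.05).mean()  # share of the visible surface that is lit

print(lit_fraction([0, 0, 1]))     # front light: almost everything visible is lit
print(lit_fraction([0, 0.3, -1]))  # back light: only a thin rim is lit
```

The front-lit sphere is bright nearly everywhere the camera can see (hence the "flat" look), while the backlit sphere is dark except for a narrow band along the silhouette, which is the rim light the `back lighting` keyword aims for.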
Lens flare:
Lens flare,8k
Radial lens flare:
Radial lens flare,8k
In this article, drawing on my own practice with Midjourney AI image generation, I have shared how controlling light and shadow can elevate a work's atmosphere and visual impact. By breaking down different lighting effects, such as hard light, soft light, volumetric light, and lens flare, I showed how these techniques strengthen a work's thematic expression and emotional resonance. I hope these notes help readers better master lighting and raise the artistry and appeal of their work.