[AI Art] Midjourney Advanced: Rule of Thirds Composition Explained https://blog.csdn.net/2201_75539691?type=blog
Composition is a fundamental concept in photography, painting, design, and the visual arts in general. It refers to how an artist arranges elements on a two-dimensional surface, including shapes, lines, colors, textures, and space, to achieve a particular visual effect and artistic expression.
Rule of Thirds Composition
The rule of thirds, also referred to in the original as golden-section composition, is one of the most commonly used composition guidelines. Its basic principle is to divide the frame into nine equal parts with two vertical and two horizontal lines, and to place the subject or other important elements on those four lines or at their intersections.
Balance: the rule of thirds distributes the visual weight of the frame evenly, making the image look more stable and balanced. Aesthetic appeal: it matches human viewing habits and aesthetic conventions, so the image reads as more harmonious and pleasing.
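The nine-part grid described above is easy to compute directly. The helper below (an illustrative sketch, not part of any Midjourney API) returns the two vertical lines, the two horizontal lines, and the four "power point" intersections for a frame of a given size:

```python
def rule_of_thirds_points(width, height):
    """Return the vertical third lines, horizontal third lines, and the
    four intersection ("power") points for a frame of the given size."""
    xs = [width / 3, 2 * width / 3]    # two vertical dividing lines
    ys = [height / 3, 2 * height / 3]  # two horizontal dividing lines
    points = [(x, y) for x in xs for y in ys]
    return xs, ys, points

# Example: a 1920x1080 (16:9) frame.
xs, ys, pts = rule_of_thirds_points(1920, 1080)
print(xs)   # [640.0, 1280.0]
print(ys)   # [360.0, 720.0]
print(pts)  # the four power points
```

Placing the subject near one of the four returned points, rather than dead center, is the practical core of the guideline.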
In landscape images, the rule of thirds is an effective way to set the proportion between sky and ground. In portraits, it makes the placement of the subject relative to the background more coordinated. In architectural and still-life images, it brings the relationship between the subject and its surroundings into better harmony.
Original prompt:
serene forest, river flowing through mountains, vibrant autumn colors, sun setting, reflections on water, ultra detailed, cinematic light, 8k --ar 16:9
Original prompt + Rule of Thirds composition:
serene forest, river flowing through mountains, vibrant autumn colors, sun setting, reflections on water, Rule of Thirds composition, ultra detailed, cinematic light, 8k --ar 16:9
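The only difference between the two prompts above is the inserted "Rule of Thirds composition" keyword. As an illustration (a hypothetical helper, not part of Midjourney itself), this kind of prompt assembly can be scripted so the composition keyword and parameters stay consistent across experiments:

```python
def build_prompt(subject, composition=None, extras=(), aspect_ratio="16:9"):
    """Assemble a Midjourney-style prompt: comma-separated keywords
    followed by the --ar aspect-ratio parameter."""
    parts = [subject]
    if composition:
        parts.append(composition)  # e.g. "Rule of Thirds composition"
    parts.extend(extras)
    return ", ".join(parts) + f" --ar {aspect_ratio}"

base = "serene forest, river flowing through mountains"
print(build_prompt(base, "Rule of Thirds composition",
                   ("ultra detailed", "8k")))
# serene forest, river flowing through mountains, Rule of Thirds composition, ultra detailed, 8k --ar 16:9
```

Keeping the base prompt fixed and toggling only the composition keyword makes before/after comparisons like the one above reproducible.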
In this article we covered the principle, characteristics, and practical applications of rule-of-thirds composition (also called golden-section composition). By dividing the frame into nine equal parts and placing the subject or key elements on the dividing lines or their intersections, you can markedly improve an image's balance and visual appeal. The rule of thirds is widely used in landscape, portrait, and architectural scenes; it naturally guides the viewer's eye and strengthens the visual impact of the frame. When working with AI art tools such as Midjourney, adding a prompt keyword like "Rule of Thirds composition" steers the model toward this layout, producing images that better match human aesthetic habits. With the analysis and examples in this article, readers should be able to understand this composition technique more deeply and apply it flexibly to improve the expressiveness of their own work.
Appendix: the PyTorch neural style transfer script attached to the original post (VGG19 feature extraction with a Gram-matrix style loss), reformatted for readability, with the image-resizing bug fixed and the device selection made portable:

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as transforms
from torchvision.models import vgg19
from PIL import Image
import matplotlib.pyplot as plt


class StyleTransferModel(torch.nn.Module):
    """Frozen VGG19 feature extractor for neural style transfer."""

    def __init__(self):
        super().__init__()
        self.vgg = vgg19(pretrained=True).features
        for param in self.vgg.parameters():
            param.requires_grad_(False)

    def forward(self, x):
        # Layers used for content ('conv4_2') and style (all the others).
        layers = {'0': 'conv1_1', '5': 'conv2_1', '10': 'conv3_1',
                  '19': 'conv4_1', '21': 'conv4_2', '28': 'conv5_1'}
        features = {}
        for name, layer in self.vgg._modules.items():
            x = layer(x)
            if name in layers:
                features[layers[name]] = x
        return features


def load_image(img_path, max_size=400, shape=None):
    image = Image.open(img_path).convert('RGB')
    size = max_size if max(image.size) > max_size else max(image.size)
    if shape is not None:
        size = shape  # match the content image's (h, w) exactly
    in_transform = transforms.Compose([
        transforms.Resize(size),  # accepts an int or an (h, w) pair
        transforms.ToTensor(),
        transforms.Normalize((0.485, 0.456, 0.406),
                             (0.229, 0.224, 0.225))])
    return in_transform(image)[:3, :, :].unsqueeze(0)


def im_convert(tensor):
    """Undo normalization and convert a tensor to a displayable array."""
    image = tensor.to('cpu').clone().detach().numpy().squeeze()
    image = image.transpose(1, 2, 0)
    image = image * (0.229, 0.224, 0.225) + (0.485, 0.456, 0.406)
    return image.clip(0, 1)


def gram_matrix(tensor):
    _, d, h, w = tensor.size()
    tensor = tensor.view(d, h * w)
    return torch.mm(tensor, tensor.t())


device = 'cuda' if torch.cuda.is_available() else 'cpu'
content = load_image('content.jpg').to(device)
style = load_image('style.jpg', shape=content.shape[-2:]).to(device)

model = StyleTransferModel().to(device)
style_features = model(style)
content_features = model(content)
style_grams = {layer: gram_matrix(style_features[layer])
               for layer in style_features}

# The target image starts as a copy of the content image and is optimized.
target = content.clone().requires_grad_(True).to(device)
style_weights = {'conv1_1': 1.0, 'conv2_1': 0.8, 'conv3_1': 0.5,
                 'conv4_1': 0.3, 'conv5_1': 0.1}
content_weight = 1e4
style_weight = 1e2
optimizer = torch.optim.Adam([target], lr=0.003)

for i in range(1, 3001):
    target_features = model(target)
    content_loss = F.mse_loss(target_features['conv4_2'],
                              content_features['conv4_2'])
    style_loss = 0
    for layer in style_weights:
        target_feature = target_features[layer]
        target_gram = gram_matrix(target_feature)
        layer_style_loss = style_weights[layer] * F.mse_loss(
            target_gram, style_grams[layer])
        _, c, h, w = target_feature.shape
        style_loss += layer_style_loss / (c * h * w)
    total_loss = content_weight * content_loss + style_weight * style_loss
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    if i % 500 == 0:
        print('Iteration {}, Total loss: {}'.format(i, total_loss.item()))

plt.imshow(im_convert(target))
plt.axis('off')
plt.show()
```