AI-Powered Image Processing on Your Laptop with Stable Diffusion v1.5 - It's Easier Than You Think!
This script uses Stable Diffusion v1.5 from Hugging Face's diffusers library to generate variations of an image guided by a text prompt. Using torch and PIL, it preprocesses the input image, applies the AI-driven transformation, and saves the results.
You can clone this repo to get the code: https://github.com/alexander-uspenskiy/image_variations

Source code:
Python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image
import requests
from io import BytesIO
import os


def load_image(image_path, target_size=(768, 768)):
    """
    Load and preprocess the input image
    """
    if image_path.startswith('http'):
        # Download the image if a URL was given
        response = requests.get(image_path)
        image = Image.open(BytesIO(response.content))
    else:
        image = Image.open(image_path)

    # Resize and preserve aspect ratio
    image = image.convert("RGB")
    image.thumbnail(target_size, Image.Resampling.LANCZOS)

    # Create new image with padding to reach target size
    new_image = Image.new("RGB", target_size, (255, 255, 255))
    new_image.paste(image, ((target_size[0] - image.size[0]) // 2,
                            (target_size[1] - image.size[1]) // 2))
    return new_image


def generate_image_variation(
    input_image_path,
    prompt,
    model_id="stable-diffusion-v1-5/stable-diffusion-v1-5",
    num_images=1,
    strength=0.75,
    guidance_scale=7.5,
    seed=None
):
    """
    Generate variations of an input image using a specified prompt

    Parameters:
    - input_image_path: Path or URL to the input image
    - prompt: Text prompt to guide the image generation
    - model_id: Hugging Face model ID
    - num_images: Number of variations to generate
    - strength: How much to transform the input image (0-1)
    - guidance_scale: How closely to follow the prompt
    - seed: Random seed for reproducibility

    Returns:
    - List of generated images
    """
    # Set random seed if provided
    if seed is not None:
        torch.manual_seed(seed)

    # Load the model (half precision on GPU, full precision on CPU)
    device = "cuda" if torch.cuda.is_available() else "cpu"
    pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
        model_id,
        torch_dtype=torch.float16 if device == "cuda" else torch.float32
    ).to(device)

    # Load and preprocess the input image
    init_image = load_image(input_image_path)

    # Generate images
    result = pipe(
        prompt=prompt,
        image=init_image,
        num_images_per_prompt=num_images,
        strength=strength,
        guidance_scale=guidance_scale
    )
    return result.images


def save_generated_images(images, output_prefix="generated"):
    """
    Save the generated images with sequential numbering
    """
    # Make sure the output directory exists before saving
    os.makedirs("images-out", exist_ok=True)
    for i, image in enumerate(images):
        image.save(f"images-out/{output_prefix}_{i}.png")


# Example usage
if __name__ == "__main__":
    # Example parameters
    input_image = "images-in/Image_name.jpg"  # or URL
    prompt = "Draw the image in modern art style, photorealistic and detailed."

    # Generate variations
    generated_images = generate_image_variation(
        input_image,
        prompt,
        num_images=3,
        strength=0.75,
        seed=42  # Optional: for reproducibility
    )

    # Save the results
    save_generated_images(generated_images)
How it works:
- Loads and preprocesses the input image, accepting both local file paths and URLs.
- Converts the image to RGB and resizes it to 768×768 while preserving the aspect ratio, then pads it to the target size (for example, a 1024×768 photo is first scaled down to 768×576 and then centered on a white 768×768 canvas).
- Initializes the Stable Diffusion v1.5 img2img pipeline.
- Applies a text prompt to guide the transformation.
- Parameters such as strength (0-1) and guidance scale (higher = stricter prompt adherence) allow customization; see the sketch after this list.
- Saves the results to the images-out directory.
- Outputs the generated images with a sequential naming scheme (generated_0.png, generated_1.png, and so on).
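A quick way to get a feel for strength and guidance_scale is to sweep a few values and save each batch under its own prefix. Here is a minimal sketch that reuses the functions above; the input path is just a placeholder:

Python

# Sweep a few strength / guidance_scale combinations to compare results
# (placeholder input path; point it at your own image).
input_image = "images-in/Image_name.jpg"
prompt = "Draw the image in modern art style, photorealistic and detailed."

for strength in (0.4, 0.6, 0.8):
    for guidance_scale in (5.0, 7.5, 12.0):
        images = generate_image_variation(
            input_image,
            prompt,
            strength=strength,              # lower = stay closer to the input image
            guidance_scale=guidance_scale,  # higher = follow the prompt more strictly
            seed=42,                        # fixed seed so only the parameters change
        )
        save_generated_images(images, output_prefix=f"s{strength}_g{guidance_scale}")

Note that generate_image_variation reloads the pipeline on every call, so for larger sweeps you may want to load the model once and reuse it.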
You can use a prompt like the following to turn a photo of a person into a medieval king:

prompt = "Portray this person as a powerful king in a medieval setting, photorealistic and detailed."

Initial image:
Result:
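A call that would produce this kind of variation might look like the following sketch (the input file name is a placeholder):

Python

# Hypothetical input path; replace with the portrait you want to transform.
king_images = generate_image_variation(
    "images-in/portrait.jpg",
    "Portray this person as a powerful king in a medieval setting, "
    "photorealistic and detailed.",
    num_images=1,
    strength=0.75,
    seed=42
)
save_generated_images(king_images, output_prefix="medieval_king")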
Cons and pros

Cons:
- Limited by the relatively small model size.

Pros:
- Runs entirely locally (no cloud service required).
- Optional random seed for reproducible results.