
How to Automate Your Personal Workflow with Python

Build scripts that handle the tedious parts of your day

The Automation Mindset

Every task you do more than twice is a candidate for automation. Filing emails. Renaming files. Processing data. Generating reports. These tasks consume hours weekly—hours you’ll never get back.

Python makes automation accessible. The language is readable. The standard library handles common tasks. Third-party packages cover everything else. A few lines of Python can save hours of manual work.

The question isn’t whether to automate. The question is what to automate first.

My British lilac cat, Mochi, has automated her life efficiently. Breakfast happens at 6:30 AM without negotiation. Nap locations rotate on a predictable schedule. She’s optimized her routine to minimize effort and maximize outcomes. We should aspire to her operational excellence.

This article covers practical Python automation for personal workflows. Not abstract concepts—actual scripts you can adapt. By the end, you’ll have patterns for automating your repetitive tasks, whether you’re a developer, data professional, or anyone who works with computers.

Setting Up Your Automation Environment

Before writing scripts, establish a proper environment.

Python Installation

Use Python 3.11 or later. Installation varies by platform:

macOS: Use Homebrew (brew install python) or the official installer.

Windows: Use the official installer from python.org or Windows Store.

Linux: Usually pre-installed; update with your package manager.

Verify installation:

python3 --version

Virtual Environments

Keep automation scripts in a dedicated virtual environment. This isolates dependencies and prevents conflicts.

# Create automation environment
python3 -m venv ~/automation-env

# Activate it
source ~/automation-env/bin/activate  # macOS/Linux
# or
%USERPROFILE%\automation-env\Scripts\activate  # Windows (Command Prompt; use Activate.ps1 in PowerShell)

Add activation to your shell profile for convenience.

Project Structure

Create a directory for your automation scripts:

~/automation/
├── scripts/
│   ├── file_organizer.py
│   ├── email_processor.py
│   └── report_generator.py
├── config/
│   └── settings.yaml
├── logs/
│   └── automation.log
└── requirements.txt

Essential Packages

Start with these commonly useful packages:

pip install requests beautifulsoup4 pandas python-dotenv pyyaml schedule watchdog
  • requests: HTTP requests for APIs and web
  • beautifulsoup4: HTML parsing for web scraping
  • pandas: Data manipulation
  • python-dotenv: Environment variable management (see the sketch after this list)
  • pyyaml: Configuration files
  • schedule: Task scheduling
  • watchdog: File system monitoring
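
Several scripts later in this article read secrets from environment variables. With python-dotenv you can keep those in a local .env file instead of exporting them in your shell. A minimal sketch (the variable names match the Gmail script later on):

# .env (keep this file out of version control):
#   GMAIL_ADDRESS=you@example.com
#   GMAIL_APP_PASSWORD=your-app-password

import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ
print(os.environ.get("GMAIL_ADDRESS"))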

File Organization Automation

One of the most immediately useful automations: organizing files automatically.

Basic File Sorter

This script sorts files in a folder by type:

#!/usr/bin/env python3
"""Sort files in a directory by extension."""

from pathlib import Path
import shutil

# Configuration
SOURCE_DIR = Path.home() / "Downloads"
RULES = {
    "Images": [".jpg", ".jpeg", ".png", ".gif", ".webp", ".avif"],
    "Documents": [".pdf", ".doc", ".docx", ".txt", ".md"],
    "Spreadsheets": [".xlsx", ".xls", ".csv"],
    "Archives": [".zip", ".rar", ".7z", ".tar", ".gz"],
    "Videos": [".mp4", ".mov", ".avi", ".mkv"],
    "Audio": [".mp3", ".wav", ".flac", ".m4a"],
}

def get_category(extension: str) -> str:
    """Find category for a file extension."""
    for category, extensions in RULES.items():
        if extension.lower() in extensions:
            return category
    return "Other"

def organize_files():
    """Move files to appropriate folders."""
    for file_path in SOURCE_DIR.iterdir():
        if file_path.is_file():
            category = get_category(file_path.suffix)
            dest_dir = SOURCE_DIR / category
            dest_dir.mkdir(exist_ok=True)
            
            dest_path = dest_dir / file_path.name
            # Handle duplicate names
            counter = 1
            while dest_path.exists():
                stem = file_path.stem
                suffix = file_path.suffix
                dest_path = dest_dir / f"{stem}_{counter}{suffix}"
                counter += 1
            
            shutil.move(str(file_path), str(dest_path))
            print(f"Moved: {file_path.name} -> {category}/")

if __name__ == "__main__":
    organize_files()
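
Before pointing this at a folder you care about, do a dry run. The helper below is an addition (it reuses SOURCE_DIR and get_category from the script above) and prints what would move without touching anything:

def preview_moves():
    """Print planned moves without modifying any files."""
    for file_path in SOURCE_DIR.iterdir():
        if file_path.is_file():
            print(f"Would move: {file_path.name} -> {get_category(file_path.suffix)}/")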

Automatic Organization with Watchdog

Run the organizer automatically when files appear:

#!/usr/bin/env python3
"""Watch a directory and organize new files automatically."""

from pathlib import Path
import shutil
import time
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler

SOURCE_DIR = Path.home() / "Downloads"
RULES = {
    "Images": [".jpg", ".jpeg", ".png", ".gif", ".webp"],
    "Documents": [".pdf", ".doc", ".docx", ".txt", ".md"],
    "Spreadsheets": [".xlsx", ".xls", ".csv"],
}

class FileOrganizer(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        
        # Wait for file to finish writing
        time.sleep(1)
        
        file_path = Path(event.src_path)
        if not file_path.exists():
            return
            
        category = self.get_category(file_path.suffix)
        if category:
            self.move_file(file_path, category)
    
    def get_category(self, extension):
        for category, extensions in RULES.items():
            if extension.lower() in extensions:
                return category
        return None
    
    def move_file(self, file_path, category):
        dest_dir = SOURCE_DIR / category
        dest_dir.mkdir(exist_ok=True)
        dest_path = dest_dir / file_path.name
        shutil.move(str(file_path), str(dest_path))
        print(f"Auto-organized: {file_path.name} -> {category}/")

if __name__ == "__main__":
    observer = Observer()
    observer.schedule(FileOrganizer(), str(SOURCE_DIR), recursive=False)
    observer.start()
    print(f"Watching {SOURCE_DIR} for new files...")
    
    try:
        while True:
            time.sleep(1)
    except KeyboardInterrupt:
        observer.stop()
    observer.join()

Run this script in the background. Every file downloaded automatically moves to its category folder.

Batch Rename Files

Rename multiple files with consistent patterns:

#!/usr/bin/env python3
"""Batch rename files with patterns."""

from pathlib import Path

def batch_rename(directory: str, pattern: str = "file_{num:03d}", extension_filter: str = None):
    """Rename files in directory using pattern, optionally filtered by extension."""
    dir_path = Path(directory)
    files = sorted(
        f for f in dir_path.iterdir()
        if f.is_file() and (extension_filter is None or f.suffix.lower() == extension_filter.lower())
    )
    
    for num, file_path in enumerate(files, 1):
        new_name = pattern.format(num=num) + file_path.suffix
        new_path = file_path.parent / new_name
        file_path.rename(new_path)
        print(f"Renamed: {file_path.name} -> {new_name}")

# Example usage
if __name__ == "__main__":
    # Rename all JPGs in a folder to "vacation_001.jpg", "vacation_002.jpg", etc.
    batch_rename(
        "/path/to/photos",
        pattern="vacation_{num:03d}",
        extension_filter=".jpg"
    )

Email Automation

Email processing consumes significant time. Python can help.

Gmail Processing with IMAP

Process emails programmatically:

#!/usr/bin/env python3
"""Process Gmail messages with IMAP."""

import imaplib
import email
from email.header import decode_header
from datetime import datetime, timedelta
import os

# Configuration (use environment variables in production)
IMAP_SERVER = "imap.gmail.com"
EMAIL = os.environ.get("GMAIL_ADDRESS")
PASSWORD = os.environ.get("GMAIL_APP_PASSWORD")  # Use App Password, not regular password

def connect_to_gmail():
    """Establish IMAP connection."""
    mail = imaplib.IMAP4_SSL(IMAP_SERVER)
    mail.login(EMAIL, PASSWORD)
    return mail

def search_emails(mail, criteria="ALL", folder="INBOX"):
    """Search for emails matching criteria."""
    mail.select(folder)
    status, messages = mail.search(None, criteria)
    return messages[0].split()

def get_email_content(mail, email_id):
    """Fetch and parse email content."""
    status, msg_data = mail.fetch(email_id, "(RFC822)")
    raw_email = msg_data[0][1]
    msg = email.message_from_bytes(raw_email)
    
    # Decode subject
    subject, encoding = decode_header(msg.get("Subject", ""))[0]
    if isinstance(subject, bytes):
        subject = subject.decode(encoding or "utf-8")
    
    # Get sender
    sender = msg.get("From")
    date = msg.get("Date")
    
    # Get body
    body = ""
    if msg.is_multipart():
        for part in msg.walk():
            if part.get_content_type() == "text/plain":
                body = part.get_payload(decode=True).decode(
                    part.get_content_charset() or "utf-8", errors="replace"
                )
                break
    else:
        body = msg.get_payload(decode=True).decode(
            msg.get_content_charset() or "utf-8", errors="replace"
        )
    
    return {
        "subject": subject,
        "from": sender,
        "date": date,
        "body": body
    }

def process_unread_emails():
    """Process all unread emails."""
    mail = connect_to_gmail()
    email_ids = search_emails(mail, "UNSEEN")
    
    for email_id in email_ids:
        content = get_email_content(mail, email_id)
        print(f"\nFrom: {content['from']}")
        print(f"Subject: {content['subject']}")
        print(f"Date: {content['date']}")
        # Add your processing logic here
    
    mail.logout()

if __name__ == "__main__":
    process_unread_emails()

Email Categorization

Automatically categorize and label emails:

#!/usr/bin/env python3
"""Categorize emails based on content."""

import re

CATEGORIES = {
    "receipts": [
        r"receipt",
        r"order confirmation",
        r"thank you for your purchase",
        r"your order",
    ],
    "newsletters": [
        r"unsubscribe",
        r"email preferences",
        r"weekly digest",
    ],
    "shipping": [
        r"shipped",
        r"tracking number",
        r"delivery",
        r"out for delivery",
    ],
    "calendar": [
        r"invitation",
        r"calendar event",
        r"meeting request",
        r"rsvp",
    ],
}

def categorize_email(subject: str, body: str) -> list:
    """Return categories that match email content."""
    text = f"{subject} {body}".lower()
    matches = []
    
    for category, patterns in CATEGORIES.items():
        for pattern in patterns:
            if re.search(pattern, text, re.IGNORECASE):
                matches.append(category)
                break
    
    return matches if matches else ["uncategorized"]

# Example usage
subject = "Your Amazon.com order has shipped"
body = "Your package is on its way. Tracking number: 1234567890"
categories = categorize_email(subject, body)
print(f"Categories: {categories}")  # ['receipts', 'shipping']

Web Scraping Automation

Gather information from websites automatically.

Basic Web Scraping

Extract data from web pages:

#!/usr/bin/env python3
"""Basic web scraping example."""

import requests
from bs4 import BeautifulSoup
from datetime import datetime
import json

def scrape_headlines(url: str) -> list:
    """Scrape headlines from a news site."""
    headers = {
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7)"
    }
    
    response = requests.get(url, headers=headers, timeout=10)
    response.raise_for_status()
    
    soup = BeautifulSoup(response.text, "html.parser")
    
    # Adjust selectors for your target site
    headlines = []
    for article in soup.select("article h2, .headline, h3.title"):
        text = article.get_text(strip=True)
        link = article.find_parent("a") or article.find("a")
        href = link.get("href") if link else None
        
        headlines.append({
            "title": text,
            "url": href,
            "scraped_at": datetime.now().isoformat()
        })
    
    return headlines

def save_results(data: list, filename: str):
    """Save scraped data to JSON."""
    with open(filename, "w") as f:
        json.dump(data, f, indent=2)

if __name__ == "__main__":
    # Example: scrape Hacker News
    url = "https://news.ycombinator.com"
    headlines = scrape_headlines(url)
    save_results(headlines, "headlines.json")
    print(f"Scraped {len(headlines)} headlines")

Price Monitoring

Track product prices over time:

#!/usr/bin/env python3
"""Monitor product prices and alert on drops."""

import requests
from bs4 import BeautifulSoup
import json
from pathlib import Path
from datetime import datetime

PRODUCTS_FILE = Path("products.json")
PRICES_FILE = Path("price_history.json")

def load_json(filepath: Path) -> dict:
    if filepath.exists():
        with open(filepath) as f:
            return json.load(f)
    return {}

def save_json(data: dict, filepath: Path):
    with open(filepath, "w") as f:
        json.dump(data, f, indent=2)

def get_price(url: str, selector: str) -> float | None:
    """Scrape price from product page."""
    headers = {"User-Agent": "Mozilla/5.0"}
    response = requests.get(url, headers=headers, timeout=10)
    soup = BeautifulSoup(response.text, "html.parser")
    
    price_element = soup.select_one(selector)
    if not price_element:
        return None
    
    # Extract numeric price
    price_text = price_element.get_text()
    price = float("".join(c for c in price_text if c.isdigit() or c == "."))
    return price

def check_prices():
    """Check all monitored products."""
    products = load_json(PRODUCTS_FILE)
    history = load_json(PRICES_FILE)
    
    for name, config in products.items():
        current_price = get_price(config["url"], config["selector"])
        if current_price is None:
            print(f"Could not get price for {name}")
            continue
        
        # Update history
        if name not in history:
            history[name] = []
        
        history[name].append({
            "price": current_price,
            "date": datetime.now().isoformat()
        })
        
        # Check for price drop
        if len(history[name]) > 1:
            prev_price = history[name][-2]["price"]
            if current_price < prev_price:
                drop = ((prev_price - current_price) / prev_price) * 100
                print(f"🎉 {name}: Price dropped {drop:.1f}% to ${current_price}")
        
        print(f"{name}: ${current_price}")
    
    save_json(history, PRICES_FILE)

if __name__ == "__main__":
    check_prices()
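
The script expects a products.json file mapping each product to a page URL and the CSS selector that locates its price element. The entries below are placeholders; substitute real product pages and selectors:

{
  "wireless-headphones": {
    "url": "https://example.com/products/wireless-headphones",
    "selector": ".price"
  },
  "mechanical-keyboard": {
    "url": "https://example.com/products/mechanical-keyboard",
    "selector": "span.product-price"
  }
}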

Data Processing Automation

Transform and process data automatically.

CSV Processing

Common CSV operations automated:

#!/usr/bin/env python3
"""Automate CSV processing tasks."""

import pandas as pd
from pathlib import Path
from datetime import datetime

def merge_csv_files(directory: str, output: str):
    """Merge all CSV files in directory into one."""
    dir_path = Path(directory)
    csv_files = list(dir_path.glob("*.csv"))
    
    if not csv_files:
        print("No CSV files found")
        return
    
    dfs = [pd.read_csv(f) for f in csv_files]
    merged = pd.concat(dfs, ignore_index=True)
    merged.to_csv(output, index=False)
    print(f"Merged {len(csv_files)} files into {output}")

def clean_csv(input_file: str, output_file: str):
    """Clean common issues in CSV files."""
    df = pd.read_csv(input_file)
    
    # Remove duplicates
    original_rows = len(df)
    df = df.drop_duplicates()
    
    # Strip whitespace from string columns
    for col in df.select_dtypes(include=["object"]).columns:
        df[col] = df[col].str.strip()
    
    # Remove rows with all NaN values
    df = df.dropna(how="all")
    
    # Save cleaned data
    df.to_csv(output_file, index=False)
    
    print(f"Cleaned {input_file}")
    print(f"  Removed {original_rows - len(df)} duplicate/empty rows")
    print(f"  Output: {output_file}")

def summarize_csv(input_file: str):
    """Generate summary statistics for CSV."""
    df = pd.read_csv(input_file)
    
    print(f"\n=== Summary of {input_file} ===")
    print(f"Rows: {len(df)}")
    print(f"Columns: {list(df.columns)}")
    print(f"\nColumn Types:")
    print(df.dtypes)
    print(f"\nNumeric Summary:")
    print(df.describe())

if __name__ == "__main__":
    # Example usage
    merge_csv_files("./monthly_reports/", "annual_report.csv")
    clean_csv("raw_data.csv", "clean_data.csv")
    summarize_csv("clean_data.csv")

Report Generation

Generate reports from data automatically:

#!/usr/bin/env python3
"""Generate automated reports from data."""

import pandas as pd
from datetime import datetime
from pathlib import Path

def generate_weekly_report(data_file: str, output_dir: str):
    """Generate formatted weekly report."""
    df = pd.read_csv(data_file)
    
    # Assume data has 'date', 'category', 'amount' columns
    df["date"] = pd.to_datetime(df["date"])
    
    # Calculate summaries
    total = df["amount"].sum()
    by_category = df.groupby("category")["amount"].sum().sort_values(ascending=False)
    daily_avg = df.groupby(df["date"].dt.date)["amount"].sum().mean()
    
    # Generate report content
    report_date = datetime.now().strftime("%Y-%m-%d")
    report = f"""# Weekly Report - {report_date}

## Summary
- **Total**: ${total:,.2f}
- **Daily Average**: ${daily_avg:,.2f}

## By Category
"""
    
    for category, amount in by_category.items():
        pct = (amount / total) * 100
        report += f"- {category}: ${amount:,.2f} ({pct:.1f}%)\n"
    
    # Save report
    output_path = Path(output_dir) / f"report_{report_date}.md"
    output_path.parent.mkdir(exist_ok=True)
    output_path.write_text(report)
    
    print(f"Report generated: {output_path}")
    return output_path

if __name__ == "__main__":
    generate_weekly_report("transactions.csv", "./reports/")
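
For reference, the script assumes transactions.csv has date, category, and amount columns, something like these illustrative rows:

date,category,amount
2025-01-06,groceries,54.20
2025-01-07,transport,12.00
2025-01-08,groceries,23.75
2025-01-08,dining,31.40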

Scheduling Automations

Run scripts automatically on schedule.

Using the Schedule Library

Schedule Python functions to run at specific times:

#!/usr/bin/env python3
"""Schedule automation tasks."""

import schedule
import time
from datetime import datetime

def morning_report():
    """Generate and send morning report."""
    print(f"[{datetime.now()}] Generating morning report...")
    # Add your report generation logic

def hourly_backup():
    """Backup important files."""
    print(f"[{datetime.now()}] Running backup...")
    # Add your backup logic

def daily_cleanup():
    """Clean up old files."""
    print(f"[{datetime.now()}] Running cleanup...")
    # Add your cleanup logic

def price_check():
    """Check monitored prices."""
    print(f"[{datetime.now()}] Checking prices...")
    # Add your price checking logic

# Schedule tasks
schedule.every().day.at("07:00").do(morning_report)
schedule.every().hour.do(hourly_backup)
schedule.every().day.at("23:00").do(daily_cleanup)
schedule.every(4).hours.do(price_check)

print("Scheduler started. Press Ctrl+C to stop.")
print("Scheduled tasks:")
for job in schedule.get_jobs():
    print(f"  - {job}")

# Run scheduler loop
while True:
    schedule.run_pending()
    time.sleep(60)

Using System Schedulers

For persistent scheduling, use system cron (Linux/macOS) or Task Scheduler (Windows). These run scripts even when your terminal is closed.

# Edit crontab
crontab -e

# Run file organizer every 30 minutes
*/30 * * * * /usr/bin/python3 ~/automation/scripts/file_organizer.py

# Run daily report at 7 AM
0 7 * * * /usr/bin/python3 ~/automation/scripts/daily_report.py

Method

This guide combines practical experience with systematic development:

Step 1: Workflow Audit. I documented my own repetitive tasks over two weeks, categorizing them by frequency and time required.

Step 2: Automation Prioritization. I ranked tasks by automation potential: frequency × time saved, divided by implementation effort.
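
A back-of-the-envelope version of that scoring (the task list and numbers are illustrative, not the actual audit data):

def automation_score(runs_per_week: float, minutes_per_run: float, effort_hours: float) -> float:
    """Weekly minutes reclaimed per hour of implementation effort."""
    return (runs_per_week * minutes_per_run) / max(effort_hours, 0.5)

tasks = {
    "file sorting": (10, 2, 1),  # runs/week, minutes/run, effort in hours
    "weekly report": (1, 45, 4),
    "email triage": (20, 1.5, 6),
}

for name, args in sorted(tasks.items(), key=lambda kv: -automation_score(*kv[1])):
    print(f"{name}: {automation_score(*args):.1f}")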

Step 3: Pattern Development. For each task category, I developed reusable patterns that could be adapted to similar problems.

Step 4: Testing and Refinement. Each script was tested in real use, refined based on edge cases encountered, and improved for reliability.

Step 5: Documentation. I documented patterns, libraries, and common pitfalls to create this guide.

API Automation

Interact with web services programmatically.

REST API Basics

#!/usr/bin/env python3
"""Interact with REST APIs."""

import requests
import os
from datetime import datetime

class APIClient:
    """Generic REST API client."""
    
    def __init__(self, base_url: str, api_key: str = None):
        self.base_url = base_url.rstrip("/")
        self.session = requests.Session()
        if api_key:
            self.session.headers["Authorization"] = f"Bearer {api_key}"
    
    def get(self, endpoint: str, params: dict = None) -> dict:
        url = f"{self.base_url}/{endpoint}"
        response = self.session.get(url, params=params, timeout=10)
        response.raise_for_status()
        return response.json()
    
    def post(self, endpoint: str, data: dict) -> dict:
        url = f"{self.base_url}/{endpoint}"
        response = self.session.post(url, json=data, timeout=10)
        response.raise_for_status()
        return response.json()

# Example: Weather API
def get_weather(city: str) -> dict:
    """Get current weather for a city."""
    api_key = os.environ.get("WEATHER_API_KEY")
    client = APIClient("https://api.openweathermap.org/data/2.5")
    
    weather = client.get("weather", {
        "q": city,
        "appid": api_key,
        "units": "metric"
    })
    
    return {
        "city": weather["name"],
        "temp": weather["main"]["temp"],
        "description": weather["weather"][0]["description"],
        "humidity": weather["main"]["humidity"]
    }

if __name__ == "__main__":
    weather = get_weather("Prague")
    print(f"Weather in {weather['city']}: {weather['temp']}°C, {weather['description']}")

Slack Notifications

Send notifications to Slack:

#!/usr/bin/env python3
"""Send Slack notifications from automations."""

import requests
import os

def send_slack_message(message: str, channel: str = None):
    """Send message to Slack via webhook."""
    webhook_url = os.environ.get("SLACK_WEBHOOK_URL")
    
    payload = {"text": message}
    if channel:
        payload["channel"] = channel
    
    response = requests.post(webhook_url, json=payload, timeout=10)
    response.raise_for_status()
    
    return response.status_code == 200

def notify_completion(task_name: str, success: bool, details: str = ""):
    """Send automation completion notification."""
    emoji = "✅" if success else "❌"
    status = "completed" if success else "failed"
    
    message = f"{emoji} Automation *{task_name}* {status}"
    if details:
        message += f"\n```{details}```"
    
    send_slack_message(message)

# Example usage
if __name__ == "__main__":
    # After completing an automation task
    notify_completion(
        "Daily Backup",
        success=True,
        details="Backed up 150 files (2.3 GB)"
    )

Error Handling and Logging

Robust automation requires proper error handling.

Logging Setup

#!/usr/bin/env python3
"""Logging configuration for automation scripts."""

import logging
from pathlib import Path
from datetime import datetime

def setup_logging(
    script_name: str,
    log_dir: str = "~/automation/logs"
) -> logging.Logger:
    """Configure logging for automation script."""
    
    log_dir = Path(log_dir).expanduser()
    log_dir.mkdir(parents=True, exist_ok=True)
    
    log_file = log_dir / f"{script_name}.log"
    
    # Create logger
    logger = logging.getLogger(script_name)
    logger.setLevel(logging.DEBUG)
    
    # File handler (all messages)
    file_handler = logging.FileHandler(log_file)
    file_handler.setLevel(logging.DEBUG)
    file_format = logging.Formatter(
        "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
    )
    file_handler.setFormatter(file_format)
    
    # Console handler (info and above)
    console_handler = logging.StreamHandler()
    console_handler.setLevel(logging.INFO)
    console_format = logging.Formatter("%(levelname)s: %(message)s")
    console_handler.setFormatter(console_format)
    
    logger.addHandler(file_handler)
    logger.addHandler(console_handler)
    
    return logger

# Usage
logger = setup_logging("file_organizer")
logger.info("Starting file organization")
logger.debug("Processing directory: /Users/me/Downloads")
logger.warning("Skipping locked file: document.pdf")
logger.error("Failed to move file: permission denied")

Robust Error Handling

#!/usr/bin/env python3
"""Error handling patterns for automation."""

import functools
import time
import logging

logger = logging.getLogger(__name__)

def retry(max_attempts: int = 3, delay: float = 1.0, backoff: float = 2.0):
    """Decorator to retry failed operations."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            last_exception = None
            
            for attempt in range(1, max_attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception as e:
                    last_exception = e
                    logger.warning(
                        f"Attempt {attempt}/{max_attempts} failed: {e}"
                    )
                    if attempt < max_attempts:
                        sleep_time = delay * (backoff ** (attempt - 1))
                        time.sleep(sleep_time)
            
            logger.error(f"All {max_attempts} attempts failed")
            raise last_exception
        return wrapper
    return decorator

def safe_execute(func, *args, default=None, **kwargs):
    """Execute function and return default on failure."""
    try:
        return func(*args, **kwargs)
    except Exception as e:
        logger.error(f"Error in {func.__name__}: {e}")
        return default

# Usage examples
@retry(max_attempts=3, delay=2.0)
def fetch_data_from_api():
    """This will retry up to 3 times on failure."""
    # API call that might fail
    pass

result = safe_execute(fetch_data_from_api, default=[])
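
These patterns compose with the earlier scripts. For example, wrapping the price monitor's get_price in the retry decorator so transient network errors don't kill a run (the price_monitor module name assumes you saved that script as price_monitor.py):

from price_monitor import get_price  # hypothetical module: the price-monitoring script above

@retry(max_attempts=3, delay=2.0)
def get_price_with_retry(url: str, selector: str):
    return get_price(url, selector)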

Generative Engine Optimization

Python automation and Generative Engine Optimization intersect significantly. AI tools can accelerate automation development, and automation can enhance AI workflows.

AI-Assisted Script Generation

AI excels at generating Python scripts for common automation tasks:

# This script was generated with AI assistance
# Prompt: "Write a Python script that monitors a folder for new images 
# and automatically resizes them to max 1920px width while preserving 
# aspect ratio"

from pathlib import Path
from PIL import Image
from watchdog.observers import Observer
from watchdog.events import FileSystemEventHandler
import time

MAX_WIDTH = 1920

class ImageResizer(FileSystemEventHandler):
    def on_created(self, event):
        if event.is_directory:
            return
        
        path = Path(event.src_path)
        if path.suffix.lower() in ['.jpg', '.jpeg', '.png']:
            time.sleep(1)  # Wait for file to finish writing
            self.resize_image(path)
    
    def resize_image(self, path):
        with Image.open(path) as img:
            if img.width > MAX_WIDTH:
                ratio = MAX_WIDTH / img.width
                new_size = (MAX_WIDTH, int(img.height * ratio))
                img = img.resize(new_size, Image.LANCZOS)
                img.save(path)
                print(f"Resized: {path.name}")

The GEO leverage here is bidirectional: use AI to write automation code faster, and use automation to apply AI capabilities at scale.
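
In the other direction, a sketch of automation driving AI at scale: batch-summarizing a folder of notes through an OpenAI-style chat completions endpoint (the endpoint, model name, and OPENAI_API_KEY variable are assumptions; adapt them to whichever provider you use):

import os
from pathlib import Path
import requests

def summarize(text: str) -> str:
    """Ask an OpenAI-style chat completions API for a two-sentence summary."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-4o-mini",
            "messages": [
                {"role": "user", "content": f"Summarize in two sentences:\n\n{text}"}
            ],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

for path in Path("./notes").glob("*.txt"):
    print(f"--- {path.name} ---")
    print(summarize(path.read_text()))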

Building Your Automation Library

Over time, extract the pieces you keep rewriting (logging setup, notifications, retry logic) into shared modules. New scripts then compose existing components instead of starting from scratch, and your automation capability compounds.
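
Concretely, that can be a small package of helpers every script imports. A sketch (the lib layout and names are illustrative):

# ~/automation/lib/notify.py: the Slack helper from earlier, extracted once
import os
import requests

def send_slack_message(message: str) -> None:
    """Post a message to the webhook configured in SLACK_WEBHOOK_URL."""
    webhook_url = os.environ["SLACK_WEBHOOK_URL"]
    requests.post(webhook_url, json={"text": message}, timeout=10).raise_for_status()

# A new script then composes existing pieces instead of rewriting them:
#   from lib.notify import send_slack_message
#   send_slack_message("Nightly backup finished")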

Final Thoughts

Mochi automates nothing consciously. Yet her life runs smoothly—meals appear, naps happen, attention arrives. Her automation is her environment: I’m the automation layer that makes her life predictable.

You can be that layer for your own work life. Every repetitive task is an opportunity. Every script you write saves future time. The investment in automation compounds—a script written once runs forever.

Start small. Automate one annoying task this week. Then another. Build your library. Learn the patterns. Soon you’ll see automation opportunities everywhere.

Python makes it accessible. The language reads clearly. The ecosystem covers every need. The barriers are low. The only requirement is starting.

Your repetitive tasks aren’t going away. But they don’t have to stay manual. Write the script. Save the time. Do more interesting work instead.

Open your editor. Pick a task. Start automating.