Automating Repetitive Tasks with AI Scripts

Every developer has tasks they do repeatedly that are too complex for a simple alias but too small to justify a “proper” tool. Data migrations, file processing, API integrations, report generation: the stuff that eats hours but feels too mundane to prioritize. AI has made automating these tasks almost trivially easy. Here’s how I approach it.
Identifying Automation Candidates
Not everything should be automated. I use a simple mental framework:
- Frequency: Do I do this more than twice a week?
- Time cost: Does it take more than 5 minutes each time?
- Error-prone: Do I sometimes make mistakes doing it manually?
- Well-defined: Can I describe the inputs and expected outputs clearly?
If three of four are yes, it’s worth automating. With AI, the calculation shifts because the cost of creating the automation is so much lower. Even tasks I do once a week that take 10 minutes can be worth automating if the script takes 5 minutes to generate.
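To make that last point concrete, the break-even math is simple enough to sanity-check in a few lines. This is just a throwaway sketch with the illustrative numbers from the paragraph above, not measurements:
// payback.mjs (hypothetical) - rough break-even math for automating a task
const minutesPerRun = 10;     // manual time per run (example from above)
const runsPerMonth = 4;       // roughly once a week
const minutesToBuild = 5;     // time to generate and review the script with AI

const savedFirstMonth = minutesPerRun * runsPerMonth - minutesToBuild; // 35 minutes
const breaksEvenAfterRuns = Math.ceil(minutesToBuild / minutesPerRun); // 1 run

console.log(`Saves ~${savedFirstMonth} min in month one; breaks even after ${breaksEvenAfterRuns} run(s).`);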
Data Migration Scripts
I recently needed to migrate user data from a legacy MySQL database to a new PostgreSQL schema. The schemas were different, field names had changed, and some data needed transformation. Manually writing the mapping would have taken hours. Instead:
// migrate-users.mjs - Generated with AI, then reviewed and refined
import mysql from "mysql2/promise";
import pg from "pg";

const source = await mysql.createConnection({
  host: "localhost", port: 3306,
  user: "legacy_read", database: "legacy_app"
});
const dest = new pg.Pool({
  connectionString: "postgresql://localhost:5432/new_app"
});

const BATCH_SIZE = 100;
let offset = 0;
let migrated = 0;
const errors = [];

while (true) {
  const [rows] = await source.query(
    "SELECT * FROM users ORDER BY id LIMIT ? OFFSET ?",
    [BATCH_SIZE, offset]
  );
  if (rows.length === 0) break;

  for (const row of rows) {
    try {
      await dest.query(
        `INSERT INTO users (id, email, full_name, role, created_at, is_active)
         VALUES ($1, $2, $3, $4, $5, $6)
         ON CONFLICT (id) DO NOTHING`,
        [
          row.id,
          row.email_address?.toLowerCase(),
          `${row.first_name} ${row.last_name}`.trim(),
          mapLegacyRole(row.user_type),
          row.created_date || new Date(),
          row.status === "active"
        ]
      );
      migrated++;
    } catch (err) {
      errors.push({ id: row.id, error: err.message });
    }
  }

  offset += BATCH_SIZE;
  console.log(`Migrated ${migrated} users...`);
}

console.log(`Done. ${migrated} migrated, ${errors.length} errors.`);
if (errors.length) console.table(errors);

// Close both connections so the process can exit cleanly
await source.end();
await dest.end();

function mapLegacyRole(type) {
  const map = { admin: "admin", mod: "moderator", user: "member", guest: "member" };
  return map[type] || "member";
}
The key insight: I described the source and target schemas to the AI, explained the transformation rules, and got a working first draft in under a minute. I then reviewed it, added the batch processing and error collection, and had a production-ready script in about 10 minutes total.
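One step I add before calling a migration like this done is a quick verification pass. It wasn't part of the generated script above; here's a minimal sketch, reusing the same connection setup and assuming the legacy users table maps one-to-one onto the new one (the file name and the id used for the spot check are just placeholders):
// verify-migration.mjs (hypothetical) - post-migration sanity check
import mysql from "mysql2/promise";
import pg from "pg";

const source = await mysql.createConnection({
  host: "localhost", port: 3306, user: "legacy_read", database: "legacy_app"
});
const dest = new pg.Pool({ connectionString: "postgresql://localhost:5432/new_app" });

// Compare row counts between the legacy table and the new one
const [[{ total: legacyCount }]] = await source.query("SELECT COUNT(*) AS total FROM users");
const { rows: [{ count: newCount }] } = await dest.query("SELECT COUNT(*) FROM users");
console.log(`Legacy: ${legacyCount}, migrated: ${newCount}`);

// Spot-check one transformed record (assumes id 1 exists on both sides)
const [[legacyRow]] = await source.query("SELECT * FROM users WHERE id = ?", [1]);
const { rows: [newRow] } = await dest.query("SELECT * FROM users WHERE id = $1", [1]);
console.log({ legacyRow, newRow });

await source.end();
await dest.end();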
File Processing
Bulk file processing is a perfect automation target. Here’s a Python script I generated for processing uploaded images:
# process_uploads.py - Resize, optimize, and organize uploaded images
from pathlib import Path
from PIL import Image
import hashlib
import shutil

UPLOAD_DIR = Path("./uploads/raw")
OUTPUT_DIR = Path("./uploads/processed")
SIZES = {"thumb": (150, 150), "medium": (800, 600), "large": (1920, 1080)}

def process_image(filepath):
    img = Image.open(filepath)
    file_hash = hashlib.md5(filepath.read_bytes()).hexdigest()[:8]
    stem = filepath.stem
    for size_name, dimensions in SIZES.items():
        output_path = OUTPUT_DIR / size_name / f"{stem}-{file_hash}.webp"
        output_path.parent.mkdir(parents=True, exist_ok=True)
        resized = img.copy()
        resized.thumbnail(dimensions, Image.LANCZOS)
        resized.save(output_path, "WEBP", quality=85)
        print(f"  Created {size_name}: {output_path}")

def main():
    # Path.glob can't match multiple extensions in one pattern, so filter by suffix
    images = [f for f in UPLOAD_DIR.iterdir()
              if f.suffix.lower() in (".jpg", ".jpeg", ".png", ".webp")]
    print(f"Processing {len(images)} images...")
    # Make sure the destination folders for the moves below exist
    (UPLOAD_DIR / "done").mkdir(exist_ok=True)
    (UPLOAD_DIR / "failed").mkdir(exist_ok=True)
    for img_path in images:
        print(f"Processing: {img_path.name}")
        try:
            process_image(img_path)
            shutil.move(str(img_path), UPLOAD_DIR / "done" / img_path.name)
        except Exception as e:
            print(f"  ERROR: {e}")
            shutil.move(str(img_path), UPLOAD_DIR / "failed" / img_path.name)

if __name__ == "__main__":
    main()
Scheduling with Cron
Once you have reliable scripts, schedule them. Here’s my approach:
# Edit crontab
crontab -e
# Process uploads every 15 minutes
*/15 * * * * cd /opt/app && python process_uploads.py >> /var/log/uploads.log 2>&1
# Daily report generation at 7 AM
0 7 * * * cd /opt/app && node generate-report.mjs >> /var/log/reports.log 2>&1
# Weekly database cleanup on Sunday at 3 AM
0 3 * * 0 cd /opt/app && node cleanup-expired.mjs >> /var/log/cleanup.log 2>&1
Always redirect output to log files. Always include 2>&1 to capture errors. Always cd to the right directory first.
Error Handling Patterns
AI-generated scripts often have minimal error handling. I always add these patterns before running anything in production:
// Retry with exponential backoff
async function withRetry(fn, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt === maxAttempts) throw err;
      const delay = Math.pow(2, attempt) * 1000;
      console.warn(`Attempt ${attempt} failed, retrying in ${delay}ms...`);
      await new Promise(r => setTimeout(r, delay));
    }
  }
}
// Dead letter queue for failed items
// (processItem and item stand in for whatever unit of work your script handles)
import fs from "node:fs/promises";

const failures = [];
try {
  await processItem(item);
} catch (err) {
  failures.push({ item, error: err.message, timestamp: new Date() });
}

// At the end, write failures for manual review
if (failures.length > 0) {
  await fs.writeFile(
    `./failures-${Date.now()}.json`,
    JSON.stringify(failures, null, 2)
  );
  console.error(`${failures.length} items failed. See failures log.`);
  process.exit(1);
}
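These patterns compose with the scripts above. For example, the per-row insert in the migration script could be wrapped in withRetry so a transient connection blip doesn't land a row in the failure list. A sketch, reusing the names from the earlier examples:
// Sketch: combining withRetry with the dead-letter pattern from above
for (const row of rows) {
  try {
    // Retry transient failures (e.g. a dropped connection) a few times
    await withRetry(() =>
      dest.query(
        `INSERT INTO users (id, email, full_name, role, created_at, is_active)
         VALUES ($1, $2, $3, $4, $5, $6)
         ON CONFLICT (id) DO NOTHING`,
        [row.id, row.email_address?.toLowerCase(),
         `${row.first_name} ${row.last_name}`.trim(),
         mapLegacyRole(row.user_type), row.created_date || new Date(),
         row.status === "active"]
      )
    );
    migrated++;
  } catch (err) {
    // Only rows that failed every attempt end up in the dead letter queue
    failures.push({ item: row.id, error: err.message, timestamp: new Date() });
  }
}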
The pattern of using AI to generate the happy-path script, then manually adding robust error handling, gives you the best of both worlds: speed of generation with reliability of human oversight. Don’t trust AI-generated scripts to handle edge cases correctly. Review them, add error handling, and test with bad data before running on anything that matters.