Hacker News | josh-gree's comments

  Hi HN! I wanted to test the custom skills I've been building for Claude (https://github.com/josh-gree/gen-art-framework – see PRs/issues for the workflow patterns), so I asked it to build an automated generative art gallery from scratch.

  Result: https://josh-gree.github.io/gen-art-gallery/
  Repo: https://github.com/josh-gree/gen-art-gallery

  Everything here was done through conversation with Claude:
  - Set up the repo structure and GitHub Actions workflow
  - Built the static gallery site (HTML/CSS/JS)
  - Debugged workflow failures (Python version issues, uv configuration)
  - Added incremental generation (only regenerate changed scripts)
  - Added UI features (modal view, source links)
  - Created example art scripts
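The incremental-generation step above can be sketched as a hash-based cache: fingerprint each art script and re-run only the ones whose contents changed since the last build. This is a minimal sketch of the idea, not the repo's actual code; the `scripts/` directory and `manifest.json` cache file are assumptions.

```python
# Hypothetical sketch: only regenerate art for scripts whose hash changed.
# SCRIPTS and MANIFEST are assumed names, not taken from gen-art-gallery.
import hashlib
import json
import subprocess
import sys
from pathlib import Path

SCRIPTS = Path("scripts")        # assumed location of the art scripts
MANIFEST = Path("manifest.json") # cache of script hashes from the last run

def sha256(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def changed_scripts() -> list[Path]:
    """Scripts whose current hash differs from the cached one."""
    old = json.loads(MANIFEST.read_text()) if MANIFEST.exists() else {}
    return [p for p in sorted(SCRIPTS.glob("*.py")) if old.get(p.name) != sha256(p)]

def regenerate() -> None:
    """Run only the changed scripts, then refresh the hash manifest."""
    for script in changed_scripts():
        subprocess.run([sys.executable, str(script)], check=True)
    MANIFEST.write_text(
        json.dumps({p.name: sha256(p) for p in SCRIPTS.glob("*.py")})
    )
```

In a CI job this keeps the workflow fast: an unchanged script is skipped entirely, and a second run right after a successful build does nothing.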

  The skills handle the full development lifecycle:
  - Creating tickets to capture intent (not implementation plans)
  - Planning implementations for existing tickets
  - Executing on plans
  - Reviewing PRs and addressing feedback
  - Managing git worktrees for parallel development
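The worktree step above can be sketched as thin wrappers over `git worktree`: each ticket gets its own checkout next to the main repo, so parallel branches never fight over one working directory. The `git` subcommands are real; the wrapper function names and the `<repo>-<branch>` path convention are mine, not the skills'.

```python
# Illustrative sketch of parallel development via git worktrees.
# Each branch gets a sibling checkout (../<repo>-<branch>), so several
# changes can be worked on at once without switching branches in place.
import subprocess
from pathlib import Path

def add_worktree(repo: Path, branch: str, base: str = "main") -> Path:
    """Create ../<repo>-<branch> as a new worktree on a fresh branch off `base`."""
    path = repo.parent / f"{repo.name}-{branch}"
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "add", "-b", branch, str(path), base],
        check=True,
    )
    return path

def remove_worktree(repo: Path, path: Path) -> None:
    """Tear the worktree down once its branch has been merged."""
    subprocess.run(
        ["git", "-C", str(repo), "worktree", "remove", str(path)],
        check=True,
    )
```

`git worktree list` (run in the main repo) then shows one line per in-flight ticket, which is a handy view of what the assistant is working on in parallel.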

  The goal wasn't the gallery itself, but pinning down a good set of patterns for how Claude should handle planning, implementing, and reviewing changes in a real project. The PRs and issues in the gen-art-framework repo show the iteration on these workflows.

  Curious if others are building similar "meta" workflows for AI coding assistants – patterns that work well across different projects?


Of course they are ... It's college administrations that are uncomfortable.


*Location:* UK (Manchester)

*Remote:* Yes (preferred)

*Willing to relocate:* UK/EU considered for the right role

*Resume:* josh-gree.github.io/cv

*Email:* joshuadouglasgreenhalgh@gmail.com

*Technologies:* Python, SQL, R; Airflow, Prefect, Dagster; Kafka; Docker/Kubernetes; Terraform; GCP/AWS; Postgres, PostGIS, Snowflake, Redshift; Zarr/Parquet; ML/Deep Learning; HPC; React/Flask.

*Summary:* Senior Software/Data Engineer with a strong mathematical and computational modelling background. I build high-reliability data systems, complex ETL/ELT pipelines, and ML-ready data platforms—especially where datasets are large, irregular, hierarchical, or scientifically complex.

Most recently, I’ve been designing and operating large-scale data infrastructure for high-dimensional biological datasets (100k+ samples), unifying heterogeneous storage formats into lineage-aware catalogues, creating ontologies for hierarchical labels, building QC pipelines in Dagster, developing synthetic single-cell data generators, and working closely with domain scientists to formalise and scale experimental and computational workflows.

Previously: large-scale mobile-network analytics for humanitarian agencies; climate/energy data engineering; ad-tech pipelines; and HPC-driven modelling from computational research.

I’m looking for roles where difficult data problems, scientific or ML-adjacent pipelines, or complex modelling workflows need to be made robust, reproducible, and scalable. Prefer small teams, high ownership, and work with real impact.

*What I offer:*

– Architecture & implementation of reliable data/ML platforms

– Workflow orchestration, data governance, and reproducibility

– Scientific/ML pipeline design (Bayesian modelling, synthetic data, QC/validation)

– Cloud infra/IaC and cost-efficient storage design

– Ability to collaborate deeply with domain experts and formalise messy processes


