How I Work

What this page is about

This page isn't a list of tools I use. It's an attempt to describe how I actually think about problems, collaborate with people and systems, and use technology to get things done — including AI tools. Most portfolios describe what someone made. This one tries to describe how.

If you're trying to figure out whether I'd be a good fit for a team, process is usually more predictive than output. Output can come from anywhere. Process is harder to fake.

Working with people

The QA background shapes this more than anything else. Four years at Bungie doing test engineering across large live productions taught a specific kind of discipline: understanding a system well enough to know where it's most likely to break, communicating that clearly to the people who can fix it, and building processes so the same failure mode doesn't have to be caught twice. That habit carries through all of the tech art work.

On the Left Turn projects, being the sole or lead TA meant owning the interface between the art team, engineering, and whatever the project needed. That required understanding enough of each discipline to communicate accurately — knowing what an engineer needs to understand about a shader to support it, knowing what a level designer actually needs from a foliage tool versus what they think they need. Good tools and pipelines get used without friction. Bad ones get worked around silently.

At Epic, the coordination problem was different in scale: distributed QA teams across multiple content verticals, a large live project, and the need to scale coverage without centralizing ownership of everything. The FX validation framework on Lego Fortnite was a direct product of that problem — the goal wasn't to do more testing myself; it was to put the right testing in the hands of the people closest to the work.

Working with technology

There's a kind of comfort that comes from having worked in enough different tools and codebases that "unfamiliar" stops feeling like an obstacle. Each new engine, shader system, or scripting environment is a variation on patterns I've seen before. The first thing I do with an unfamiliar tool is find the edges — what it does well, where its documented behavior stops, what the actual behavior is past that point. That's useful information whether I'm using the tool or building something on top of it.

The work I did on Relic's Zero Engine is the most direct example of this: the engine had documented limits, those limits blocked the project's visual goals, and understanding exactly where the limits were made it possible to have a productive conversation with the engineer about what was actually achievable. The same pattern shows up in the shader work (working within the constraints of a specific render pipeline), the rigging work (understanding what Unity's animation system will and won't do with constraints), and the tooling work (knowing what an editor utility widget can hook into in Unreal).

Working with Claude Code

I use Claude Code for tooling and pipeline problems — cases where the problem is well-defined, the output format is clear, and the value of having code written and iterated on quickly is high. The triage tool on Lego Fortnite is the clearest example: I needed a specific output (an HTML report with cross-platform test comparison, trend tracking, and Jira-linked failures), the data sources were documented, and I understood exactly what the report needed to communicate. Within that framing, Claude Code handled the implementation work and the tool was built and deployed in a day.
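To make the shape of that kind of tool concrete, here is a minimal sketch of the idea — per-platform test results folded into a single HTML comparison table with failures linked to their Jira issues. Every name, URL, and data shape here is hypothetical, chosen for illustration; it is not the actual Lego Fortnite tool.

```python
# Illustrative sketch only: turn per-platform test results into an HTML
# comparison table with Jira-linked failures. All names are hypothetical.
from html import escape

JIRA_BASE = "https://jira.example.com/browse/"  # placeholder, not a real instance


def build_report(results):
    """results: {test_name: {platform: {"status": "pass"|"fail", "jira": str|None}}}"""
    platforms = sorted({p for by_platform in results.values() for p in by_platform})
    header = "".join(f"<th>{escape(p)}</th>" for p in platforms)
    rows = []
    for test, by_platform in sorted(results.items()):
        cells = []
        for p in platforms:
            r = by_platform.get(p)
            if r is None:
                cells.append("<td>-</td>")  # test not run on this platform
            elif r["status"] == "pass":
                cells.append('<td class="pass">pass</td>')
            else:
                jira = r.get("jira")
                link = (
                    f' <a href="{JIRA_BASE}{escape(jira)}">{escape(jira)}</a>'
                    if jira else ""
                )
                cells.append(f'<td class="fail">fail{link}</td>')
        rows.append(f"<tr><td>{escape(test)}</td>{''.join(cells)}</tr>")
    return f"<table><tr><th>Test</th>{header}</tr>{''.join(rows)}</table>"
```

The point of the sketch is the framing described above: the output format is fully specified up front (table layout, pass/fail cells, Jira links), so the implementation is mechanical — exactly the kind of well-bounded problem the paragraph describes handing to Claude Code.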

This site is also built entirely with Claude Code — plain HTML, CSS, and JavaScript, no framework, no build step. Every design decision in the stylesheet has a reason, and the codebase is written to be readable and editable without needing to re-engage AI tooling for small changes.

The way the collaboration actually works: Claude Code is most useful when I can describe the problem precisely — what it is, what constraints it has, what success looks like. That requires having done enough thinking to know what you actually want, which is most of the work anyway. The session structure matters too: Claude Code doesn't carry context between sessions, so useful collaboration requires explicit orientation at the start of each session — what the project is, where we left off, what the current task is. I've developed a consistent set of working notes and orientation protocols for ongoing projects that make that re-orientation fast.

The honest version of AI tooling fluency isn't "I can make AI write code." It's understanding what these tools are good at, what they're not, and how to structure work so the collaboration produces something worth having.

What it doesn't replace

Knowing what to build. Knowing whether it's worth building. Knowing when the technically correct solution is the wrong answer for the project. The judgment about what a shader should feel like, whether a tool will actually get used, whether a test coverage approach will hold up in production — none of that comes from a language model. It comes from having done the work, shipped things, watched them break, and built better ones.

AI tooling is fast and useful. It does not have taste. The craftsperson is still the one who decides what good looks like.