Deep Dive

Claude and OpenClaw Skills I Use Every Day to 50x My Productivity

Jon Fiorante · March 24, 2026
3,000+ GitHub contributions · 100-day Claude Code streak

After months of building with Claude Code daily, here are the plugins, skills, and workflows that have given me development superpowers. These methods have fundamentally changed how I approach software engineering, moving me from writing code to architecting systems. By leaning on these specific AI patterns, I have scaled my output beyond what I ever thought possible.

01

Superpowers: Brainstorming

claude.com/plugins/superpowers

Here is exactly how it works

1

The brainstorming phase forces real design thinking before code

The /brainstorm skill makes Claude stop and have a back-and-forth conversation about what you are actually trying to build. It explores options, presents alternatives, and refines the concept with you before anything gets written.

2

It produces real artifacts before implementation starts

Once brainstorming is done and you have agreed on the direction, it outputs a design.md and a plan.md. Actual documents that capture the decisions made and the implementation roadmap. That means you have a paper trail of why things were built a certain way, and Claude has a concrete plan to execute against rather than freestyling.
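To make that concrete, here is the kind of skeleton those two artifacts might follow. The headings below are an illustrative sketch, not the plugin's actual output format:

```markdown
<!-- design.md (illustrative skeleton) -->
# Design: <feature name>
## Problem statement
## Options considered
## Decision and rationale

<!-- plan.md (illustrative skeleton) -->
# Implementation plan
1. <first task, derived from the design>
2. <second task>
```

The point is less the exact layout than the paper trail: the design captures the "why," and the plan gives Claude something concrete to execute against.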

3

The clean transition into implementation mode keeps things disciplined

Instead of brainstorming and coding getting tangled together (which is Claude Code's default tendency), Superpowers draws a hard line: you are either designing or you are implementing. Once you switch to implementation, you choose between two styles:

Alternative

Plan-Driven Development

Uses a traditional task list with a single agent that implements directly from the specs. Works well for smaller projects where the overhead of spinning up subagents is not worth it.

02

Agent-Eval

Internal Tool of AV Engine

Here is exactly how it works

1 Downloads Qwen 3.5 9B from Ollama. The base model that gets evaluated and improved.
2 Runs a baseline evaluation against 500+ scenarios, a set that grows every day.
3 Uses Codex 5.4 as a judge via ChatGPT OAuth. The frontier model scores the local model.
4 Analyzes scores and results. Finds patterns in what the model gets right and wrong.
5 Adds improvements to the .agent folder: skills, memory, plugins, system prompt, tools. This is where the actual optimization happens.
6 Runs the eval loop again. Same scenarios, new configuration.
7 Compares results to the first run. If it is better, it gets committed. If it is worse, it is discarded. No guessing. Data decides.
8 Agent finishes with an HTML dashboard showing the improvement. Visual proof of every optimization cycle.
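The core of steps 2–7 is an evaluate-compare-commit loop. Here is a minimal sketch of that idea; the function names (`optimize`, `judge_score`, `propose_improvement`) and the config shape are assumptions for illustration, not Agent-Eval's real API:

```python
def optimize(scenarios, base_config, propose_improvement, judge_score):
    """One optimization cycle: baseline eval -> improve -> re-eval -> keep if better."""

    def evaluate(cfg):
        # The judge (a frontier model in the real tool) scores every scenario.
        scores = [judge_score(scenario, cfg) for scenario in scenarios]
        return sum(scores) / len(scores)

    best_config = base_config
    best_score = evaluate(best_config)            # baseline run over the same scenarios
    candidate = propose_improvement(best_config)  # e.g. tweak skills/memory/prompt
    candidate_score = evaluate(candidate)         # re-run: same scenarios, new config
    if candidate_score > best_score:              # data decides: better -> commit
        best_config, best_score = candidate, candidate_score
    # otherwise the candidate is discarded and the old config stands
    return best_config, best_score
```

Because the scenario set and the judge are fixed within a cycle, any score change is attributable to the configuration change alone, which is what makes the loop repeatable rather than vibes-based.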

Why this matters

Most people fine-tune models or tweak prompts by feel. Agent-Eval turns that into a measurable, repeatable loop. Every improvement is backed by data. Every regression is caught before it ships. The model gets better every single day without any manual intervention.

03

Lossless Claw

github.com/Martian-Engineering/lossless-claw

Here is exactly how it works

1

Current context management is destructive

OpenClaw, like most AI coding agents, uses a sliding-window approach that simply truncates older messages when the context window fills up. This means decisions, file changes, and reasoning from earlier in a session are permanently lost from the model's view. Lossless Claw replaces that entirely.

2

Agents can actively recall compressed history

It is not just passive summarization. The plugin gives agents tools like lcm_grep (search across all past messages and summaries), lcm_describe (inspect any summary or stored file), and lcm_expand_query (delegate a sub-agent to search the memory and answer a specific question from compacted history). An agent working on a long task can pull back exact details that would have been thrown away under normal compaction.
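A rough mental model for the first two tools is a searchable store of past messages and summaries. The sketch below is an assumption-laden illustration of that shape, not the plugin's real interface:

```python
import re


class MemoryStore:
    """Toy stand-in for compacted session history (hypothetical structure)."""

    def __init__(self):
        self.entries = []  # list of (entry_id, summary_or_message_text)

    def add(self, entry_id, text):
        self.entries.append((entry_id, text))

    def lcm_grep(self, pattern):
        """Search across all stored messages and summaries, grep-style."""
        rx = re.compile(pattern)
        return [eid for eid, text in self.entries if rx.search(text)]

    def lcm_describe(self, entry_id):
        """Inspect a single stored summary by id; None if unknown."""
        for eid, text in self.entries:
            if eid == entry_id:
                return text
        return None
```

`lcm_expand_query` would sit on top of something like this, handing a sub-agent the store and a question so the main agent's context stays small.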

3

It solves the large-file context problem

When someone pastes a big file into a conversation, it can eat most of the context window in one shot. Lossless Claw intercepts files over a configurable threshold (default 25k tokens), stores them on disk, and replaces them with a compact exploration summary. The full file is always accessible. It just does not burn your context window.
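The interception step can be sketched as follows. The 25k-token default comes from the text above; the helper names, the disk-store shape, and the 4-characters-per-token heuristic are assumptions for illustration:

```python
THRESHOLD_TOKENS = 25_000  # configurable default mentioned in the text


def approx_tokens(text):
    # Crude heuristic (~4 chars per token), purely for this sketch.
    return len(text) // 4


class DiskStore:
    """Toy on-disk store stand-in: the full file stays retrievable."""

    def __init__(self):
        self.files = {}

    def save(self, text):
        ref = f"file-{len(self.files)}"
        self.files[ref] = text
        return ref


def intercept(message, store):
    """Replace oversized pasted content with a compact stub."""
    if approx_tokens(message) <= THRESHOLD_TOKENS:
        return message  # small enough: keep it inline
    ref = store.save(message)  # full content persisted, not lost
    preview = message[:200]
    return f"[stored file {ref}: ~{approx_tokens(message)} tokens; preview: {preview!r}]"
```

The stub costs a few dozen tokens instead of tens of thousands, while the agent can still pull the full file back from the store on demand.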

The compounding effect

Without Lossless Claw, long sessions degrade. The model forgets what it did 30 minutes ago. With it, sessions can run for hours and the agent still has access to everything it has ever seen. That is the difference between a tool that helps you code and a tool that thinks alongside you.
