Claude built all four games from the same two one-line prompts: "make a space shooter" and "make a racing game." The only difference: the "After" versions were built with APOLLO knowledge delivered over MCP (the Model Context Protocol).
phaser-fn-phaser-physics-arcade-world-wrapobject (screen-edge wrapping)
phaser-fn-phaser-physics-arcade-components-angular-setangularvelocity (turn rate)
phaser-fn-phaser-physics-arcade-image-setdrag (coasting deceleration)
phaser-prop-phaser-physics-arcade-body-angle (velocity direction)
gamemaker:image_angle (point_direction pattern)
reshade-ceejay-crt (scanline + curvature uniforms)
500+ Shadertoy particle simulations
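The Phaser items above read like an Asteroids-style movement recipe: screen wrapping, angular velocity for turning, drag for coasting. As a rough illustration only (not the code Claude generated), here is how those Arcade Physics calls typically combine, assuming Phaser 3.x with TypeScript; the 'ship' texture key and every tuning number are placeholders.

```ts
import Phaser from 'phaser';

// Minimal sketch, assuming Phaser 3.x + TypeScript. The 'ship' texture key and
// all tuning values are placeholders, not values taken from APOLLO or the demo.
class ShooterScene extends Phaser.Scene {
  private ship!: Phaser.Types.Physics.Arcade.ImageWithDynamicBody;
  private cursors!: Phaser.Types.Input.Keyboard.CursorKeys;

  create(): void {
    this.ship = this.physics.add.image(400, 300, 'ship');
    this.ship.setDamping(true);   // exponential slow-down instead of linear
    this.ship.setDrag(0.99);      // setdrag: coasting bleeds off speed gradually
    this.ship.setMaxVelocity(220);
    this.cursors = this.input.keyboard!.createCursorKeys();
  }

  update(): void {
    // setangularvelocity: turn the ship while left/right is held
    if (this.cursors.left.isDown) {
      this.ship.setAngularVelocity(-240);
    } else if (this.cursors.right.isDown) {
      this.ship.setAngularVelocity(240);
    } else {
      this.ship.setAngularVelocity(0);
    }

    // Thrust along the ship's facing; drag takes over when the key is released
    if (this.cursors.up.isDown) {
      this.physics.velocityFromRotation(this.ship.rotation, 250, this.ship.body.acceleration);
    } else {
      this.ship.setAcceleration(0);
    }

    // world wrap pattern: fly off one edge, reappear on the opposite one
    this.physics.world.wrap(this.ship, 32);
  }
}
```

The CRT and particle entries cover presentation (scanline overlay, explosion effects) rather than movement, so they are left out of the sketch.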
phaser-fn-phaser-physics-arcade-image-setdrag (deceleration curves)
phaser-prop-phaser-physics-arcade-body-angle (velocity-angle relationship)
gamemaker:collision_rectangle (precise AABB detection)
reshade-ceejay-crt (scanline uniforms for retro overlay)
Shadertoy particle simulations (explosion + skid systems)
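Again as an illustration rather than the generated code: the racing-game items map onto a handful of Arcade Physics calls. setDrag handles off-throttle deceleration, body.angle (the velocity direction, in radians) compared against the sprite's rotation gives a drift angle for skid effects, and a rectangle-overlap test plays the role of GameMaker's collision_rectangle. The texture keys, thresholds, and the spawnSkidMarks helper below are hypothetical.

```ts
import Phaser from 'phaser';

// Minimal sketch, assuming Phaser 3.x + TypeScript. 'car', 'wall', the drift
// threshold, and spawnSkidMarks() are hypothetical placeholders.
class RacingScene extends Phaser.Scene {
  private car!: Phaser.Types.Physics.Arcade.ImageWithDynamicBody;
  private wall!: Phaser.GameObjects.Image;

  create(): void {
    this.car = this.physics.add.image(200, 400, 'car');
    this.car.setDrag(300);        // setdrag: the car coasts to a stop off-throttle
    this.car.setMaxVelocity(400);
    this.wall = this.add.image(600, 400, 'wall');
  }

  update(): void {
    const body = this.car.body;

    // body.angle is the direction the car is actually moving (radians).
    // Comparing it to the sprite's rotation gives a drift angle that can
    // gate skid marks or tyre-smoke particles.
    if (body.speed > 10) {
      const drift = Phaser.Math.Angle.Wrap(this.car.rotation - body.angle);
      if (Math.abs(drift) > 0.35) {
        this.spawnSkidMarks(this.car.x, this.car.y);
      }
    }

    // AABB overlap, standing in for GameMaker's collision_rectangle
    const hit = Phaser.Geom.Intersects.RectangleToRectangle(
      this.car.getBounds(),
      this.wall.getBounds()
    );
    if (hit) {
      // Crude response: halve the speed and push back the way we came
      this.car.setVelocity(body.velocity.x * -0.5, body.velocity.y * -0.5);
    }
  }

  private spawnSkidMarks(x: number, y: number): void {
    // Placeholder: a real build would emit particles or draw tyre marks here.
  }
}
```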
Claude built all four games with virtually no instruction beyond those one-line prompts. Before and after: same AI, same prompts, same developer. The only variable was APOLLO's knowledge database, delivered through MCP integration.
Patterns from Phaser, GameMaker, and shader code informed physics, collision, visual effects, and game feel. APOLLO didn't write the code — it provided the knowledge that made better code possible. This is what machine-learning-backed context looks like in practice.
Now imagine this for medical drug interactions. Clinical guidelines. Emergency protocols.
The knowledge layer changes everything.