Build TypeScript apps in VS Code without AI code rot.

Evidype is an AI coding agent for VS Code. You describe what you want; it edits files, writes specs and tests, runs the EvidyTS compiler, and fixes failures through a stricter TypeScript loop.

It is slower than vibe-coding a toy app, but built for projects that keep growing. The difference is that generated code is forced through stricter TypeScript rules, so the project stays easier to extend after the first demo.

Free Alpha. Bring your own OpenAI or OpenRouter access. Evidype is the installable VS Code product; the EvidyTS compiler is the open-source pressure layer behind it.

33-second demo: adds a 2-operator FM mode, hits an EvidyTS compile failure, and repairs it.

The problem

AI generates fast. Humans clean up the mess.

Ordinary languages let AI sprawl into oversized files, weak boundaries, duplicated logic, and test debt. The human becomes the permanent QA department.

The answer

Make the language stricter where AI tends to fail.

EvidyTS keeps familiar TypeScript syntax, then adds deterministic rules for structure, naming, specs, paired tests, and behavioral verification.

AI can handle stricter engineering.

In higher-assurance software, extra rules are normal: explicit intent, restricted subsets, stronger contracts, and less room for ambiguous shortcuts. That instinct is already familiar in ecosystems such as SPARK/Ada and MISRA-style C/C++.

The LLL language-family idea brings that tradeoff into AI-native development. Humans hate ceremony when they have to type every line. Models do not care. They do not get bored, tired, or resentful about writing the extra structure reliability demands.

Extra rigor is cheap for AI.

If the main code producer can generate as much structure as needed, the language should demand more of it: required @Spec, explicit return types, paired scenarios, browser-verified behavior, and a guardrail profile that removes bug-prone shorthand.
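The exact EvidyTS annotation syntax is not shown on this page, so the sketch below is illustrative only: the @Spec marker appears as a doc tag, and the function and scenario names are made up. The shape it demonstrates is the one the text describes: a required spec, an explicit return type, and a paired scenario that exercises the stated behavior.

```typescript
// Illustrative sketch only; actual EvidyTS syntax may differ.
// Shape enforced: explicit spec, explicit return type, paired scenario.

/** @Spec Mixes a carrier with a depth-scaled modulator; result is clamped to [-1, 1]. */
function mixOperators(carrier: number, modulator: number, depth: number): number {
  const mixed = carrier + modulator * depth;
  // Clamp so the mixed signal stays inside the audio range.
  return Math.max(-1, Math.min(1, mixed));
}

/** Companion scenario paired with mixOperators, as a spec-driven compiler would require. */
function mixOperatorsScenario(): void {
  if (mixOperators(0.5, 0.75, 1) !== 1) throw new Error("expected clamp to 1");
  if (mixOperators(0.25, 0.25, 0.5) !== 0.375) throw new Error("expected 0.375");
}
```

For a human, writing the scenario alongside every function is ceremony; for a model that generates both in one pass, it is nearly free.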

No Evidype = AI project decay

The first days feel magical. The cleanup bill arrives later.

The first week of developing a new project with AI often looks incredible: features land fast, demos look polished, and progress feels effortless. Then the codebase grows. If nobody keeps structure, rules, and tests under control, the project starts to rot.

Duplication spreads. Boundaries blur. Each new feature creates regressions somewhere else. Eventually the model is not failing because the prompt is bad. It is failing because the codebase has become too messy to reason about cleanly.

Without pressure

AI keeps stacking code on top of unresolved mess. The project gets harder to understand, harder to test, and harder to extend.

With Evidype

The compiler keeps applying pressure while the system is still growing: structure rules, required specs, companion scenarios, and behavioral verification.

Debt made visible

The current toolchain already reports coverage debt as a percentage and turns it into a compile failure once it crosses the allowed threshold. The point is simple: do not let quality debt stay invisible until the project collapses.
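The mechanism can be sketched in a few lines. This is not EvidyTS's actual implementation or report format; the interface and function names here are hypothetical, and only the idea is from the text: express debt as a percentage and turn it into a hard failure past an allowed threshold.

```typescript
// Hypothetical sketch of coverage-debt gating, not the real EvidyTS code.

interface CoverageReport {
  totalExports: number;   // public functions the compiler found
  coveredExports: number; // those with a companion scenario
}

function coverageDebtPercent(report: CoverageReport): number {
  if (report.totalExports === 0) return 0;
  return (100 * (report.totalExports - report.coveredExports)) / report.totalExports;
}

// Returns an error message (i.e. a compile failure) once debt crosses the threshold.
function checkDebt(report: CoverageReport, allowedPercent: number): string | null {
  const debt = coverageDebtPercent(report);
  return debt > allowedPercent
    ? `coverage debt ${debt.toFixed(1)}% exceeds allowed ${allowedPercent}%`
    : null;
}
```

Because the check runs in the compiler rather than in an optional CI step, the debt number is in front of the developer on every build.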

Wrong benchmark

A tiny demo is often the wrong first test.

If you try Evidype on a toy app, it can look slower rather than better. It still has to follow the rules, write the specs, generate the companion tests, and run the compiler, while the final result may look similar to ordinary AI-generated code.

The difference shows up when the project keeps growing. That is where stricter structure and verification help the system stay coherent across many changes instead of collapsing into endless bug-fixing loops.