Skip to content
Independent AI Research Lab

We study how language models fail, mislead, and break.

$pip install lintlang
Explore Research

Accepting 3 design partners for Q2 2026

1,500+
controlled evaluations
6
research domains
3
patent filings

Built in the open

Every tool Apache 2.0. Every experiment reproducible. Every fix contributed upstream.

18
PRs merged
$0
to lint your AI stack
LintLang · Apache 2.0
Try it →
5
Open-source projects
See all →

What we investigate

Six active research domains studying how language models behave under constraint, fail under pressure, and mislead under confidence.

NewRead our first whitepaper — how LLMs systematically discount null results across matched scientific vignettes

LLM Failure States

AI systems present false certainty as confidence. We taxonomize how models fail: scalar confidence inflation, validation loops that mask errors, systematic discounting of null findings. The goal is to identify unreliable outputs before they reach production.

Read the technical brief

Evaluation & Attribution

Models misattribute sources, fabricate citations, and apply asymmetric evidential standards to positive vs null claims. We map where AI decision support is stable and where it produces conflicting or unreliable guidance.

Read the whitepaper

Behavioral Analysis & Auditability

LLMs don't process all inputs uniformly. Different linguistic patterns activate different behavioral modes. We study how models shift between these modes — and how repeated exposure to certain patterns creates stable behavioral attractors that resist correction.

Epistemic & Hermeneutic AI Research

Your AI doesn't follow instructions. It interprets them. And when it interprets them wrong, it doesn't flag uncertainty. It acts with full confidence. We study the structured relationship between how models parse ambiguity and how they represent certainty: where interpretation fails, where confidence misleads, and why these are the same problem. The liability and safety questions that define enterprise AI deployment today begin here.

Safety & Reliability

From behavioral prompt injection detection to structured prompting middleware, we build and validate infrastructure that makes AI systems production-safe — even on modest hardware.

Linguistic Infrastructure

Linguistic framing shapes model behavior in measurable, systematic ways. We study how prompt structure, word choice, and contextual framing alter model outputs. This is the programmable layer between user intent and model response — one most teams treat as invisible.

We found the failure modes. Then we built the tools.

Research-backed. Patent pending. Open-source and enterprise.

LintLang

Catch language bugs before your agents do

6language dimensions scored

Lints tool descriptions, system prompts, schemas, and configs across your AI stack. Catches the language bugs that make models pick wrong tools, ignore instructions, or break structured output. Zero LLM calls. Runs in CI.

Little Canary

Prompt injection detection

99%detection on 400 human-written attacks

A 1.5B parameter sentinel that catches what regex can't. Feeds raw user input to a sandboxed canary model and watches for behavioral compromise, persona shift, instruction leakage, refusal collapse. The vulnerability IS the detection signal.

Suy Sideguy

Runtime agent containment

3verdicts: SAFE · FLAGGED · KILLED

Runtime safety guard for autonomous agents. Watches process, file, and network behavior against your policy. Flags violations. Terminates severe ones. Generates incident-ready forensic reports. Pairs with Little Canary for defense in depth.

QuickGate

CI quality gate

2SDKs: TypeScript + Python

CLI quality gate that catches bugs before they ship. Lint, typecheck, build, and Lighthouse in a single command. Available for TypeScript and Python.

SignalID

Stateless user identification

0conversations stored · patent pending

Every session with your AI starts from zero. Your users notice. SignalID gives your AI a working memory of each user, so every conversation feels like a continuation. We store the signal, not the content. Full personalization. Nothing to leak.

QuickThink

Local model reliability

1.5Bparameter models, production-ready

Middleware that makes small local models (1B–3B) reliable enough for production tasks. Bridges the gap between cloud API costs and local model quality.

Rolando Bosch

Who's behind this

Rolando Bosch

Founder · Hermes Labs

I started Hermes Labs because I kept watching engineering teams deploy language models without understanding how they fail. Not edge cases, structural failure modes that show up across every current-generation LLM: hermeneutic drift, sycophancy, asymmetric skepticism, liability hedging, and prompt compliance failures.

My background is uncommon in this space: philosophy of language (Wittgenstein, Gadamer, Heidegger) applied to AI systems. The traditions that spent centuries studying how meaning breaks down have a lot to say about why LLMs behave the way they do. Before Hermes Labs, I built AI Ching — a reflective tool using classical Chinese symbols and maieutic dialogue, where insights emerge from the user, not the AI — and LucidiGPT, a language-as-UX project showing how different linguistic scaffolds produce distinct synthetic personas and cognitive frameworks in LLM systems. That work led here.

Three patent filings (US 19/248,833, US Prov. 63/984,697, 63/987,830). 18 contributions merged into major open-source repositories (LangChain, React Router, Nuxt, PyTorch Ignite, MobX, Cloudflare, Microsoft, Optuna, ngrx). Everything we ship is Apache 2.0.

Tell us what you're working on

Research collaborations, pilot programs, or diagnostic engagements. Tell us where you're stuck and we'll tell you what we've found.