There is a particular kind of frustration that comes from watching a capable system fail for a dumb reason. Not a conceptual failure, not a misunderstanding of physics or a flawed algorithm, but a typo. A misspelled unit name. A dimension mismatch that could have been caught, and explained, the moment it was detected.
ucon 0.7.0 is a milestone in the realization of dimensional analysis as infrastructure.
It is about eliminating whole classes of unit-handling failure, especially for AI agents.
But to understand why this matters, and where the library is headed, it helps to trace the line of thinking that got me here.
The thesis
ucon started from a premise that most unit libraries don't share: unit conversion is not a lookup problem.
It is an algebra.
The distinction matters.
A lookup table maps "meter" to "foot" with a stored constant.
An algebra represents the structure of the relationship: that length and time compose into velocity, that velocity and time compose back into length, and that at no point in this chain does anyone accidentally add meters to seconds.
Most Python unit libraries occupy a middle ground. They decorate numbers with unit labels and check those labels at arithmetic boundaries. This works until it doesn't: until someone builds a composite unit that the library hasn't seen before, or combines quantities across systems that use different dimensional bases, or needs to propagate uncertainty through a nonlinear conversion.
ucon chose to build from first principles: a Vector class representing exponent tuples over the SI base dimensions, a Dimension enum that gives those tuples identity, and a clean separation between Unit (an atomic symbol), UnitFactor (a unit paired with a scale), and UnitProduct (a composite expression with exponent tracking).
Closure under multiplication and division at every algebraic layer is not incidental: it is the reason the system works.
When you divide a Number in meters by a Number in seconds, the result is a Number whose unit is a UnitProduct with the correct derived dimension, not a string "m/s" that some downstream function might or might not recognize.
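The idea can be shown in miniature. The following is a toy stand-in, not ucon's actual classes: a dimension is just a tuple of exponents, and multiplication and division act on the exponents, so the result of any operation is always another valid dimension.

```python
from fractions import Fraction

# Toy version of the exponent-vector idea (not ucon's actual API):
# a dimension is a tuple of exponents over (length, mass, time).
def dim_mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

def dim_div(a, b):
    return tuple(x - y for x, y in zip(a, b))

LENGTH = (Fraction(1), Fraction(0), Fraction(0))
TIME   = (Fraction(0), Fraction(0), Fraction(1))

velocity = dim_div(LENGTH, TIME)           # length / time -> (1, 0, -1)
assert dim_mul(velocity, TIME) == LENGTH   # closure: v * t is a length again
```

Because every operation returns another exponent tuple, there is no point at which an unrecognized string can enter the system.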
This foundation was the work of the earliest releases, and everything since has been an exercise in building upward from there.
From algebra to conversion
An algebra that cannot convert between representations is a curiosity, not a tool.
The conversion engine introduced a Map hierarchy (LinearMap for simple scaling, AffineMap for offset conversions like Celsius to Fahrenheit, and ComposedMap for chaining) wired into a ConversionGraph with BFS path finding.
The design decision that matters here is that maps are composable mathematical objects, not lookup entries.
They compose via the @ operator.
They have inverses.
They have derivatives, a property that seems academic until you need to propagate uncertainty through a unit conversion, at which point it becomes essential.
An affine map's derivative is constant (its slope), but a logarithmic map's derivative depends on the input value.
Building the derivative into the abstract interface from the beginning meant that uncertainty propagation, when it arrived later, required no retrofit.
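A toy affine map exhibits all three properties: it is callable, it composes via @, and it knows its own inverse and derivative. This is a sketch of the idea, not ucon's classes:

```python
# Sketch of the Map idea (simplified; not ucon's actual classes).
class AffineMap:
    """y = slope * x + offset, e.g. Celsius -> Fahrenheit."""
    def __init__(self, slope, offset=0.0):
        self.slope, self.offset = slope, offset

    def __call__(self, x):
        return self.slope * x + self.offset

    def __matmul__(self, other):
        # (self @ other)(x) == self(other(x)): composition stays affine.
        return AffineMap(self.slope * other.slope,
                         self.slope * other.offset + self.offset)

    def inverse(self):
        return AffineMap(1.0 / self.slope, -self.offset / self.slope)

    def derivative(self, x=None):
        # Constant for affine maps; a logarithmic map's would depend on x.
        return self.slope

c_to_f = AffineMap(9 / 5, 32.0)
assert c_to_f(100.0) == 212.0
assert round(c_to_f.inverse()(212.0), 9) == 100.0
```

The point of putting derivative on the abstract interface is exactly the one made above: uncertainty propagation later becomes a method call rather than a retrofit.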
The graph itself respects the algebraic structure beneath it.
Edges connect units within a dimension.
Cross-dimension conversions are rejected.
The subtle but important case of "pseudo-dimensions" (like angles and ratios that share the same zero-dimensional vector but must not interconvert) is handled through semantic isolation: radian and percent both live in zero-dimensional space, but the graph enforces that converting between them is an error.
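A toy model of that isolation (hypothetical names, not ucon's API): identity lives in an enum, not in the exponent vector, so two zero-dimensional units can still refuse to interconvert.

```python
import math
from enum import Enum

# Hypothetical sketch: pseudo-dimensions share the zero-exponent
# vector but carry distinct enum identity.
class PseudoDim(Enum):
    ANGLE = "angle"
    RATIO = "ratio"

UNITS = {
    "radian":  (PseudoDim.ANGLE, 1.0),
    "degree":  (PseudoDim.ANGLE, math.pi / 180),
    "percent": (PseudoDim.RATIO, 0.01),
}

def convert(value, src, dst):
    sdim, sscale = UNITS[src]
    ddim, dscale = UNITS[dst]
    # Both angles and ratios are dimensionless, but identity must match.
    if sdim is not ddim:
        raise ValueError(f"cannot convert {src} ({sdim.name}) to {dst} ({ddim.name})")
    return value * sscale / dscale
```

With this structure, radian-to-degree works and radian-to-percent fails loudly, even though both pairs have identical (zero) dimensional exponents.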
Completing the model
Three extensions filled in the dimensional model.
Dimensionless quantities (e.g. angles, solid angles, ratios) got their own pseudo-dimensions with full unit coverage: degrees, gradians, arcminutes for angles; percent, permille, parts-per-million, basis points for ratios.
The nines unit for SRE availability notation (99.999% = 5 nines) arrived alongside LogMap and ExpMap, which formalize logarithmic conversions and lay groundwork for decibels and pH in a future release.
Uncertainty propagation gave Number an optional uncertainty field with quadrature arithmetic through both algebraic operations and conversions.
The infrastructure was already there: Map.derivative() had been part of the abstract interface since the conversion engine was built.
Adding uncertainty meant calling a method that already existed.
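First-order propagation through a map is one line of calculus: scale the input uncertainty by the magnitude of the map's derivative, and combine independent uncertainties in quadrature. A sketch of the idea (not ucon's API):

```python
import math

# Sketch of first-order uncertainty propagation (not ucon's API).
def propagate_through_map(value, sigma, f, f_prime):
    """sigma_out = |f'(x)| * sigma for a conversion map f."""
    return f(value), abs(f_prime(value)) * sigma

def add_in_quadrature(sigma_a, sigma_b):
    """Combined uncertainty of a sum of independent quantities."""
    return math.sqrt(sigma_a**2 + sigma_b**2)

# Linear km -> m conversion: the derivative is the constant scale 1000.
v, s = propagate_through_map(2.0, 0.1, lambda x: 1000 * x, lambda x: 1000)
assert (v, s) == (2000.0, 100.0)
```

For a nonlinear map (logarithmic, say) the same two lines still work; only f_prime stops being constant, which is why the derivative has to be queried at the input value rather than stored as a number.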
BasisTransform and UnitSystem opened the door to cross-system conversions using exact matrix arithmetic over Fraction exponents.
A UnitSystem names a mapping from dimensions to base units.
A BasisTransform relates one system's dimensional basis to another.
RebasedUnit preserves the provenance of cross-system conversions so you can trace how they were derived.
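Under the hood this is exact linear algebra over rational exponents. A sketch, with a made-up 2x2 transform standing in for a real basis pair:

```python
from fractions import Fraction

# Sketch: re-expressing a dimensional exponent vector in another basis
# via exact rational matrix-vector arithmetic (not ucon's API; the
# transform below is an illustration, not a real system pair).
def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

# Hypothetical change-of-basis between two 2-dimensional bases.
T = [[Fraction(1), Fraction(0)],
     [Fraction(0), Fraction(1, 2)]]

exponents = [Fraction(1), Fraction(2)]
rebased = mat_vec(T, exponents)
assert rebased == [Fraction(1), Fraction(1)]
```

Using Fraction rather than float means a round trip through the inverse transform recovers the original exponents exactly, with no accumulated error.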
Each of these extensions was possible because the algebraic foundation was correct.
Pseudo-dimensions work because Dimension is an enum with identity semantics, not just a vector.
Uncertainty works because maps carry derivatives.
Cross-system conversion works because Vector uses exact Fraction arithmetic.
None of this was accidental.
Reaching outward
With the internal model complete, the focus shifted to integration.
Pydantic v2 support let Number instances live inside data models with full JSON serialization — parse a {"quantity": 5, "unit": "km"} dict, get a validated Number back, serialize it without loss.
The unit string parser handles both Unicode (m/s²) and ASCII (m/s^2) notation.
This is the kind of feature that converts a library from "interesting" to "usable in production."
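The Unicode-to-ASCII normalization step of such a parser can be sketched in a few lines (this illustrates the idea, not ucon's implementation):

```python
import re

SUP = "⁰¹²³⁴⁵⁶⁷⁸⁹⁻"
TRANS = str.maketrans(SUP, "0123456789-")

def normalize(expr: str) -> str:
    # Insert a caret before each run of superscript characters, then
    # translate the superscripts to ASCII digits: 'm/s²' -> 'm/s^2'.
    return re.sub(f"[{SUP}]+", lambda m: "^" + m.group().translate(TRANS), expr)

assert normalize("m/s²") == "m/s^2"
assert normalize("kg·m⁻²") == "kg·m^-2"
```

Once both notations collapse to one canonical form, the rest of the parser only has to handle a single grammar.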
But the bigger move was the MCP server.
MCP, the Model Context Protocol, is how AI agents call external tools.
ucon's MCP server currently exposes convert, list_units, list_scales, list_dimensions, and check_dimensions as tools that any MCP-compatible agent can invoke.
The insight behind this is not complicated: dimensional analysis is exactly the kind of reasoning that language models are unreliable at but that structured tools are excellent at.
An LLM that calls convert(100, "km/h", "m/s") and gets back a validated result is strictly more trustworthy than one that attempts the arithmetic on its own.
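For the record, the arithmetic behind that call is plain factor-label cancellation; what the tool adds is the guarantee that the km and h factors actually cancel rather than being assumed to:

```python
# 100 km/h -> m/s: multiply by 1000 m per km, divide by 3600 s per h.
result = 100 * 1000 / 3600
assert round(result, 2) == 27.78
```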
Dimensional type safety (Number[Dimension.length] generics and the @enforce_dimensions decorator) completed the type-directed validation story.
Domain authors can annotate function parameters with dimensional constraints that are checked at call time.
The machinery is simple: the decorator precomputes which parameters carry DimConstraint annotations and checks them against the actual Number arguments on each call.
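A minimal sketch of that machinery, with stand-in DimConstraint and Number classes (not ucon's real ones):

```python
import functools
import inspect

# Stand-ins for illustration; not ucon's actual classes.
class DimConstraint:
    def __init__(self, dimension):
        self.dimension = dimension

class Number:
    def __init__(self, quantity, dimension):
        self.quantity, self.dimension = quantity, dimension

def enforce_dimensions(fn):
    sig = inspect.signature(fn)
    # Precompute which parameters carry a DimConstraint annotation.
    constrained = {name: p.annotation for name, p in sig.parameters.items()
                   if isinstance(p.annotation, DimConstraint)}

    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, constraint in constrained.items():
            value = bound.arguments.get(name)
            if value is not None and value.dimension != constraint.dimension:
                raise TypeError(f"{name} must be {constraint.dimension}, "
                                f"got {value.dimension}")
        return fn(*args, **kwargs)
    return wrapper

@enforce_dimensions
def travel(speed: DimConstraint("velocity"), t: DimConstraint("time")):
    return speed.quantity * t.quantity

assert travel(Number(10, "velocity"), Number(2, "time")) == 20
```

The cost is one signature inspection at decoration time and a dictionary walk per call, which is why the check can run unconditionally.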
The self-correction problem
All of which brings us to 0.7.0 and the question it answers: what happens when the agent gets it wrong?
Prior to this release, if an agent called convert(100, "kilmeter", "mile"), ucon's MCP server raised an exception.
The agent saw a raw traceback — or worse, a generic error message with no actionable content.
Maybe it retried with the same misspelling.
Maybe it hallucinated an answer instead.
The failure mode was opaque and unrecoverable in a single turn.
0.7.0 replaces raw exceptions with ConversionError, a Pydantic BaseModel that serves as a structured diagnostic protocol.
Every error now carries a human-readable description, an error type ("unknown_unit", "dimension_mismatch", or "no_conversion_path"), the parameter that caused the failure, and, crucially, two tiers of suggestion.
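The shape of that protocol is roughly the following. In ucon it is a Pydantic BaseModel; it is sketched here with a stdlib dataclass, and the field names are approximations inferred from the description above, not copied from ucon's source:

```python
from dataclasses import dataclass, field
from typing import Optional

# Approximate shape of the structured error (field names assumed).
@dataclass
class ConversionError:
    error_type: str                    # "unknown_unit" | "dimension_mismatch" | "no_conversion_path"
    message: str                       # human-readable description
    parameter: Optional[str] = None    # which argument caused the failure
    likely_fix: Optional[str] = None   # high-confidence mechanical correction
    hints: list = field(default_factory=list)  # lower-confidence alternatives

err = ConversionError(
    error_type="unknown_unit",
    message="Unknown unit 'kilmeter'",
    parameter="from_unit",
    likely_fix="kilometer",
)
```

Because the model serializes to JSON, an agent receives a machine-readable diagnosis rather than a stack trace.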
Guardrails
likely_fix is a high-confidence mechanical correction.
When fuzzy matching scores a candidate at 0.7 or above against the unit registry, with a clear gap of at least 0.1 to the next alternative, the system promotes it to likely_fix.
The agent can apply this without reasoning.
You typed "kilmeter" but meant "kilometer": just fix it and retry.
hints are lower-confidence suggestions.
When multiple candidates score similarly, or when the issue is structural rather than typographic, hints provide options for the agent to reason about.
The distinction matters because it lets the agent calibrate its retry strategy: a likely_fix means "apply and try again," a list of hints means "think about which of these you wanted."
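The tiering logic can be sketched with stdlib fuzzy matching. Here difflib stands in for whatever scorer ucon actually uses; the 0.7 and 0.1 thresholds are the ones quoted above:

```python
import difflib

# Sketch of confidence tiering (thresholds from the post; the
# matcher is stdlib difflib, not ucon's actual scorer).
def suggest(bad_unit, registry):
    scored = sorted(
        ((difflib.SequenceMatcher(None, bad_unit, u).ratio(), u) for u in registry),
        reverse=True,
    )
    best_score, best = scored[0]
    runner_up = scored[1][0] if len(scored) > 1 else 0.0
    if best_score >= 0.7 and best_score - runner_up >= 0.1:
        return best, []   # likely_fix: apply mechanically and retry
    return None, [u for s, u in scored[:3] if s >= 0.4]  # hints: reason first

fix, hints = suggest("kilmeter", ["kilometer", "millimeter", "mile"])
assert fix == "kilometer"
```

A strong, unambiguous match is promoted to likely_fix; anything murkier is demoted to hints, leaving the judgment call to the agent.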
For dimension mismatches, the error includes the dimension of each unit and a list of compatible units that the source dimension can actually convert to. For pseudo-dimension isolation, when someone tries to convert radians to percent, the error explains the isolation mechanism and offers workarounds specific to the pseudo-dimension involved.
The suggestion engine lives in its own module (ucon/mcp/suggestions.py), deliberately separated from the server so it can be tested independently.
It walks the ConversionGraph's edge structure to find compatible units, which means it only suggests units with actual conversion paths and not units that just happen to share a dimension.
The implementation is small. The design decision is not.
The trajectory toward 1.0
If you squint at the release history, a pattern emerges.
The earliest work established a correct algebra: the kind that can represent any physical quantity as a composition of base dimensions, and that enforces closure at every layer so operations never produce invalid results.
The conversion engine built a graph on top of that algebra, using composable mathematical maps with inverses and derivatives, so that every conversion is both reversible and differentiable.
The dimensional model was completed by handling the edge cases that most libraries ignore: dimensionless quantities with semantic isolation, cross-system basis transforms with exact arithmetic, uncertainty propagation through nonlinear conversions.
Integration brings the internal model into the ecosystems where it needs to live: pydantic for data validation, MCP for AI agents, type annotations for domain-specific constraint checking, with dataframe integration planned.
And now, with 0.7.0, the library has begun to address not just correctness but recoverability: the ability of a system to explain its failures in terms its caller can act on.
The remaining milestones extend this trajectory in natural directions.
A compute tool for multi-step factor-label chains will let agents run dimensional analysis with safety checking at each intermediate step, not just at the final result.
Schema-level dimension constraints will expose DimConstraint annotations directly in MCP tool schemas, so LLMs can validate inputs before making a call rather than discovering errors after.
String parsing like parse("60 mph") returning a Number will close the loop between human-readable input and the algebraic core.
Physical constants with CODATA uncertainties will make the library self-sufficient for physics and chemistry.
NumPy and Polars integration will bring unit safety to vectorized computation and tabular data, the domains where silent unit corruption does the most damage.
And v1.0 itself will not necessarily be a feature release. It is a commitment: API freeze, semantic versioning, comprehensive documentation, security review, and a two-year long-term support window. It is the point at which ucon declares its algebra, its conversion engine, its integration surface, and its error protocol stable enough to build on.
The post-1.0 vision follows from the same logic. It will include uncertainty correlation with full covariance tracking, Cython optimization, SQLAlchemy and protobuf integration, a symbolic bridge to SymPy. Each extension is possible because the foundation was built to support it.
The division of labor
ucon is built on a thesis that becomes more relevant the more AI agents interact with the physical world: if your system cannot explain its failures, it cannot recover from them.
Structured errors, confidence-tiered suggestions, and dimensional diagnostics are not features for human convenience; they are a protocol for machine self-correction.
An agent that receives a likely_fix can retry immediately.
An agent that receives hints can reason about alternatives.
An agent that receives a dimension_mismatch error with compatible unit lists can reformulate its query entirely.
None of this requires the agent to understand dimensional analysis. It only requires the agent to read a JSON response and follow a simple decision tree. The intelligence lives in the tool, not in the caller.
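That decision tree is small enough to write down. The error fields here are assumptions based on the description above, not ucon's exact wire format:

```python
# Sketch of the caller-side decision tree (error fields assumed
# from the post's description, not ucon's exact wire format).
def next_action(error: dict) -> str:
    if error.get("likely_fix"):
        return "retry"        # apply the mechanical correction and call again
    if error.get("error_type") == "dimension_mismatch":
        return "reformulate"  # pick from the compatible-units list
    if error.get("hints"):
        return "reason"       # choose among the candidates
    return "give_up"

assert next_action({"likely_fix": "kilometer"}) == "retry"
assert next_action({"error_type": "dimension_mismatch"}) == "reformulate"
```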
This is, I think, the right division of labor. Language models are general reasoners. Dimensional analysis is a specific, formally verifiable discipline. The intersection is a tool that speaks both languages; that validates physics and explains its validation in terms a language model can act on.
ucon 0.7.0 is the first, small step in that direction.
*ucon is open source under the Apache 2.0 license.
Install with pip install ucon.
Connect to your AI agent with ucon-mcp.
Source: github.com/withtwoemms/ucon.*