Unit converter


::: info
To download the Unit Converter: Mass, visit: Library
To download the Unit Converter: Volume, visit: Library
:::

In this article, one of our library developers, Tamás, walks through the development decisions behind the Unit converter building block app available in the library.

Why I Used an Object List Instead of a Table

Hi, I’m Tamás from Tulip.

When I built this unit converter component, one of the first architectural decisions I made was not to use a table.
Instead, I chose to store everything inside object lists initialized as default component variables.

The reasoning was simple: unit conversion in this context behaves like a mathematical constant, not operational data. The relationships are purely multiplicative, rarely change, and do not require runtime editing, aggregation, filtering, or relational queries.
Introducing a table would have meant:

  • creating and maintaining additional infrastructure,
  • configuring queries and aggregations,
  • adding another dependency layer to every app that uses the component.

That would have increased complexity without increasing capability.

So instead, I embedded the datasets directly inside the component:

  • Canonical base datasets:

    • _lengthToBaseDefaults
    • _massToBaseDefaults
    • _volumeToBaseDefaults
  • Synonym dictionaries:

    • _lengthUnitAliases
    • _massUnitAliases
    • _volumeUnitAliases

Both layers live as default variable values.
This keeps the component:

  • Portable
  • Self-contained
  • Deterministic
  • Easy to copy between apps

If the dataset behaves like a constant, I prefer protecting it inside the component rather than externalizing it.
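To make the shape of these embedded datasets concrete, here is a minimal sketch in Python. The field names (`unit`, `factorToBase`, `raw`, `canonical`) follow the component; the entries shown are an illustrative subset of the length domain using standard SI factors, not the component's full default data.

```python
# Illustrative shape of the two embedded dataset layers (length subset).
length_to_base_defaults = [
    {"unit": "mm", "factorToBase": 0.001},   # 1 mm = 0.001 m
    {"unit": "cm", "factorToBase": 0.01},    # 1 cm = 0.01 m
    {"unit": "m",  "factorToBase": 1.0},     # base unit
    {"unit": "km", "factorToBase": 1000.0},  # 1 km = 1000 m
]

length_unit_aliases = [
    {"raw": "mm",         "canonical": "mm"},
    {"raw": "millimeter", "canonical": "mm"},
    {"raw": "m",          "canonical": "m"},
    {"raw": "meter",      "canonical": "m"},
]
```

Because both lists are plain default values, copying the component copies the data with it; there is no external resource to migrate.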

Why Not a Table?

A table-based solution would introduce a structural dependency that this problem does not justify.

Unit definitions are not transactional data. They are structural definitions. Treating them like operational records would be disproportionate to the problem.

My goal here was architectural proportionality: use only as much infrastructure as the problem truly requires.

Design Principle: Protected Logic, Flexible Configuration

This component follows two core ideas:
1. Canonical base normalization
2. Input tolerance through synonym resolution

I intentionally separated mathematical truth from user flexibility.

  • The canonical base datasets define the mathematical relationships.
  • The synonym dictionaries define how flexibly written user input is interpreted.

Both layers are part of the design — not afterthoughts.

In the canonical base model, each unit is mapped to a single reference unit within its domain. Every conversion flows through that reference. This creates one normalization path and guarantees that all units share the same mathematical anchor.

Domain Base Units

I explicitly fixed the base unit per domain:

  • Length → m
  • Mass → kg
  • Volume → L

These were deliberate choices:

  • They are industry-standard.
  • They minimize rounding drift.
  • They keep the system intuitive for engineers.

Canonical Unit Keys (Supported Units)

All alias mappings must resolve to one of these canonical keys.

  • Mass (base: kg): µg, mg, g, kg, t, lb, oz
  • Length (base: m): mm, cm, m, km, in, ft, yd, mi
  • Volume (base: L): ml, cl, µl, mm3, cm3, l, m3, fl_oz_us, pt_us, qt_us, gal_us

Each domain has its own base dataset object list. Each entry defines a unit and its factorToBase.

The conversion always follows the same pattern:

  • Input → Base
  • Base → Target

No branching. No pair maintenance.
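A worked example of the two-step pattern (my own numbers, using standard SI length factors with m as the base unit):

```python
# Input -> Base -> Target, with illustrative standard factors.
factor_to_base = {"mm": 0.001, "m": 1.0, "km": 1000.0}  # base unit: m

value_mm = 2500.0
in_base = value_mm * factor_to_base["mm"]   # Input -> Base: ~2.5 m
result_km = in_base / factor_to_base["km"]  # Base -> Target: ~0.0025 km
```

Converting mm to km never needs a dedicated mm→km factor; both units only know their relationship to the base.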

Protected Internal Interface

To prevent accidental logic modification, I isolated the core variables:

  • _inputValue
  • _inputUnit
  • _targetUnit
  • _conversionResult

The core engine depends only on these.

Two thin configuration layers surround it:

  • 01 – USER CONFIG: INPUT STAGE maps external sources into internal variables.
  • 03 – USER CONFIG: OUTPUT STAGE routes the calculated result outward.

This keeps the computational core stable while allowing flexible integration.

Trigger Breakdown – How the MAGIC Works

The protected MAGIC trigger performs two logical operations:

  1. Resolve unit synonyms
  2. Apply canonical base normalization

Alias Resolution Layer

Each domain has a synonym dictionary, for example:

  • _lengthUnitAliases

Each object contains:

  • raw
  • canonical

Example:

{ "raw": "mm", "canonical": "mm" }
{ "raw": "millimeter", "canonical": "mm" }
{ "raw": "m³", "canonical": "m3" }

This allows multiple writing formats to map to a single canonical key.
Importantly, I did not duplicate entries inside the base dataset. The synonym layer exists purely to absorb input variability without multiplying mathematical definitions.

Canonical Base Formula

Once the canonical units are resolved, the formula is:

result = inputValue × factorToBase(canonicalInput)
         ÷ factorToBase(canonicalTarget)

Example Trigger (Length Domain)

(@Variable._inputValue
  * array_value_at_index(
      map_to_number_list(@Variable._lengthToBaseDefaults , 'factorToBase'),
      array_index_of(
        map_to_text_list(@Variable._lengthToBaseDefaults , 'unit'),
        array_value_at_index(
          map_to_text_list(@Variable._lengthUnitAliases , 'canonical'),
          array_index_of(
            map_to_text_list(@Variable._lengthUnitAliases , 'raw'),
            @Variable._inputUnit
          )
        )
      )
    )
)
/
array_value_at_index(
  map_to_number_list(@Variable._lengthToBaseDefaults , 'factorToBase'),
  array_index_of(
    map_to_text_list(@Variable._lengthToBaseDefaults , 'unit'),
    array_value_at_index(
      map_to_text_list(@Variable._lengthUnitAliases , 'canonical'),
      array_index_of(
        map_to_text_list(@Variable._lengthUnitAliases , 'raw'),
        @Variable._targetUnit
      )
    )
  )
)

Structurally, this is layered logic:

  • Resolve raw → canonical
  • Resolve canonical → factor
  • Normalize through base

It looks dense, but it is intentionally inline to keep the engine self-contained and eliminate intermediate state.
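For readers who prefer named intermediate steps, the same layered logic can be sketched in Python. The helper names and the small data subset are mine for illustration; the component itself keeps everything inline, as shown above.

```python
# Illustrative decomposition of the inline trigger expression.
length_defaults = [{"unit": "mm", "factorToBase": 0.001},
                   {"unit": "m",  "factorToBase": 1.0}]
length_aliases  = [{"raw": "mm", "canonical": "mm"},
                   {"raw": "millimeter", "canonical": "mm"},
                   {"raw": "m", "canonical": "m"}]

def factor_for(raw_unit, aliases, defaults):
    # Step 1: resolve raw -> canonical via the alias dictionary
    canonical = next((a["canonical"] for a in aliases
                      if a["raw"] == raw_unit), None)
    # Step 2: resolve canonical -> factorToBase via the base dataset
    return next((d["factorToBase"] for d in defaults
                 if d["unit"] == canonical), None)

def convert(value, input_unit, target_unit, aliases, defaults):
    f_in = factor_for(input_unit, aliases, defaults)
    f_out = factor_for(target_unit, aliases, defaults)
    if f_in is None or f_out is None:
        return None  # failed lookup: skip the calculation
    # Step 3: normalize through the base unit
    return value * f_in / f_out
```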

Why This Structure Was Chosen

I wanted three guarantees:

  • No pair explosion
  • No rounding drift from multiple paths
  • No trigger rewriting when extending units

The canonical base model gives me all three.
When I add a new unit, I add exactly one factorToBase entry. The engine does not change.
When I support a new writing format, I add exactly one alias entry. The math does not change.
That separation is the core architectural decision.

Multi-Domain Reuse: Same Engine, Different Datasets

Another deliberate decision: the engine logic is domain-agnostic.
Length, mass, and volume share the same structural trigger. Only the dataset variable changes.
Instead of building three different converters, I built one canonical engine and let the dataset define the domain.
This reduces cognitive load and keeps component behavior predictable.

Why This Scales

Scaling happens at the dataset level.
Adding units does not require:

  • modifying trigger expressions,
  • adding conditional branches,
  • duplicating logic.

The algorithm remains stable. The dataset evolves.

Extending the Component

Adding a New Canonical Unit

  • Extend the relevant _ToBaseDefaults dataset.
  • Add unit and factorToBase.
  • Leave the trigger untouched.

Adding a New Writing Format

  • Extend the relevant _UnitAliases dictionary.
  • Add raw and the corresponding canonical key.
  • Leave the base dataset untouched.

This was intentional: I wanted user-friendly flexibility without multiplying mathematical definitions.
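As a hypothetical example of both extension paths, here is what adding the nautical mile to the length domain would look like, sketched in Python (the unit itself is not in the component's supported set; 1 nmi = 1852 m by definition):

```python
# Hypothetical extension: only the datasets change, never the trigger.
length_to_base_defaults = [
    {"unit": "m", "factorToBase": 1.0},
    # ... existing entries ...
]
length_unit_aliases = [
    {"raw": "m", "canonical": "m"},
    # ... existing entries ...
]

# New canonical unit: exactly one factorToBase entry (1 nmi = 1852 m)
length_to_base_defaults.append({"unit": "nmi", "factorToBase": 1852.0})

# New writing format: exactly one alias entry per accepted spelling
length_unit_aliases.append({"raw": "nautical mile", "canonical": "nmi"})
```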

Failure Handling (Safe Lookup Pattern)

Two failure points exist:

  1. Raw unit not found in alias dictionary
  2. Canonical unit not found in base dataset

If either lookup fails, the calculation is not executed.

This guarantees:

  • No stale values
  • No silent miscalculations
  • Deterministic behavior
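The two failure points map directly onto two guards. A minimal sketch, assuming illustrative names and a mass-domain subset:

```python
# Safe lookup pattern: each failure point yields None, and None
# short-circuits the calculation instead of producing a wrong number.
aliases  = [{"raw": "kg", "canonical": "kg"},
            {"raw": "kilogram", "canonical": "kg"}]
defaults = [{"unit": "kg", "factorToBase": 1.0}]

def safe_factor(raw_unit):
    canonical = next((a["canonical"] for a in aliases
                      if a["raw"] == raw_unit), None)
    if canonical is None:
        return None  # failure 1: raw unit not in alias dictionary
    return next((d["factorToBase"] for d in defaults
                 if d["unit"] == canonical), None)  # failure 2 -> None
```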

Why Not Separate Arrays for Units and Factors?

Storing units and factors in separate arrays would create positional dependency risk.
If one list changes order and the other does not, the system silently breaks.
By storing unit and factorToBase together inside structured objects, I eliminated that structural fragility.
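The fragility is easy to demonstrate. In this sketch (my own example, not from the component), reordering one of two parallel lists silently corrupts a lookup, while paired objects cannot drift apart:

```python
# Fragile: two parallel lists coupled only by position.
units   = ["mm", "cm", "m"]
factors = [0.001, 0.01, 1.0]

# Someone sorts `units` without touching `factors`...
units.sort(reverse=True)           # ["mm", "m", "cm"]
wrong = factors[units.index("m")]  # 0.01 -- silently wrong

# Robust: unit and factor travel together in one object.
entries = [{"unit": "mm", "factorToBase": 0.001},
           {"unit": "cm", "factorToBase": 0.01},
           {"unit": "m",  "factorToBase": 1.0}]
right = next(e["factorToBase"] for e in entries if e["unit"] == "m")  # 1.0
```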

Core Philosophy

This component is intentionally simple in structure and strict in behavior.

  • The math is centralized.
  • The engine is protected.
  • Flexibility exists at the edges.

I designed it so the logic remains stable over time while the supported unit set can evolve safely.
The engine does not change. The dataset grows.

Get Involved

Join the community to share improvements, propose additional units, report issues, or discuss architectural decisions.