
Kedorion exists to build the intelligence layer between the digital and the physical. We develop AI systems that do not operate in the abstract — they operate in warehouses, on factory floors, on roads, and in operating theatres, where the cost of being wrong is not a bad prediction score but a real-world consequence.
Our work centres on a single question: how do you make a machine reliably understand the physical world it is actually in, not the one it was trained on? That question drives everything — our sensor fusion research, our physical intelligence middleware, our embodied reasoning systems.
We believe the next decade of AI progress is not about larger models. It is about grounding intelligence in physical reality, and building the infrastructure that makes that grounding trustworthy at scale.
Kedorion was founded on the conviction that physical AI — systems that perceive, reason about, and act in the real world — is both the hardest unsolved problem in applied AI and the most consequential one to get right.
We are a research-led company. Our teams work at the intersection of machine learning, robotics, sensor engineering, and control theory. We publish, we collaborate with universities, and we ship production systems to industrial partners.
We are based in Germany, operating across Europe. Our investors and partners share our long-term conviction: that reliable physical intelligence, built with rigour, will define the next generation of industrial and social infrastructure.


Our products are not demos. Every system we develop is designed from day one for deployment in environments we do not fully control — with varying lighting, unexpected obstacles, human co-workers, and hardware that ages in service.
We build perception stacks that produce accurate 3D world models in real time. We build physical intelligence layers that sit between planners and actuators and enforce physical feasibility. We build embodied reasoning systems that generalise from simulation to the real world without fragile fine-tuning.
The common thread is ground truth: every layer of our stack is designed to stay honest about what it knows, where it is uncertain, and when to stop and ask for help.