Openpilot 0.10.1 introduces the North Nevada Model, featuring major improvements to the World Model architecture. The system now infers six-degree-of-freedom (6-DoF) ego localization directly from camera images, removing the need for external localization inputs. This makes training data less over-constrained and opens the door to future self-generated imagery.
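For readers unfamiliar with the term, a 6-DoF ego pose is a 3D translation plus a 3D rotation. A minimal sketch of turning such a six-vector into a homogeneous transform, assuming a Z-Y-X Euler convention (the function name and conventions are illustrative, not openpilot's internals):

```python
import numpy as np

def pose_to_matrix(pose):
    """Convert a 6-DoF pose [x, y, z, roll, pitch, yaw]
    (meters, radians) into a 4x4 homogeneous transform.
    Euler convention: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    x, y, z, roll, pitch, yaw = pose
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    R = np.array([
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ])
    T = np.eye(4)
    T[:3, :3] = R   # rotation block
    T[:3, 3] = [x, y, z]  # translation column
    return T

# The identity pose maps to the identity transform
print(np.allclose(pose_to_matrix([0, 0, 0, 0, 0, 0]), np.eye(4)))
```

A model predicting ego motion would emit one such pose (or a relative pose per frame pair), which downstream consumers can compose as 4x4 matrices.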
To support this change, the autoencoder Compressor was upgraded with masked image modeling and switched from a CNN to a Vision Transformer architecture, and the World Model itself was scaled from 500 million to 1 billion parameters. All models now train on a much larger dataset of 2.5 million segments, up from 437,000, covering more vehicles, countries, and driving scenarios.
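Masked image modeling trains an encoder by hiding a random subset of image patches and asking the model to reconstruct them from the visible ones. A minimal sketch of the masking step, assuming illustrative patch size and mask ratio (not openpilot's actual values):

```python
import numpy as np

def mask_patches(image, patch=8, mask_ratio=0.75, seed=0):
    """Split an image into non-overlapping patches and zero out a
    random subset. In masked image modeling, the encoder sees only
    the visible patches and is trained to reconstruct the rest."""
    h, w, c = image.shape
    gh, gw = h // patch, w // patch
    n = gh * gw  # total number of patches
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n, size=int(n * mask_ratio), replace=False)
    mask = np.zeros(n, dtype=bool)
    mask[masked_idx] = True

    out = image.copy()
    for i in np.flatnonzero(mask):
        r, col = divmod(i, gw)
        out[r * patch:(r + 1) * patch, col * patch:(col + 1) * patch, :] = 0
    return out, mask

img = np.ones((32, 32, 3), dtype=np.float32)
masked, mask = mask_patches(img)
print(mask.sum(), "of", mask.size, "patches masked")  # 12 of 16 patches masked
```

With a Vision Transformer, the visible patches become the token sequence, which is one reason ViT backbones pair naturally with this pretraining objective.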
The UI has been completely rewritten, moving from Qt/Weston to Python with raylib. This removes about 10,000 lines of code, cuts boot time by 4 seconds, lowers GPU usage, and simplifies development.
Finally, the Driver Monitoring Model’s training infrastructure has been streamlined with dynamic data streaming, though the model’s functionality remains unchanged.
brucethemoose@lemmy.world 2 hours ago
Never knew this was a thing. Super cool, and how it should be; detached from the actual car.
Still, I’d be more interested in safety features, like blind spot detection or prebraking + beeping for potential accidents, than cruise control.