Comment on Rabbit R1 AI box revealed to just be an Android app
deafboy@lemmy.world 6 months ago
I’m confused by this revelation. What did everybody think the box was?
Theharpyeagle@lemmy.world 6 months ago
I think the issue is that people were expecting a custom (enough) OS, software, and firmware to justify asking $200 for a device that’s worse than a $150 phone in most every way.
w2tpmf@sh.itjust.works 6 months ago
I would expect bespoke software and OS in a $200 device to be way less impressive than what a multi billion dollar company develops.
fidodo@lemmy.world 6 months ago
I don’t know how much work they put into customizing it, but being derived from Android does not mean it isn’t custom. Ubuntu is derived from Debian; that doesn’t mean it isn’t a custom OS. The fact that you can run the apk on other Android devices isn’t a gotcha, either: you can run Ubuntu .deb files on other Debian derivatives too. An OS is more a curated collection of tools, and you shouldn’t be going out of your way to make applications for a derivative OS incompatible with other OSes derived from the same base distro.
GustavoFring@lemmy.world 6 months ago
The Rabbit OS is running server side.
WanderingCat@lemm.ee 6 months ago
Without thinking too hard about it, I would have expected some more custom hardware, with some on-device AI acceleration happening. For someone to go and purchase the device, it should have been more than just an Android app.
deafboy@lemmy.world 6 months ago
The best way to do on-device AI would still be a standard SoC. We tend to forget that these mass-produced mobile SoCs are modern miracles for the price, despite the crappy software and firmware support from the vendors.
No small startup is going to revolutionize this space unless some kind of new physics is discovered.
Buddahriffic@lemmy.world 6 months ago
I think the plausibility comes from the fact that a specialized AI chip could theoretically outperform a general-purpose chip by several orders of magnitude, at least for inference. And I don’t even think it would be difficult to convert a NN design into a chip, or that it would need to be made on a bleeding-edge node to get that much more performance. The trade-off would be that it can only run a single NN (or any NN the original could be adjusted to behave identically to; e.g., to remove a node, you could just adjust the weights so that it never triggers).
So I’d say it’s more accurate to put it as “the easiest/cheapest way to do an AI device is to use a standard SoC”, but the best way would be to design a custom chip for it.
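The node-removal trick mentioned above can be checked in a few lines. This is a minimal sketch (hypothetical network sizes, NumPy assumed): silencing a ReLU hidden unit by zeroing its incoming weights and bias is functionally identical to deleting that unit from the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Tiny 2-layer MLP: 4 inputs -> 5 hidden units (ReLU) -> 2 outputs.
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(2, 5)), rng.normal(size=2)

def forward(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

x = rng.normal(size=4)

# "Remove" hidden unit 3 by zeroing its incoming weights and bias:
# its ReLU activation is then always 0, so it never triggers.
W1z, b1z = W1.copy(), b1.copy()
W1z[3, :], b1z[3] = 0.0, 0.0

# Physically deleting the unit (drop row 3 of W1/b1, column 3 of W2)
# yields the identical function.
W1d, b1d = np.delete(W1, 3, axis=0), np.delete(b1, 3)
W2d = np.delete(W2, 3, axis=1)

silenced = forward(x, W1z, b1z, W2, b2)
deleted = forward(x, W1d, b1d, W2d, b2)
assert np.allclose(silenced, deleted)
```

Since a fixed-function chip could bake the architecture in, this kind of weight-level adjustment would be the only way to "change" the network after fabrication.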
AdrianTheFrog@lemmy.world 6 months ago
They’re not a chip manufacturer though, and modern phone processors are already fast enough to do near-real-time text generation and fast image generation (20 tokens/second with Llama 2, ~1 second for a distilled SD 1.5, on a Snapdragon 8 Gen 3).
Unfortunately, the cheapest phones with that processor seem to start around $650, while the Rabbit R1 costs $200 and uses a MediaTek Helio P35 from late 2018.
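As a back-of-envelope check on the throughput figure above: the 20 tokens/second number comes from the comment; the reply length is a hypothetical assumption.

```python
tokens_per_second = 20   # throughput quoted above (Llama 2, Snapdragon 8 Gen 3)
reply_tokens = 100       # hypothetical length of a short assistant reply

latency_s = reply_tokens / tokens_per_second
print(latency_s)  # 5.0 -> a ~5 second wait for a short reply
```

That puts a flagship phone SoC in "usable but not instant" territory for local text generation, which is well beyond what a 2018 budget chip like the Helio P35 could manage.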
fidodo@lemmy.world 6 months ago
The hardware seems very custom to me. The problem is that the device everyone carries is a massive superset of their custom hardware making it completely wasteful.
jol@discuss.tchncs.de 6 months ago
Custom hardware and software I guess?
AdrianTheFrog@lemmy.world 6 months ago
Qualcomm is listed as having $10 billion in yearly profits (Intel has ~$20B, Nvidia ~$80B), while the news articles I can find about Rabbit say it’s raised around $20 million in funding ($0.02 billion). It takes a lot of money to make decent custom chips.
anlumo@lemmy.world 6 months ago
Running the Spotify app and dozens of others on a custom software stack?
anlumo@lemmy.world 6 months ago
Same. As soon as I saw the list of apps they support, it was clear to me that they’re running Android. That’s the only way to provide that feature.
fidodo@lemmy.world 6 months ago
Isn’t Lemmy supposed to be tech savvy? What do people think the vast majority of Linux OSs are? They’re derivatives of a base distribution. Often they’re even derivatives of a derivative.
Did people think a startup was going to build an entire OS from scratch? What would even be the benefit of that? Deriving from Android is the right choice here. The R1 is dumb, but this is not why.
GustavoFring@lemmy.world 6 months ago
Most of the processing is done server side though.
lauha@lemmy.one 6 months ago
It could have been a local AI with some special AI chip not found in all Android phones, but since it runs in the cloud, privacy is a real problem.
casual_turtle_stew_enjoyer@sh.itjust.works 6 months ago
Magic
In all reality, it is a ChatGPTitty "fine"tune on some datasets they cobbled together for VQA and Android app UI driving. They did the initial test finetune, then apparently the CEO or whatever was drooling over it and said “lEt’S mAkE aN iOt DeViCe GuYs!!1!” after their paltry attempt to racketeer an NFT metaverse game.
Neither this nor Humane do any AI computation on device. It would be a stretch to say there’s even a possibility that the speech recognition could be client-side, as they are always-connected devices that are even more useless without Internet than they already are with.
Make no mistake: these money-hungry fucks are only selling you food cans labelled as magic beans. You have been warned and if you expect anything less from them then you only have your own dumbass to blame for trusting Silicon Valley.
lemann@lemmy.dbzer0.com 6 months ago
If the Humane could recognise speech on-device, and didn’t require its own data plan, I’d be reasonably interested, since I don’t really like using my phone for structuring my day.
I’d like a wearable that I can brain dump to, quickly check things on without needing to unlock my phone, and use to keep on top of my schedule. Sadly for me it looks like I’ll need to go the DIY route with an ESP32 board and an e-ink display, and drop any kind of STT + TTS plans.
casual_turtle_stew_enjoyer@sh.itjust.works 6 months ago
Latte Panda 2 or just wait a couple years. It’ll happen eventually because it’s so obvious it’s literally unpatentable.