Comment on Greener Alternative To Lithium-ion Battery

bernieecclestoned@sh.itjust.works 1 year ago

The battery uses a carbon electrode to store hydrogen that has been split from water, and then works as a hydrogen fuel cell to produce electricity.

BombOmOm@lemmy.world 1 year ago

What is the energy density of this? It sounds like a rebrand of a hydrogen fuel cell, which has some limited applications but has been supplanted by lithium-ion due to hydrogen’s low energy density and the poor round-trip efficiency of the electrolysis/fuel-cell combo.
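For reference, here are the textbook half-reactions behind that electrolysis/fuel-cell picture (assumed from standard water-splitting and fuel-cell chemistry, not taken from the article):

```latex
% Assumed textbook chemistry, not taken from the article.
% Charge: split water and store the protons in the carbon electrode;
% discharge: run the same protons through a fuel-cell reaction.
\begin{align*}
\text{charge:}\quad    & \mathrm{H_2O \to \tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^-} \\
                       & \mathrm{C + H^+ + e^- \to C{-}H_{(stored)}} \\
\text{discharge:}\quad & \mathrm{C{-}H_{(stored)} \to C + H^+ + e^-} \\
                       & \mathrm{\tfrac{1}{2}\,O_2 + 2\,H^+ + 2\,e^- \to H_2O}
\end{align*}
```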
bernieecclestoned@sh.itjust.works 1 year ago
The new proton battery has an energy density of 245 watt-hours per kilogram, nearly three times the energy density of the team’s 2018 prototype.
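A quick back-of-the-envelope check on those figures; the 2018 number is implied by the quote rather than stated:

```python
# The quote gives 245 Wh/kg and says it is "nearly three times" the
# 2018 prototype, so the earlier density is implied, not stated.
new_density_wh_per_kg = 245
implied_2018_density = new_density_wh_per_kg / 3
print(f"Implied 2018 prototype: ~{implied_2018_density:.0f} Wh/kg")  # ~82 Wh/kg
```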
Pretzilla@lemmy.world 1 year ago
Irrelevant to this discussion, though.
How does it compare to competing technologies?
bernieecclestoned@sh.itjust.works 1 year ago
LastYearsPumpkin@feddit.ch 1 year ago
Don’t use ChatGPT as a source; there is no reason to trust anything it says.
It might be right, it might have just thrown together words that sound right, or it might be completely made up.
metaStatic@kbin.social 1 year ago
It just guesses the next probable word. Literally everything it says is made up.
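A toy sketch of what “just guesses the next probable word” means; the vocabulary and probabilities here are invented for illustration:

```python
import random

# Toy "language model": all it knows is a probability distribution
# over possible next words, given the words so far.
next_word_probs = {
    ("the", "battery"): {"uses": 0.5, "stores": 0.3, "exploded": 0.2},
}

def predict_next(context):
    dist = next_word_probs[context]
    # Sample the next word in proportion to its probability;
    # nothing here checks whether the resulting sentence is true.
    return random.choices(list(dist), weights=list(dist.values()), k=1)[0]

print(predict_next(("the", "battery")))  # usually "uses", sometimes "exploded"
```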
thal3s@sh.itjust.works 1 year ago
“ChatGPT, please provide your rebuttal to this statement about you: […]”
TrenchcoatFullofBats@belfry.rip 1 year ago
You have just described how human brains work
thbb@kbin.social 1 year ago
Not really. We also have deductive capabilities (aka “system 2”) that let us establish some degree of proof for our statements.
sky@codesink.io 1 year ago
Right, and they’re actually pretty bad at remembering facts; that’s why we have entire institutions dedicated to maintaining accurate reference material!
Why people throw all of that out the window for advice from a dumb program, I’ll never understand.
8ender@lemmy.world 1 year ago
Words are how we communicate knowledge, so sometimes the most probable combinations of words end up being facts.
SkaveRat@discuss.tchncs.de 1 year ago
While it’s technically true that it “just predicts the next word”, that’s a very misleading argument to make.
Computers are also “just some basic logic gates”, and yet we can do complex stuff with them.
Complex behaviour can result from simple things (see the sketch below).
Not defending the bullshit that LLMs generate, just pointing out that you have to be careful with your arguments.
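To make that point concrete, here is a minimal sketch of complex behaviour from simple parts: a one-bit adder built from nothing but a NAND gate (the helper names are made up for illustration):

```python
# The only primitive: NAND. Everything else is built from it.
def nand(a, b):
    return 0 if (a and b) else 1

# Classic constructions of the other gates from NAND alone.
def not_(a):    return nand(a, a)
def and_(a, b): return not_(nand(a, b))
def or_(a, b):  return nand(not_(a), not_(b))
def xor_(a, b): return and_(or_(a, b), nand(a, b))

def full_adder(a, b, carry_in):
    """Add three bits; return (sum_bit, carry_out)."""
    s = xor_(xor_(a, b), carry_in)
    carry_out = or_(and_(a, b), and_(carry_in, xor_(a, b)))
    return s, carry_out

print(full_adder(1, 1, 1))  # (1, 1): 1 + 1 + 1 = 0b11
```

Chain 64 of these and you have the adder inside a CPU; the simplicity of the primitive says nothing about what the whole can do.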