Comment on Judge orders Anna’s Archive to delete scraped data; no one thinks it will comply
aeronmelon@lemmy.world 2 days ago
Hey judge, order AI companies to delete THEIR illegally-scraped data.
evol@lemmy.today 2 days ago
Rebrand into Anna AI!
db2@lemmy.world 2 days ago
Couldn’t they use that to argue that they’re being unjustly targeted because they’re smaller and easier to pick on?
SolacefromSilence@fedia.io 2 days ago
No one cares if they're small or unjustly picked on. If they want to beat the charges, they need to announce their own AI trained on the data.
tempest@lemmy.ca 2 days ago
It would make me laugh if they could train an LLM that could only regurgitate content verbatim
ilinamorato@lemmy.world 2 days ago
Well, it’s not an LLM, but “AI” doesn’t have a defined meaning, so from that perspective they kind of already did.
Dran_Arcana@lemmy.world 1 day ago
en.wikipedia.org/wiki/Markov_chain
Before the advent of AI, I wrote a Slack bot called slackbutt that built Markov chains with orders randomly chosen between 2 and 4 out of the chat history of the channel. It was surprisingly coherent. Making an “LLM” like that would be trivial.
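A minimal sketch of that kind of bot, assuming word-level chains built from one chat message per line; the function names and toy history are illustrative, not slackbutt’s actual code:

```python
import random
from collections import defaultdict

def train(lines, orders=(2, 3, 4)):
    """Map each (order, context tuple) to the list of words that followed it."""
    table = defaultdict(list)
    for line in lines:
        words = line.split()
        for n in orders:
            for i in range(len(words) - n):
                context = tuple(words[i:i + n])
                table[(n, context)].append(words[i + n])
    return table

def generate(table, max_words=30):
    """Start from a random context, then extend it one word at a time,
    switching between order 2, 3, and 4 at each step (as the bot did)."""
    if not table:
        return ""
    n, context = random.choice(list(table.keys()))
    out = list(context)
    for _ in range(max_words):
        n = random.choice((2, 3, 4))
        key = (n, tuple(out[-n:])) if len(out) >= n else None
        followers = table.get(key)
        if not followers:          # dead end in the chain: stop here
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Toy "chat history" stand-in
history = [
    "the judge ordered the archive to delete the data",
    "the archive will probably not delete the data",
]
model = train(history)
print(generate(model))
```

With a real channel’s history as input, the output stitches together verbatim fragments of what people actually said, which is exactly why it reads as coherent.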
Natanael@infosec.pub 1 day ago
It’s actually kinda easy. Neural networks are just weirder versions of the usual logic-gate circuits. You can program them the same way and insert explicit, controlled logic and deterministic behavior.
The reason that doesn’t work well with a traditional LLM, even though the node properties are well known, is that all the different parts of the network are so intricately interwoven and mutually dependent. You can’t just arbitrarily edit things, insert more nodes, or replace logic; you don’t know what you might break. It’s easier to place the inserted logic outside of the LLM network and train the model to interact with it (“tool use”).
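A toy illustration of that first point (a standard textbook construction, nothing to do with how any real LLM is wired): a two-layer network with hand-picked weights computes XOR deterministically, the same way you would wire logic gates.

```python
import numpy as np

def step(x):
    """Heaviside step activation: a neuron fires (1) once its weighted sum clears the bias."""
    return (x >= 0).astype(int)

# Hidden layer programmed by hand: neuron 0 acts as OR, neuron 1 as AND.
W1 = np.array([[1.0, 1.0],    # OR weights
               [1.0, 1.0]])   # AND weights
b1 = np.array([-0.5, -1.5])   # OR fires if a+b >= 0.5, AND if a+b >= 1.5

# Output neuron: OR minus AND, i.e. "OR and not AND", which is XOR.
W2 = np.array([1.0, -1.0])
b2 = -0.5

def xor_net(a, b):
    x = np.array([a, b], dtype=float)
    hidden = step(W1 @ x + b1)          # [OR(a,b), AND(a,b)]
    return int(W2 @ hidden + b2 >= 0)   # XOR(a,b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))  # prints 0, 1, 1, 0
```

Here every weight was chosen by hand, so the behavior is fully deterministic; in a trained LLM the weights come from optimization over the whole network at once, which is why editing one part in isolation is so risky.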
panda_abyss@lemmy.ca 2 days ago
We’ve created a state of the art model in recall and training data idempotency!
UnderpantsWeevil@lemmy.world 2 days ago
db2@lemmy.world 2 days ago
I hope they call it FUAI.