Oh noez! Anyway…
xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs – Krebs on Security
Submitted 1 week ago by some_guy@lemmy.sdf.org to technology@lemmy.world
https://krebsonsecurity.com/2025/05/xai-dev-leaks-api-key-for-private-spacex-tesla-llms/
ShittyBeatlesFCPres@lemmy.world 1 week ago
What the fuck is SpaceX using a large language model for?
pennomi@lemmy.world 1 week ago
Presumably a document query system, one of the actually useful reasons to use an LLM.
Having it trawl through thousands of pages of NASA and government files to answer regulatory questions is probably legitimately helpful, even if all it does is point you towards the right pages for a human to review.
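The "point you towards the right pages" idea can be sketched without any LLM at all: rank pages by crude word overlap with the question and hand the top hits to a human (or a model) for review. A minimal sketch, with made-up page data; a real system would use embeddings rather than raw term overlap.

```python
# Minimal sketch of "point you to the right pages": score each page of a
# corpus against a question by word overlap, then review only the top hits.
# All page text here is invented for illustration.
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z]+", text.lower())

def score(query, page):
    """Crude term-frequency overlap score between a query and a page."""
    q = Counter(tokenize(query))
    p = Counter(tokenize(page))
    return sum(q[w] * p[w] for w in q)

pages = {
    "p1": "Launch licenses are issued under part 450 regulatory review.",
    "p2": "Crew meal schedules and cafeteria hours for the facility.",
    "p3": "Environmental review requirements for launch site operators.",
}

def top_pages(query, pages, k=2):
    """Return the k page ids most relevant to the query."""
    ranked = sorted(pages, key=lambda pid: score(query, pages[pid]), reverse=True)
    return ranked[:k]

print(top_pages("regulatory review for launch licenses", pages))
```

Even this toy ranking surfaces the regulatory pages first, which is the whole value proposition: the human reviews two pages instead of thousands.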
ShittyBeatlesFCPres@lemmy.world 1 week ago
That makes sense. Like you, I’ve generally found that LLMs are incredibly useful for certain, highly specific things but people (CEOs especially) need to understand their limitations.
When it first came out, I purposely used ChatGPT on a trip to evaluate it. I was in a historic city on a business trip where I stayed an extra few days so I was traveling alone. It was good at being a tour guide. Obviously, I could have researched everything and read guidebooks but I was focused on my work stuff. Being able to ask follow-up questions and have a conversation was a real improvement over traditional search.
That’s obviously a limited use case where I was asking questions that could have been answered in traditional ways, but I found it to be a good consumer use case. It knew details that wouldn’t necessarily be in a Wikipedia article or guidebook and would have taken me 15 Google searches to answer. Just my own little curiosity questions about an old building or whatever. I cross-checked things later and it didn’t hallucinate. Obviously a very limited use case, but it was good at it.
IsoKiero@sopuli.xyz 1 week ago
This is the use case I’m most hoping comes out of the LLM hype. It would be a massive benefit for companies around the world (mine included) if they could just dump their documentation, in all its shapes and flavours, into a model and have it parse it into standardized documents for you.
But the generic OpenAI/Copilot models aren’t reliable enough just yet; hallucinations and made-up data just don’t work for that. I’m not even sure those models will ever be capable of such a task on their own; maybe they need an additional component that checks the facts against the originals to make them actually useful.
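That "check the facts against the originals" component can be sketched very simply: require the model to quote its sources, then verify each quote actually appears in the original documents before trusting the answer. A minimal sketch with hypothetical data; a production system would need fuzzier matching than exact substring lookup.

```python
# Sketch of a fact-check component: flag any claimed quote from the
# model's answer that cannot be found in the original documents.
# Documents and quotes here are invented for illustration.

def verify_quotes(answer_quotes, originals):
    """Return the quotes that cannot be found in any original document."""
    unsupported = []
    for quote in answer_quotes:
        if not any(quote.lower() in doc.lower() for doc in originals):
            unsupported.append(quote)
    return unsupported

originals = [
    "Maintenance interval for pump A is 500 hours.",
    "Valve B must be inspected after every flight.",
]

# Two quotes the model claims came from the docs; the second is a hallucination.
quotes = [
    "maintenance interval for pump A is 500 hours",
    "pump A was redesigned in 2024",
]

print(verify_quotes(quotes, originals))  # flags only the hallucinated claim
```

The point is that the checker never trusts the model: anything it can’t trace back to a source document gets flagged for human review.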
adarza@lemmy.ca 1 week ago
munching on all that sweet government data that’s been swiped by the muskrat goon squad.
SoftestSapphic@lemmy.world 1 week ago
So that their LLM looks like it’s getting business
ryedaft@sh.itjust.works 1 week ago
I… You think high tech companies should just be sending all their data to OpenAI, Anthropic, and whoever else?
ShittyBeatlesFCPres@lemmy.world 1 week ago
I don’t think they should be using A.I. at all yet. I don’t think today’s shitty version of machine learning is ready for engineering giant explosive things. As someone else pointed out, document management for regulatory filings and such is (hopefully) the use case. I don’t mind if it’s used that way.
Basically, I think today’s “A.I.” should be treated as alpha software. It has a ton of potential, but there is a lot left to do, especially on things involving human or even critter life like rocket science, self-driving cars, or military applications, where “edge cases” are life-or-death situations. (I don’t think it should be used for military applications until it’s really fucking mature tech, but it’s already apparently being used for that, so the cat’s out of the bag there.)