VoterFrog
@VoterFrog@lemmy.world
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 2 days ago:
What? I’ve already written the design documentation and done all the creative parts that I consider most rewarding. All that’s left for coding is answering questions like “what exactly does the API I need to use look like?” and writing a bunch of error handling if statements. That’s toil.
- Comment on Exactly Six Months Ago, the CEO of Anthropic Said That in Six Months AI Would Be Writing 90 Percent of Code 2 days ago:
It definitely depends on the person. There are people who are getting 90% of their coding done with AI. I’m one of them. I have over a decade of experience, and I consider coding to be the easiest but most laborious part of my job, so it’s a welcome change.
One thing that’s really changed the game recently is RAG and tools with very good access to our company’s data. Good context makes a huge difference in the quality of the output. For my latest project, I’ve been using three internal tools. An LLM browser plugin, which has access to our internal data and lets you pin pages (and docs) you’re reading for extra focus. A coding assistant, which also has access to internal data and repos but is trained for coding. Unfortunately, it’s not integrated into our IDE. The IDE agent has RAG where you can pin specific files, but without broader access to our internal data, its output is a lot poorer.
So my workflow is something like this: My company is already pretty diligent about documenting things so the first step is to write design documentation. The LLM plugin helps with research of some high level questions and helps delve into some of the details. Once that’s all reviewed and approved by everyone involved, we move into task breakdown and implementation.
First, I ask the LLM plugin to write a guide for how to implement a task, given the design documentation. I’m not interested in code, just a translation of design ideas and requirements into actionable steps (even if you don’t have the same setup as me, give this a try. Asking an LLM to reason its way through a guide helps it handle a lot more complicated tasks). Then, I pass that to the coding assistant for code creation, including any relevant files as context. That code gets copied to the IDE. The whole process takes a couple minutes at most and that gets you like 90% there.
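Even without the same internal tooling, that two-stage flow (guide first, code second) is easy to reproduce with any model. A minimal Python sketch, where `ask` and `implement_task` are hypothetical names standing in for whatever LLM client you actually have:

```python
def ask(prompt: str) -> str:
    """Stub for an LLM call; swap in your real client here."""
    return f"<model response to {len(prompt)}-char prompt>"

def implement_task(task: str, design_doc: str, relevant_files: list[str]) -> str:
    # Stage 1: ask for a guide, not code -- a translation of design ideas
    # and requirements into actionable steps.
    guide = ask(
        "Given the design documentation below, write a step-by-step "
        "implementation guide for this task. Do not write code.\n\n"
        f"Task: {task}\n\nDesign doc:\n{design_doc}"
    )
    # Stage 2: hand the guide plus relevant files to the coding model.
    context = "\n\n".join(relevant_files)
    return ask(
        f"Write the code that implements this guide:\n{guide}\n\n"
        f"Relevant files for context:\n{context}"
    )
```

The split matters: forcing the model to reason through a guide before any code is what lets it handle more complicated tasks.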
Next is to get things compiling. This is either manual or in iteration with the coding assistant. Then, before I worry about correctness, I focus on the tests. Get a good test suite up and it’ll catch any problems and let you refactor without causing regressions. Again, this may be partially manual and partially iteration with LLMs. Once the tests look good, then it’s time to get them passing. And this is the point where I start really reading through the code and getting things from 90% to 100%.
All in all, I’m still applying a lot of professional judgement throughout the whole process. But I get to focus on the parts where that judgement is actually needed and not the more mundane and toilsome parts of coding.
- Comment on 4 days ago:
As far as I understand as a layman, the measurement tool doesn’t really matter. Any observer needs to interact with the photon in order to observe it and so even the best experiment will always cause this kind of behavior.
With no observer: the photon, acting as a wave, passes through both slits simultaneously and on the other side of the divider, starts to interfere with itself. Where the peaks or troughs of the wave combine is where the photon is most likely to hit the screen in the back. In order to actually see this interference pattern we need to send multiple photons through. Each photon essentially lands in a random location and the pattern only reveals itself as we repeat the experiment. This is important for the next part…
With an observer: the photon still passes through both slits. However, the interaction with the observer’s wave function causes the part of the photon’s wave function in that slit to offset in phase. In other words, the peaks and troughs are no longer in the same place. So now the interference pattern that the photon wave forms with itself still exists but, critically, it looks completely different.
Now we repeat with more photons. BUT each time you send a photon through, it comes out with a different phase offset. Why? Because the outcome of the interaction with the observer is governed by quantum randomness. So every photon winds up with a different interference pattern, which means that there’s no consistency in where they wind up on the screen. It just looks like random noise.
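That washing-out effect is easy to see numerically. A toy sketch (not a real physical model, just the phase-offset idea): each “photon” is the sum of two path amplitudes, and the “observer” tacks a random phase onto one path. Averaged over many photons, the fixed-phase case keeps its fringes while the random-phase case flattens into noise:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 500)  # screen positions (arbitrary units)
d = 20.0                     # sets the fringe spacing between the two paths

def mean_intensity(n_photons, random_phase):
    """Average detection intensity over many single-photon shots."""
    total = np.zeros_like(x)
    for _ in range(n_photons):
        # The "observer" randomizes the relative phase of one slit's amplitude.
        phi = rng.uniform(0, 2 * np.pi) if random_phase else 0.0
        amp = np.exp(1j * d * x) + np.exp(-1j * d * x + 1j * phi)
        total += np.abs(amp) ** 2
    return total / n_photons

def contrast(intensity):
    """Fringe visibility: 1 = sharp fringes, ~0 = flat noise."""
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

no_observer = mean_intensity(2000, random_phase=False)  # stable fringes
observed = mean_intensity(2000, random_phase=True)      # fringes wash out

print(contrast(no_observer))  # close to 1: strong interference pattern
print(contrast(observed))     # close to 0: looks like uniform noise
```

Each individual photon still interferes with itself in both cases; it’s only the shot-to-shot phase scramble that erases the pattern in the average.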
At least that’s what I recall from an episode of PBS Space Time.
- Comment on On Black Holes... 3 weeks ago:
Unfortunately, the horrible death would come long before you even reach the event horizon. The tidal forces would tear you apart and, eventually, tear apart the molecules that used to make you up. Every depiction of crossing a black hole’s event horizon just pretends that doesn’t happen for the sake of demonstration.
- Comment on Actors that have been the least believable scientist castings, I’ll start. 4 weeks ago:
He became a rogue scholar, huh? A dark path that leads only to evil scientist.
- Comment on OKBuddyGalaxyBrain 4 weeks ago:
I don’t think it’s working. LLMs don’t have any trouble parsing it.
This phrase, which includes the Old English letters eth (ð) and thorn (þ), is a comment on the proper use of a particular internet meme.
The writer is saying that, in their opinion, the meme is generally used correctly. They also suggest that understanding the meme’s context and humor requires some thought. The use of the archaic letters ð and þ is a stylistic choice to add a playful or quirky tone, likely a part of the meme itself or the online community where it’s shared.
Essentially, it’s a statement of praise for the meme’s consistent and thoughtful application.
- Comment on Black Holes 5 weeks ago:
It’s what OP’s parents call the first day they saw him.
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 2 months ago:
The language model isn’t teaching anything; it is changing the wording of something and spitting it back out. And in some cases, not changing the wording at all, just spitting the information back out, without paying the copyright source.
You could honestly say the same about most “teaching” that a student without a real comprehension of the subject does for another student. But ultimately, that’s beside the point. Because changing the wording, structure, and presentation is all that is necessary to avoid copyright violation. You cannot copyright the information. Only a specific expression of it.
There’s no special exception for AI here. That’s how copyright works for you, me, the student, and the AI. And if you’re hoping that copyright is going to save you from the outcomes you’re worried about, it won’t.
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 2 months ago:
If I understand correctly, they are ruling you can buy a book once and redistribute the information to as many people as you want without consequences. Aka 1 student should be able to buy a textbook and redistribute it to all other students for free. (Yet the rules only work for companies apparently, as the students would still be committing a crime)
A student can absolutely buy a text book and then teach the other students the information in it for free. That’s not redistribution. Redistribution would mean making copies of the book to hand out. That’s illegal for people and companies.
- Comment on Judge Rules Training AI on Authors' Books Is Legal But Pirating Them Is Not 2 months ago:
It seems like a lot of people misunderstand copyright so let’s be clear: the answer is yes. You can absolutely digitize your books. You can rip your movies and store them on a home server and run them through compression algorithms.
Copyright exists to prevent others from redistributing your work so as long as you’re doing all of that for personal use, the copyright owner has no say over what you do with it.
You even have some degree of latitude to create and distribute works derived from those, with a violation only occurring when you distribute something pretty damn close to a copy of the original. Some examples: create a word cloud of a book, analyze the tone of a news article to help you trade stocks, produce an image containing the most prominent color in every frame of a movie, or create a search index of the words found on all websites on the internet.
You can absolutely do the same kinds of things an AI does with a work as a human.
- Comment on Observer 6 months ago:
Well, famously, they’re waves and particles. In the double-slit which-way experiment, the photon will only set off the detector in one slit, as if it were a particle. Yet, without a detector, it will interfere with itself as if it were a wave that passed through both slits.
- Comment on Observer 6 months ago:
You’re right. But the thing that’s interesting about the double slit experiment is that it works on only a single photon. It’s as if all the traffic was created by a single car. So classically you might not think that the single car should care if the freight truck is heading down a different lane than the car, but in QM it does, because the car is in a superposition of occupying several lanes.
I’m probably driving the analogy straight into the ground, of course.
- Comment on Observer 6 months ago:
What are you trying to see exactly? There’s this video done with polarizers: youtu.be/unCXuRXpEhs Of course, it’s not an instant on/off but having an instant on/off doesn’t really change anything.
- Comment on It's just a Planck bro 7 months ago:
Accurately measuring the location of something requires energy. The more precise the measurement, the more energy is required. The amount of energy required to get the precision below the Planck length would literally create a black hole.
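The back-of-the-envelope version (a standard heuristic, not exact): by the uncertainty principle, localizing something to within Δx takes a probe of energy roughly ħc/Δx, and concentrated energy has a Schwarzschild radius. Set the two scales equal and the Planck length falls out:

```latex
E \sim \frac{\hbar c}{\Delta x},
\qquad
r_s = \frac{2GE}{c^4} \sim \frac{2G\hbar}{c^3\,\Delta x}.

% The probe collapses into a black hole when its horizon
% reaches the scale you are trying to resolve, r_s \sim \Delta x:
\Delta x \sim \sqrt{\frac{\hbar G}{c^3}} = \ell_P \approx 1.6\times 10^{-35}\ \mathrm{m}.
```

Push Δx below that and the measurement hides behind its own event horizon.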
- Comment on It's just a Planck bro 7 months ago:
Not American enough. I need it in football fields.