To preface: I don’t actually use AI for anything at my job, which might be a bad metric, but my workflow is 10x slower if I even try using it.
That said, I want AI to be able to help with unit tests in the sense that I write some starting ones, then it infers which branches aren’t covered and helps me fill in the rest.
Obviously it’s not smart enough, and honestly I highly doubt it ever will be, because that’s the nature of LLMs. But my peeve with unit tests is that covering branches usually entails copying the exact same test and changing one field to an invalid value, or making a dependency throw. It’s not hard, just tedious. Branch coverage is already enforced, so you know when you’ve forgotten to test a case.
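To illustrate the pattern I mean, here’s a minimal pytest sketch; `validate_order` and its fields are made up, just to show how each extra branch is the same test with exactly one field broken:

```python
import pytest

def validate_order(order: dict) -> bool:
    # Hypothetical function under test: each check is one branch to cover.
    if not order.get("id"):
        return False
    if order.get("quantity", 0) <= 0:
        return False
    if not order.get("customer_email"):
        return False
    return True

VALID = {"id": "A1", "quantity": 2, "customer_email": "x@example.com"}

@pytest.mark.parametrize("field,bad_value", [
    ("id", ""),               # missing-id branch
    ("quantity", 0),          # non-positive-quantity branch
    ("customer_email", ""),   # missing-email branch
])
def test_invalid_field_rejected(field, bad_value):
    # Copy the valid fixture, break exactly one field per case.
    order = {**VALID, field: bad_value}
    assert validate_order(order) is False
```

If branch coverage is enforced (e.g. pytest-cov’s `--cov-branch --cov-fail-under=...`), the report tells you exactly which of these cases you still owe.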
I also think you should treat AI code as a pull request and actually review what it writes. My coworkers who do use it don’t really proofread, so it ends up with some bad practices and code smells.
MangoCats@feddit.it 1 day ago
Ideally, there are requirements before anything, and some TDD types argue that the tests should come before the code as well.
Ideally, the customer is well represented during requirements development; ideally, not by the code developer.
Ideally, the code developer is not the same person that develops the unit tests.
Ideally, someone other than the test developer reviews the tests to assure that the tests do in-fact provide requirements coverage.
Ideally, the modules that come together to make the system function have similarly tight requirements, unit tests, and reviews, and the whole thing runs CI/CD to notify developers of any regressions/bugs within minutes of code check-in.
In reality, some portion of that process (often most of it) is cut short for one reason or another. Replacing the missing bits with AI is better than not having them at all.
sugar_in_your_tea@sh.itjust.works 1 day ago
Why? The developer is exactly the person I want writing the tests.
There should also be integration tests written by a separate QA, but unit tests should 100% be the responsibility of the dev making the change.
I disagree. A bad test is worse than no test, because it gives you a false sense of security. I can identify missing tests with coverage reports, I can’t easily identify bad tests. If I’m working in a codebase with poor coverage, I’ll be extra careful to check for any downstream impacts of my change because I know the test suite won’t help me. If I’m working in a codebase with poor tests but high coverage, I may assume a test pass indicates that I didn’t break anything else.
If a company is going to rely heavily on AI for codegen, I’d expect tests to be manually written and have very high test coverage.
MangoCats@feddit.it 23 hours ago
True enough
Also agreed: if your org has trimmed to the point that you’re just writing tests to say you have tests, with no review of their efficacy, it will get what it deserves soon enough.
If a company is going to rely heavily on AI for anything, I’d expect a significant traditional human-employee backstop to the AI until it has a track record. Not a “buckle up, we’re gonna try somethin’” track record; more like two or three full business cycles before starting to divest of the human capital that built the business to where it is today. Though, if your business is on the ropes and likely to tank anyway… why not try something new?
There was a story about IBM letting thousands of workers go and replacing them with AI… then hiring even more workers in other areas with the money saved from the AI retooling. Apparently they let a bunch of HR and other admin staff go and beefed up sales and product development. There are some jobs where you want predictable algorithms rather than potentially biased people, and HR seems like an area that could have a lot of those.
Nalivai@lemmy.world 1 day ago
It’s better if it’s a different developer, so they don’t know the nuances of your implementation and test the functionality only; that avoids some mistakes. You’re correct on all the other points.
sugar_in_your_tea@sh.itjust.works 1 day ago
I really disagree here. If someone else is writing your unit tests, responsibility for correctness has been split off from the person who actually understands the change.
Devs should write their tests, and reviewers should ensure the tests do a good job covering the logic. At the end of the day, the dev is responsible for the correctness of their code, so this makes the most sense to me.
MangoCats@feddit.it 23 hours ago
I’m mixed on unit tests: there are some things the developer will know (white box) about edge cases etc. that others likely wouldn’t, and they should definitely have input on those tests. On the other hand, independence of review is a very important aspect of “harnessing the power of the team.” If you’ve got one guy who gathers the requirements, implements the code, writes the tests, and declares the requirements fulfilled, that had better be one outstandingly brilliant guy with all the time he needs to do the job right. If you’re trying to leverage the talents of 20 people to make a better product, having them all be solo-virtuoso actors working independently alongside each other is more likely to create conflict, chaos, duplication, and massive holes of missed opportunities and unforeseen problems in the project.
Nalivai@lemmy.world 1 day ago
Nah, bullshit tests that pretend to be tests but are essentially “if true == true then pass” are significantly worse than no tests at all.
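Something like this sketch (names are made up): it exercises both branches, so coverage looks great, but the assertion can never fail.

```python
# Hypothetical function and test, only to illustrate the "true == true" pattern.
def discount(price: float, is_member: bool) -> float:
    """Apply a 10% member discount."""
    return price * 0.9 if is_member else price

def test_discount():
    discount(100.0, True)   # executes the member branch...
    discount(100.0, False)  # ...and the non-member branch
    assert True             # ...but verifies nothing, so it always passes
```

Branch coverage reports 100% here, which is exactly the false sense of security mentioned above.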
MangoCats@feddit.it 23 hours ago
Sure. But unsupervised developers who write the code, write their own tests, and change companies every 18 months are even more likely to pull BS like that than AI is.
You can actually get some test-validity oversight out of AI review of the requirements and tests; not perfect, but better than self-supervised new hires.
Nalivai@lemmy.world 10 hours ago
You also will get some bullshit out of it. If you’re in a situation where you can’t trust your developers because they change companies every 18 months, and you can’t even supervise your untrustworthy developers, then you sure as shit can’t trust whatever an LLM generates for you. At least your flock of developers will bullshit you predictably, to save time and energy; with an LLM you have zero idea where the lies will come from, and they will be inventive lies.
themaninblack@lemmy.world 1 day ago
Saved this comment. No notes.