model_tar_gz
@model_tar_gz@lemmy.world
- Comment on Patient gamers, what are your favorite OSTs? 1 week ago:
I listen to Stellaris OST a lot these days. Turns out that good music to plan galactic domination is also good music to design and build software.
- Comment on Noice 2 weeks ago:
Pretty sure my Calc2 prof pulled this trick on us sometime in the solids of revolution unit. Started deriving something on the board, just another cylinder ok, but wtf are you calling the radius ‘z’?
- Comment on IFIXIT: Victory Is Sweet - We Can Now Fix McDonald’s Ice Cream Machines 2 weeks ago:
Costco’s soft-serve is way better than McD’s and actually is cheap.
- Comment on Bitwarden Makes Change To Address Recent Open-Source Concerns 3 weeks ago:
Bullshit. Developers never make mistakes. N.E.V.R.
- Comment on Nintendo Targets YouTube Accounts Showing Emulated Games 1 month ago:
Can’t wait for Nintendo to sue Microsoft because VS Code can be used to edit save files.
- Comment on 👣👣👣 1 month ago:
I’ve rejected someone on their 4th round before—1st round with me. That candidate had managed to convince the recruiter that they had the chops for a staff engineer (>$200k/yr!) and passed two coding rounds before mine, testing knowledge of relevant techs on our stack—at this level of role, you have to know this coming in; table stakes.
I was giving the systems design round. Asked them to design something that was on their resume—they couldn’t. They’d grossly misrepresented their role in that project, and since they were interviewing for a staff-level role, high-level design was going to be a big part of it and would impact the product and development team in significant ways. No doubt they’d been involved in implementing, and could code—but it was very clear that they didn’t understand the design decisions that were made, and I had no confidence that they would contribute positively to our team.
Sucks for them to be rejected, but one criterion we look for is someone who will be honest when they don’t know—and we do push to find the frontiers of their knowledge. We even instruct them to just say so when they don’t know, and we can problem-solve together. A lot of people have too much ego to accept that, and we don’t have time for people like that on the team either.
Look, I get what you’re saying, and clearly I’ve been on the wrong end of it too. But if we make a bad hiring decision, it costs not just the candidate their job; the team and company they join can end up in a bad place too. What would you do in that situation? Just hire them anyway and risk the livelihood of everyone else on the team? That’s a non-starter; try to see the bigger picture.
- Comment on 👣👣👣 1 month ago:
I don’t know if I agree with that, having been on the hiring side of the table more than a few times.
Hiring a new employee is a risk; especially when you’re hiring at a senior enough level where the wrong decisions are amplified as the complexity of the software grows—and it becomes far more expensive to un/redo bad architectural decisions.
And the amount of time it takes for even an experienced engineer to learn their way around your existing stack, understand the reasons for certain design decisions, and contribute in a way that’s not disruptive—that’s like 6 months minimum for some code bases. More if there are crazy data flows and weird ML stuff. And if they’re “full stack” (backend and frontend), then it’s gonna be even longer before you see how good of a hiring decision you really made. For a $160k+/yr senior dev role, that’s $80k (before benefits and other onboarding costs) before you can really expect to see anything significant.
So you schedule as many interviews as you need to get a feel for what they can do, because false negatives are way less expensive than false positives.
Sometimes people can be cunning: they charm, wow, and woo their way past even the savviest of recruiters with the right combination of jargon.
Sometimes they can even fool a technical round interviewer.
4-5 interviews (esp. if the last is an onsite in which you’ll meet many) seems to be about the norm in my field. Even if it kinda sucks for the person looking for the job.
- Comment on 👣👣👣 1 month ago:
… come to think of it now, I would have played ball with them if they’d just been transparent about the situation upfront. It was good interview practice and in retrospect prepared me well for the interviews at my current role. And I’m way happier with this company than I would’ve been there.
The Universe does funny things.
- Comment on 👣👣👣 1 month ago:
I took an interview like this before. I checked the vast majority of the boxes for the technologies used, and had experience with a specific type of processing for models prior to deployment. Thought it was bagged, tagged, and mine. 4 rounds of interviews: two technical rounds and a system design.
Asked me some hyper-specific question about X and wanted a hyper-specific implementation of Z technology to solve the problem. The way I solved it would have worked, but it wasn’t the X they were looking for.
Turns out the guy interviewing me in the second tech round was the manager of the guy he already wanted in the role—and that guy, already working for him, was the founder of the startup that commercialized X. They just needed to check a box for corporate saying they’d done their diligence looking for a relevant senior engineer.
That fucking company put me through the wringer for that bullshit. 4 rounds of interviews.
Never again.
- Comment on Soup 1 month ago:
Fun fact: the scientist’s name was Adam, and the soup was primordial.
- Comment on Top EU Court’s Advisor Explains Why Video Game Cheats Are Not Copyright Infringement 1 month ago:
If I want to wear my sunglasses while I’m watching a movie in the cinema because I have a light-sensitivity condition—because that alters my perception of the film without changing the permanent media storage of the film—am I cheating and subject to copyright infringement action?
- Comment on My wife misspoke and said "Neil Degrasse TITAN" 1 month ago:
Kneel dat Ass, my Son
- Comment on go lower 1 month ago:
Reminds me of this trad classic in Lumpy Ridge, CO: Magical Chrome Plated Semi-Automatic Enema Syringe
The novelty of the view is more fun than the climb/crux itself but the overall route is fun and worth doing.
- Comment on Climate change 1 month ago:
Some say we’ll see Armageddon soon
- Comment on Secret calculator hack brings ChatGPT to the TI-84, enabling easy cheating 1 month ago:
Stop giving me Thermo nightmares; I lived through that shit already, I don’t need to sleep through it too.
- Comment on Inadmissible 1 month ago:
I need you to understand that most people just don’t go around talking about other people’s dissertations to other people.
- Comment on Ideas for storing electrons or light in a container 1 month ago:
How are you planning on handling the induced phase shifts due to the rapid polarity reversals that occur in the transgravitational electron flux arrays? I mean, this is a nonstarter if you can’t get that to work—the electropositron fields are going to decay too quickly to be useful otherwise and the quite-expensive phosphokinesis-generator will be wasted.
- Comment on Histories Mysteries 2 months ago:
Can’t find Saddam.
- Comment on *clicks pen* 2 months ago:
Oh, that was such an evil trick. I liked prox mining the bottom of the vertical sliding doors in that one level that looked like a stone temple. Or the grates in Bunker, cuz the mines are nearly invisible on those.
- Comment on *clicks pen* 2 months ago:
No Odd Job!
Was the standard house rule in my circle of friends. We hated mines too, but allowed them. But no fucking Odd Job.
- Comment on Research shows more than 80% of AI projects fail, wasting billions of dollars in capital and resources: Report 2 months ago:
I don’t disagree. Solutions finding problems is not the optimal path—but it is a path that pushes the envelope of tech forward, and a lot of these shiny techs do eventually find homes and good problems to solve and become part of a quiver.
But I will always advocate to start with the customer and work backwards from there to arrive at the simplest engineered solution. Sometimes that’s an ML model. Sometimes an expert system. Sometimes a simpler heuristics/rules-based system. That all falls under the ‘AI’ umbrella, by the way. :D
- Comment on Research shows more than 80% of AI projects fail, wasting billions of dollars in capital and resources: Report 2 months ago:
I’m an AI Engineer, been doing this for a long time. I’ve seen plenty of projects that stagnate, wither and get abandoned. I agree with the top 5 in this article, but I might change the priority sequence.
Five leading root causes of the failure of AI projects were identified:
- First, industry stakeholders often misunderstand — or miscommunicate — what problem needs to be solved using AI.
- Second, many AI projects fail because the organization lacks the necessary data to adequately train an effective AI model.
- Third, in some cases, AI projects fail because the organization focuses more on using the latest and greatest technology than on solving real problems for their intended users.
- Fourth, organizations might not have adequate infrastructure to manage their data and deploy completed AI models, which increases the likelihood of project failure.
- Finally, in some cases, AI projects fail because the technology is applied to problems that are too difficult for AI to solve.
4 & 2 —> 1: If they even have enough data to train an effective model, most organizations have no clue how to handle the sheer variety, volume, velocity, and veracity of the big data that AI needs. It’s a specialized engineering discipline to handle that (data engineer). Let alone how to deploy and manage the infra that models need—a specialized discipline has also emerged to handle that aspect (ML engineer). Often they sit at the same desk.
1 & 5 —> 2: Stakeholders seem to want AI to be a boil-the-ocean solution. They want it to do everything and be awesome at it. What they often don’t realize is that AI can be a really awesome specialist tool that really sucks on scenarios it hasn’t been trained on. Transfer learning is a thing, but that requires fine-tuning and additional training. Huge models like LLMs are starting to bridge this somewhat, but at the expense of the really sharp specialization. So without a really clear understanding of what AI does well—and perhaps more importantly, which problems are a poor fit for AI solutions—of course they’ll be destined to fail.
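The transfer-learning point above can be sketched in a few lines: freeze a “pretrained” feature extractor and fine-tune only a small new head for the specialized task. A toy numpy illustration—the backbone, dimensions, and labels here are all made up for the sketch, not a real pretrained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "pretrained" backbone: a frozen random feature extractor.
W_backbone = rng.normal(size=(5, 8))
W_backbone_frozen = W_backbone.copy()  # kept only to show it never changes

def features(X):
    # Frozen general-purpose features; never updated during fine-tuning
    return np.tanh(X @ W_backbone)

# New specialized task: toy binary labels on 32 samples
X = rng.normal(size=(32, 5))
y = (X[:, 0] > 0).astype(float).reshape(-1, 1)

# Fine-tuning trains only a small new head on top of the frozen features
W_head = np.zeros((8, 1))
lr = 0.5
for _ in range(300):
    F = features(X)
    p = 1 / (1 + np.exp(-(F @ W_head)))      # sigmoid head
    W_head -= lr * (F.T @ (p - y)) / len(X)  # gradient step on the head only

train_acc = float(np.mean((p > 0.5) == y))
```

The specialist/generalist trade-off shows up directly here: the frozen backbone carries over general structure, and only the cheap head adapts to the new problem.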
3 —> 3: This isn’t a problem with just AI. It’s all shiny new tech. Standard Gartner hype cycle stuff. Remember how they were saying we’d have crypto-refrigerators back in 2016?
- Comment on Nissan develops paint that keeps cars cool in summer heat 2 months ago:
Yes. It contains ceramic nanoparticles that reflect UV without interfering with visibility.
- Comment on Sorry 2 months ago:
And CPD: Chronic Procrastination Disorder
- Comment on My personal favourite: "Oh, fuck me. CHRIST." 2 months ago:
Fucking work for once you piece of fuck. Fuck this day. Fuck this shit. Fuck this degree. Fuck.
- Comment on Nothing is requiring employees to be in the office five days a week 2 months ago:
Cool cool, we’re cool. I get a little triggered when I hear people say that NN/DL models are “fancy statistics”—it’s not the first time.
In what seems like another lifetime ago, my first engineering job was as a process engineer for an industrial-scale continuous chromatography unit in hydrocarbon refining. Fuck that industry, but there’s some really cool tech there nevertheless. Anyway when I was first learning the process, the technician I was learning from called it a series of “fancy filters” and that triggered me too—adsorption is a really fascinating chemical process that uses a lot of math and physics to finely-tune for desired purity, flowrate, etc. and to diminish it as “fancy filtration”!!!
He wasn’t wrong, you’re not either; but it’s definitely more nuanced than that. :)
Engineers are gonna nerd out about stuff. It’s a natural law, I think.
- Comment on Nothing is requiring employees to be in the office five days a week 2 months ago:
AI is a very broad term that also includes expert systems (Computational Fluid Dynamics, Finite Element Analysis, and similar approaches), as well as traditional machine learning approaches (like support vector machines). But yes, I agree—it’s most commonly associated with deep learning/neural network approaches.
That said, it’s misleading and inaccurate to state that neural networks are just statistics. In fact they are substantially more than just advanced statistics. Certainly statistics is a component—but so too are probability, calculus, network/graph theory, and linear algebra, not to mention the computer science to program, tune, train, and run inference on them. Information theory (hello, entropy) plays a part sometimes.
The amount of mathematical background it takes to really understand and practice the theory of both a forward pass and backpropagation is an entire undergraduate STEM curriculum’s worth. I usually advocate for new engineers in my org to learn it top down (by doing) and pull the theory as needed, but that’s not how I did it and I regularly see gaps in their decisions because of it.
And to get actually good at it? One does not simply become an AI systems engineer/technologist. It’s years of tinkering with computers and operating systems, sourcing/scraping/querying/curating data, building data pipelines, cleaning data, engineering modeling approaches for various data types and desired outcomes against constraints (data, compute, economic, social/political), implementing POCs, fine-tuning models, mastering accelerated computing (aka GPUs, TPUs), distributed computation—and many others I’m sure I’m forgetting. The number of adjacent fields I’ve had to scratch deeply on to make any of this happen is stressful just thinking about it.
They’re fascinating machines, and they’ve been democratized/abstracted to the point where using one is now as simple as an import, a fit() call, and a predict() call. But to be dismissive of the amazing mathematics and engineering under the hood that make them actually usable is disingenuous.
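To make the “math under the hood” point concrete: a forward pass and backpropagation for a toy two-layer network come down to matrix products plus the chain rule. A minimal numpy sketch—the network size, data, and learning rate are invented for illustration, not any library’s API:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a tiny network: 3 inputs -> 4 hidden (tanh) -> 1 output
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))
W1 = rng.normal(size=(3, 4)) * 0.1
W2 = rng.normal(size=(4, 1)) * 0.1
lr = 0.1
losses = []

for _ in range(200):
    # Forward pass: linear algebra plus a nonlinearity
    h = np.tanh(X @ W1)
    pred = h @ W2
    losses.append(float(np.mean((pred - y) ** 2)))

    # Backpropagation: the chain rule applied layer by layer
    d_pred = 2 * (pred - y) / len(X)   # dLoss/dpred (MSE)
    dW2 = h.T @ d_pred                 # dLoss/dW2
    d_h = d_pred @ W2.T                # gradient flowing back through W2
    dW1 = X.T @ (d_h * (1 - h ** 2))   # tanh'(z) = 1 - tanh(z)^2

    # Gradient descent step
    W1 -= lr * dW1
    W2 -= lr * dW2
```

Every line above is calculus, probability, or linear algebra; a framework’s fit() call is doing exactly this, just generalized and accelerated.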
I admit I have a bias here—I’ve spent the majority of my career building and deploying NN models.
- Comment on NASA is about to make its most important safety decision in nearly a generation 2 months ago:
Can’t wait to see this project too in Google’s graveyard.
- Comment on ‘Killer robots’ are becoming a real threat in Africa. 2 months ago:
Reward models (aka reinforcement learning) and preference optimization models can come to some conclusions that we humans find very strange when they learn from patterns in the data they’re trained on. Especially when those incentives and preferences are evaluated by other models. Some of these models could very well come to the conclusion that nuking every advanced-tech human civilization is the optimal way to improve the human species because we have such rampant racism, classism, nationalism, and every other schism that perpetuates us treating each other as enemies to be destroyed and exploited.
Sure, we will build ethical guard rails. And we will proclaim to have human-in-the-loop decision agents, but we’re building towards autonomy and edge/corner-cases always exist in any framework you constrain a system to.
I’m an AI Engineer working in autonomous agentic systems—these are things we (as an industry) are talking about—but to be quite frank, there are not robust solutions to this yet. There may never be. Think about raising a teenager—one that is driven strictly by logic, probabilistic optimization, and outcome incentive optimization.
It’s a tough problem. The naive-trivial solution that’s also impossible is to simply halt and ban all AI development. Turing opened Pandora’s box before any of our time.
- Comment on The low effort presentation of the tenured prof is often way better btw 3 months ago:
Same kinda happens in industry, too.
Intern: 12 shitty slides. No appendix. Mumbles through the entire pres.
Jr/Associate: 47 immaculate slides, full appendix, 30 minutes to present, runs short on time, skips half of them and the audience fell asleep 20 minutes ago.
Senior: 10 slides, good enough but not pretty; too busy being technical for pretty slides. Serves the dessert first because that’s what we’re fuckin’ here for then backs it up with steak. 30 appendix slides and ready for any question including “when is the heat death of the universe?”
Tech lead/director: 100 slides, 2 or 3 at the front called executive summary, agenda, recommendations; 2 more slides to back it up and introduce the team/rest of the presenters, and 95 other slides ready to go for whatever.
CTO: I don’t have slides. I have a spreadsheet. Here we go.