Comment on Business owner 'hires' ChatGPT for customer service, then fires the humans
Adalast@lemmy.world 1 year ago
Thanks for actually rising to the challenge. It was fascinating to do the research and see how AI is affecting the various industries, and how deeply. I will say that I was able to find direct evidence of replacement in 7 of the 10; one was work that is similar and could easily be adapted (telecom line repair), one was an analysis that I think has a lot of good points (plumber), and one was genuinely all about augmenting the capabilities of workers already in place (wildlife conservation/officer).
- Oil rig worker onestopsystems.com/blogs/…/ai-on-oil-rigs
- Plumber Answer to Can artificial intelligence replace plumbers? by George Warner www.quora.com/…/George-Warner-1?ch=15&oid=179… (not someone working on it, but a good analysis)
- Construction worker www.sciencedirect.com/…/S0926580522001716
- Landscaper/gardener americangroundskeeping.com/ai-in-landscaping-how-… ts2.space/en/ai-in-robotic-landscaping/
- Telephone repair tech netl.doe.gov/sites/default/…/20VPRSC_Zhang.pdf (sorry for the PDF. It is not specifically phone lines, but the tech could be adapted relatively easily to climb a telephone pole instead of a boiler wall)
- Mechanic …globalspec.com/…/robots-are-primed-to-replace-au…
- Firefighter …itu.int/robotics-and-ai-to-predict-and-fight-wil…
- Surveyor www.landform-surveys.co.uk/news/…/ai-surveying/
- Wildlife management officer aiworldschool.com/…/this-is-why-ai-in-wildlife-co… I will admit that this is a case where AI is augmenting more than replacing at this time.
- Police www.cnn.com/2023/06/18/asia/…/index.html This one is low-hanging fruit… I will leave it at one link.
What companies won’t realize until it’s too late is that paying customers need jobs to pay for things. If AI causes unemployment to rise to some ungodly level, paying customers will become rare and companies will collapse in droves.
I wholeheartedly agree. Functionally, we are going to have to institute a UBI model. It is the only way society will be able to distribute funds properly once population growth outpaces a rapidly shrinking landscape of jobs. The corporations are going to need to pay us one way or another.
agent_flounder@lemmy.one 1 year ago
Damn… nice work on the research! I will read through these as I get time. I genuinely didn’t think there would be much for manual labor stuff. I’m particularly interested in the plumber analysis.
I think augmentation makes a lot of sense for jobs where a human body is needed, and it will be interesting to see how/if trade skill requirements change.
Adalast@lemmy.world 1 year ago
Had a thought that deserved a separate post. Your selection of MV tasks was rather perverse for the tasks we were discussing. Identifying a pop can is something humans do easily because pop cans were designed for us to identify easily. Level the playing field and let’s start looking for internal stress fractures in the superstructure of a 100’-tall concrete bridge. That is something AI drones are already being designed and deployed for: a drone can approach the bridge with a suite of sensors that see deep into the superstructure and detect future failure points, something humans would struggle to do. I have also seen maintenance drones that crawl on the bridge using a variety of methods (usually designed for a specific bridge), fill cracks with sealant, ablate rust with lasers, and then paint the freshly cleaned metal.

The benefit of replacing a workforce with AI-driven robotics is that you can purpose-build and purpose-train the tool to do exactly what you need it to do. A robot that scurries into a crawl space to run a pipe for a plumber doesn’t need to know how to do anything but recognize where the pipe goes, what not to touch, and how much force to use when installing it. It doesn’t need to identify a pop can, and it doesn’t need to draw a Rembrandt. All it needs to do is pull a pipe and weld it in place (and yes, I am oversimplifying a bit, I know that).
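To make the “purpose-built, purpose-trained” point concrete, here’s a toy sketch of the kind of narrow detection task an inspection drone like that runs. To be clear, this is not any real drone’s software; it just flags thin dark line features in a grayscale inspection image using OpenCV, and the file path, kernel size, and thresholds are all made up for illustration:

```python
import cv2

# Load a grayscale inspection image (placeholder path).
img = cv2.imread("bridge_patch.png", cv2.IMREAD_GRAYSCALE)

# Cracks tend to show up as thin dark ridges; a black-hat transform
# highlights dark features narrower than the structuring element.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 15))
blackhat = cv2.morphologyEx(img, cv2.MORPH_BLACKHAT, kernel)

# Threshold the response, then keep only elongated blobs,
# since cracks are long and thin rather than round.
_, mask = cv2.threshold(blackhat, 30, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    elongation = max(w, h) / max(1, min(w, h))
    if cv2.contourArea(c) > 50 and elongation > 4:
        print(f"possible crack near ({x}, {y}), size {w}x{h}")
```

That’s the whole point: the detector only has to answer one narrow question well, not recognize the world.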
The other thing that kinda gets me is the whole “cramped spaces” safety net that I kept seeing as the reason this job or that was going to be safe. Designing a small, agile robot is not really a challenge. Add to that the fact that in many situations you could use a tethered drone to do the actual work: the drone itself can be much smaller while the AI brain sits safely outside the situation. You could even plug it into power, so battery tech doesn’t need to advance. shrug I guess I just see the tech advancing very fast along a trajectory that worries me.
agent_flounder@lemmy.one 1 year ago
All great points. I guess I need to approach this topic from a “what is possible” mindset rather than a “this is too hard” mindset to get a fair assessment of what is coming, while still framing it as improving worker efficiency and automating human tasks piecemeal over time.
Adalast@lemmy.world 1 year ago
Your points on MV are not unfounded, but they are also extremely anthropocentric. All of your examples rely on the visible light spectrum and on standard “vision” as we know it. Realistically, any sensor can be used to generate an image if you know what you are doing with it; radio telescopes are a great example. There is also a lot of research going into giving AI MV systems access to other sections of the EM spectrum ( edge-ai-vision.com/…/beyond-visible-light-applica… and technologyreview.com/…/machine-vision-has-learned… ) as well as echolocation ( imveurope.com/…/echolocation-neural-net-gives-pho… ). There are many other types of “vision” that could definitely distinguish a pop can.
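To illustrate the “any sensor can be an image” point, here’s a minimal numpy sketch that folds a 1-D stream of non-visual readings (say, ranges from a scanning sonar head) into a 2-D intensity map that any standard MV pipeline could consume downstream. The scan geometry and normalization are invented for the example:

```python
import numpy as np

# Pretend readings: one range value per (pan, tilt) step of a
# scanning sonar head -- any scalar sensor works the same way.
PAN_STEPS, TILT_STEPS = 64, 48
readings = np.random.uniform(0.2, 5.0, size=PAN_STEPS * TILT_STEPS)  # meters

# Fold the 1-D sweep into a 2-D grid: each cell is one sensor
# pose, so the grid is literally an "image" of the scene.
depth_map = readings.reshape(TILT_STEPS, PAN_STEPS)

# Normalize to 8-bit so off-the-shelf vision code can read it:
# near objects bright, far objects dark.
norm = (depth_map - depth_map.min()) / (np.ptp(depth_map) + 1e-9)
image = ((1.0 - norm) * 255).astype(np.uint8)

print(image.shape, image.dtype)  # (48, 64) uint8 -- a grayscale image
```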
agent_flounder@lemmy.one 1 year ago
Agree that other parts of the EM spectrum could enhance the ability of MV to recognize things. Appreciate the insights; maybe I will be able to use this when I get back to tinkering with MV as a hobbyist.
Of course, identifying one object is just one level. For a general-purpose replacement for human ability, since that’s what the thread is focused (ahem) on, it has to identify tens of thousands of objects.
I need to rethink my opinion a bit: not only how far along general object recognition is, but also how one can “cheat” to enable robotic automation.
Tasks that are more limited in scope and variability would be a lot less demanding. For a silly example, let’s say we want to automate replacing fuses in cars. We limit it to cars with fuse boxes in the engine bay, and we mark the fuse box with a visual tag the robot can detect. The fuse layout for each vehicle model is stored, and the code on the tag identifies the model. The robot uses actuators to remove the cover, orients itself to the box using more markers, and the rest is basically pick-and-place technology (a rough sketch of the control flow follows). That’s a smaller and easier problem to solve than “fix anything possibly wrong with a car.” A similar deal could be done for oil changes.
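Here’s roughly what that control flow looks like in Python. Every hardware helper here (detect_tag, move_to, pick_at, place_at) is a hypothetical stand-in, and the layouts are invented; the point is just how little the robot has to “know”:

```python
# Toy control flow for the fuse-swap robot described above.
# All hardware helpers and data are hypothetical stand-ins.

# Known fuse layouts, keyed by the model code on the fuse-box tag:
# fuse slot name -> (x, y) offset in mm from the tag.
FUSE_LAYOUTS = {
    "MODEL_A": {"headlight": (12.0, 30.0), "wiper": (24.0, 30.0)},
    "MODEL_B": {"headlight": (8.0, 18.0), "wiper": (8.0, 36.0)},
}

def replace_fuse(robot, camera, slot_name, new_fuse):
    # 1. Find the visual tag; it both locates the box and names the model.
    tag = camera.detect_tag()            # hypothetical: returns pose + model code
    layout = FUSE_LAYOUTS[tag.model]     # unknown model -> KeyError, bail out

    # 2. Orient relative to the tag; everything after this is offsets.
    robot.move_to(tag.pose)
    robot.remove_cover()

    # 3. Pick-and-place using the stored per-model offsets.
    dx, dy = layout[slot_name]
    robot.pick_at(dx, dy)                # pull the old fuse
    robot.place_at(dx, dy, new_fuse)     # seat the replacement
    robot.replace_cover()
```

Notice the robot never “recognizes” a fuse box at all; the tag and the stored layouts do all the heavy lifting. That’s the cheat.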
For general-purpose MV object detection, I would have to go check, but my guess is that state-of-the-art MV can identify a dozen or maybe even hundreds of objects, so I suppose one could do quite a bit with that to automate some jobs. MV is not, to my knowledge, at the level of a general-purpose replacement for humans. Yet. Maybe it won’t take that much longer.
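For a rough sense of where off-the-shelf detection sits, here’s what running a stock pretrained detector looks like with torchvision (its COCO-trained Faster R-CNN covers about 80 everyday object categories; the image path is a placeholder):

```python
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights,
)

# COCO-pretrained detector: ~80 everyday object categories.
weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = fasterrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("garage.jpg")  # placeholder path
with torch.no_grad():
    detections = model([preprocess(img)])[0]

# Print confident detections with their human-readable labels.
for label, score in zip(detections["labels"], detections["scores"]):
    if score > 0.7:
        print(weights.meta["categories"][int(label)], float(score))
```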
In roughly 15 years in the hobbyist space, we’ve gone from recognizing anything of a specified color under some lighting conditions to identifying several specific objects, and without a ton of processing power either. It’s pretty damn impressive progress, really. We have security cameras that can identify animals, people, and delivery boxes. I am probably selling short what MV will be able to do in 15 more years.
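For contrast, that old “specified color” trick is only a few lines. This is the classic OpenCV approach; the HSV bounds are arbitrary values that roughly match a green-ish object:

```python
import cv2
import numpy as np

frame = cv2.imread("frame.jpg")  # placeholder path
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep pixels inside an HSV band -- this is the whole "recognizer".
mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))

# Largest matching blob is "the object", lighting permitting.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    print(f"found color blob at ({x}, {y}), size {w}x{h}")
```

The jump from that to a camera that names the animal on your porch is what 15 years bought us.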