www.youtube.com/watch?v=Is_wu0VRIqQ
Recent meme post prompted this as a needed reminder of what the issue with audio is, and how it should be fixed legislatively in a civilized world.
Submitted 1 week ago by j4k3@lemmy.world to youshouldknow@lemmy.world
https://lemmy.world/pictrs/image/c014fefc-5bc6-4b99-9526-a8183a41f458.jpeg
I really miss Tom Scott
Glad he’s moved on to doing things he likes on his own terms.
A new episode of Technical Difficulties popped up a day ago or so, if you want some jokes or hijinks.
Me too. Atomic Frontier is following a similar ultra dense respect-your-time videography-nerd polymath approach to YT.
He’s working on a new youtube project
What happened to him?
He mostly retired. He no longer posts videos regularly, though I think there’s a podcast he’s on.
Career audio engineer chiming in here. I would say it’s a combination of the lack of LUFS regulation and improper mixes. IMHO dialogue clarity should be prioritized more than it currently is, or at the very least we should use the center channel in surround sound formats exclusively for dialogue, to make it easier to adjust (via software and/or AV receivers).
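The idea of a dialogue-only center channel can be sketched in a few lines: if dialogue lives exclusively in one channel, adjusting it is just a per-channel gain. This is a minimal illustration, assuming SMPTE 5.1 channel order (L, R, C, LFE, Ls, Rs) and interleaved float samples; the function name and layout are mine, not from any particular AV stack.

```python
CENTER = 2  # index of the center channel in SMPTE 5.1 order: L, R, C, LFE, Ls, Rs

def boost_dialogue(frames, gain):
    """Scale only the center channel of interleaved 5.1 frames.

    frames: list of 6-element lists (one sample per channel per frame)
    gain:   linear gain factor applied to the center channel only
    """
    return [
        [sample * gain if ch == CENTER else sample
         for ch, sample in enumerate(frame)]
        for frame in frames
    ]
```

If dialogue were smeared across L/R as well (as in many current mixes), no such trivial adjustment exists, which is the point of the suggestion above.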
What would be the alternative? You don’t really expect the streaming companies to pay for TWO masters do you!? (/s if it wasn’t obvious)
I don’t really have an alternative, just trying to give my perspective on the issue.
Naww dog I just don’t speak Japanese
Skill issue
I better not catch you consuming any media that’s been translated
腕前問題 (“skill issue”)
They were talking about anime (most likely). It was a joke.
Even with places like YouTube, where LUFS level is strictly defined, there’s sooo many creators who have no earthly idea what LUFS is, which levels YouTube enforces, and how it corrects for it. They post their videos with quiet narration and wonder why viewers get annoyed at all of the turning up and turning down of volume on each video.
See, YouTube enforces LUFS on videos by reducing volume on loud videos down to -14 LUFS. But it doesn’t do anything to quiet videos. If you ever bring up the “Stats for Nerds” and look at the “Volume / Normalized” value, you might see something like “content loudness -5.9dB”. That means it’s 5.9 dB quieter than it should be, and the creator should have amplified the video to normalize the volume levels before uploading it to YouTube.
So, you end up with a video that’s about 6 dB too quiet, and you have to turn up the volume to actually hear the narration. Then your TV or whatever device you’re watching on will get blasted by the next video, which is properly normalized at around 0 dB, and you’re forced to turn the damn volume back down.
YouTube has finally started to acknowledge the problem by introducing the Stable Volume feature. But, really, creators should educate themselves on how to properly mix their audio. I know editing is hard and there’s so many moving parts to deal with for YouTube uploads. But, audio quality is everything in a YouTube video. Nobody cares about whatever random B-roll video game footage, or PowerPoint slide presentation, or watermarked stock images, or videos of you presenting the narration with a lapel mic tied to a tree branch you’re using on the video side. It’s all about narration and audio quality.
There’s also the issue of different services using different methods to normalize their audio. In the music world, Spotify is pretty widely known to do some weird fuckery with your LUFS. Oftentimes, musicians need to send different versions of their songs to each service, mastered specifically for that particular platform, just to get a consistent listening experience everywhere. It’s a lot of extra work, just for the sake of consistency.
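The per-service mastering chore boils down to simple arithmetic: each platform has its own target loudness, so each master needs a different gain. The target numbers below are placeholder assumptions (real platforms change their targets and apply their own processing on top), and the function name is mine; the point is only the shape of the problem.

```python
# Placeholder per-service targets in LUFS -- treat these numbers as assumptions,
# not as any platform's actual current specification.
SERVICE_TARGETS_LUFS = {
    "service_a": -14.0,
    "service_b": -16.0,
    "service_c": -23.0,
}

def master_gain_db(measured_lufs, service):
    """Gain to apply so a track lands at a given service's target loudness."""
    return SERVICE_TARGETS_LUFS[service] - measured_lufs
```

One song measured at -10 LUFS would need three different gain adjustments for these three services, which is exactly why musicians end up shipping multiple masters.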
Congenital hearing defect is why I use subtitles.
I did not mean to be prejudiced. Sorry.
As I am getting older, I think I can’t hear as well as before.
Bitter reality
Try some old time TV shows to see if that’s really the case. I don’t have issues watching old Star Trek, but current shows are a shot in the dark.
tiramichu@sh.itjust.works 1 week ago
Dynamic range and loudness normalisation is surely the main reason people are using subtitles, but habits are undeniably changing too, as is the way people consume media in general.
People don’t just look at the TV for an hour straight - they are doing other things, or second-screening, or having conversations, and having multiple ways to pick up on the show’s dialog is helpful.
Let’s not forget simple reasons like accessibility, either. My friend here in the UK is Hungarian, and despite being completely fluent in English he always likes to watch shows with subtitles as it helps with understanding some British accents which can be tricky for non-natives.
And people just process information in different ways. We’ve all heard by now that some individuals can be visually oriented, while other people are aural. If you get a choice, why not take it?
Not to mention that subs on streaming services have much better visual quality and timing than subs on broadcast TV used to, which felt nasty and mis-timed, and very second-class. Clearly ‘good enough’ for hard-of-hearing individuals, but not very pleasant.
I don’t think it’s a hot take to say that as accessibility features get better and more available, more people will use them. And accessibility is for everyone.
acockworkorange@mander.xyz 1 week ago
I think you’re missing the point. Lack of LUFS standards is what forces people that normally wouldn’t/don’t like to use subtitles to use them because they can’t understand dialogue otherwise.
tiramichu@sh.itjust.works 1 week ago
I don’t disagree with that, all I’m saying is there are additional factors in play which also account for at least some of the rise in subtitle usage. It’s not all down to a single cause.
Volume normalisation is a problem, but it’s also true that people aren’t the same as they were 20 years ago and don’t behave the same as 20 years ago.
Tar_alcaran@sh.itjust.works 1 week ago
There are native Londoners from the west of London who have trouble understanding the native Londoners from the east of London, and vice versa.
corsicanguppy@lemmy.ca 1 week ago
If you’ve seen subtitles lately: they used to be pretty bad, but now they’re horrible. They mess up on what’s being said a LOT.
Also, they spell like a primary-school dropout: till, your/you’re, etc.
I_Has_A_Hat@lemmy.world 1 week ago
You clearly don’t remember the days of live closed captions. Hoo-boy, you could pinpoint the moments the transcriber lost their focus.
boatswain@infosec.pub 1 week ago
Wouldn’t this make subtitles less useful rather than more? You can’t see the subtitles if you’re not just looking at the TV. For second-screening, it would be more helpful to listen to the audio while you’re also scrolling Lemmy or whatever.
tiramichu@sh.itjust.works 1 week ago
Sure, but it’s multi-modal.
If you’re having a conversation, or doing some other task that makes sound, or scrolling social media and a video starts playing, there could be a noise that momentarily covers up the audio and you miss something. If there are subs then you can also quickly glance to see what was going on.
Listening to spoken dialog allows you to look away, but subs let you catch back up if you miss the spoken dialog.