tal
@tal@lemmy.today
- Comment on Uh Oh: Nintendo Just Landed A ‘Summoning’ And ‘Battling’ Patent 1 hour ago:
copyright
This isn’t a copyright, but rather a patent.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 12 hours ago:
-A backend is where all the weird C++ language stuff happens to generate a response from an AI. -A front end is a pretty app or webpage that takes that response and makes it more digestible to the user.
Yes.
-agreed. I’ve seen in other posts that exposing a port on windows defender firewall is the easiest (and safest?) way to go for specifically what I’m looking for. I don’t think I need to forward a port as that would be for more remote access.
Yes. I’d like to confirm that that is not happening, in fact.
The ipv6 was identical to one of the ones I have.
Hmm. Okay, thanks for mentioning the IPv6 thing. It is possible to have the ollama reachable from the Internet via IPv6, if it’s forwarded. I should have thought of that too and mentioned that. Shouldn’t need to open an IPv6 hole in the Windows Firewall, but would rather not rely on the Windows Firewall at all.
It shouldn’t be an issue if ollama is only listening on an IPv4 address. You only see the “0.0.0.0:11434” line, right? No other lines, probably with brackets in the address, that have a “:11434”, right? That could be an IPv6 address.
goes to look for an example of Windows netstat output showing a listening IPv6 socket
Here:
configserverfirewall.com/…/netstat-command-to-che…
Can you just make sure that there’s nothing like
[::]:11434
in there? That’d be what you’d see if it were listening for IPv6 connections.

Sorry, just don’t know ollama’s behavior off the top of my head and want to be sure on this before moving ahead, don’t want to create any security issues.
The ipv4 was not identical. (But I don’t think that matters moving forward.)
Yeah, that’s expected and good. The one from the website is your public IP address, and the one from ipconfig your private one, that you’ll use to talk to the machine with your phone.
I had to go into the settings in the ollama backend app to enable “expose Ollama to the network”.
Great, yeah, that was the right move.
Okay, then just want to sanity check that your iOS device is in the same address range on your WiFi network, that the 10.x.x.x address on your LLM PC isn’t from a VPN or something (since it’s a little unusual to use a 10.x.x.x address on a home broadband router, and I want to make sure that that’s where the address is from). Go ahead and put the iOS device on your WiFi network if you have not already.
This describes how to check the IP address on an iOS device.
servicehub.ucdavis.edu/servicehub?id=ucd_kb_artic…
You should also be seeing a 10.x.x.x address there. If you don’t, then let’s stop and sort that out.
If that’s a 10.x.x.x address as well, then should be good to go.
Oh, one last thing. In the ipconfig output, can you make sure that the “Subnet Mask” reads “255.0.0.0”? If it’s something different, can you provide that? It’ll affect the “/8” that I’m listing below.
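As a quick sanity check on that relationship: “/8” and a 255.0.0.0 subnet mask are the same thing written two ways, and you can verify which addresses a rule like 10.0.0.0/8 covers with Python’s standard-library ipaddress module (the specific addresses below are made up for illustration):

```python
import ipaddress

# A /8 prefix means the first 8 bits pick the network, which is
# exactly what the 255.0.0.0 subnet mask expresses.
net = ipaddress.ip_network("10.0.0.0/8")
print(net.netmask)                                 # 255.0.0.0

# Any address whose first number is 10 falls inside the range:
print(ipaddress.ip_address("10.99.1.2") in net)    # True
print(ipaddress.ip_address("192.168.1.5") in net)  # False
```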
Okay, if you’ve got that set up and there are no other “:11434” lines and the Subnet Mask is “255.0.0.0”, the next step is to poke a hole in Windows Firewall on IPv4 TCP port 11434.
kagis for screenshots of someone doing this on Windows 11
windowsreport.com/windows-firewall-allow-ip-range…
I’m assuming that this is Windows 11 on your PC, should have asked.
You’re going to want a new inbound rule, Protocol TCP, Port 11434.
For “local IP addresses”, you want “These IP Addresses”, and enter
10.0.0.0/8
. That’ll be every IPv4 address on your Windows LLM PC that has “10” as its first number — you said that you had a “10.” from ipconfig.

For “remote IP addresses”, you want “These IP Addresses”, and enter
10.0.0.0/8
. Same thing: all addresses that start with a “10.”, which should include your iOS device.

Okay. Now you should have a hole in Windows Firewall. Just to confirm that port 11434 isn’t reachable from the Internet, I’m gonna use one of the port-open-testing services online. My first hit is for one that only does IPv4 and another that only does IPv6, but I guess doing two sites is okay. Can you go to this site (or another, if you know of a site that does port testing that you prefer)
www.yougetsignal.com/tools/open-ports/
Plug in your public IPv4 address there (not the private one from ipconfig, the one from that website that I listed earlier) and port 11434. It should say “closed” or “blocked” or something that isn’t “open”. If it’s “open”, go back and pull that firewall rule out, because your router is forwarding incoming IPv4 connections to your LLM PC in some way that’s getting through, and we gotta work out how to stop that.
Here’s an IPv6 port tester. Plug in your IPv6 address there (which you said was the same from both the website and ipconfig) and port 11434. It should also say “closed” or “blocked” or similar. If it says “open” — I very much doubt this — then go back and pull out the firewall rule.
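For what it’s worth, those port-testing websites aren’t doing anything magic; they just attempt a TCP connection and report whether it succeeds. Here’s a small Python sketch of the same idea, demonstrated against localhost (probing your own public address from inside your LAN doesn’t always behave the same way, so the websites are still the better check):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo: bind a throwaway listener on localhost, probe it, close it,
# probe again.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
print(port_open("127.0.0.1", port))   # True: something is listening
listener.close()
print(port_open("127.0.0.1", port))   # False: nothing listening now
```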
If both say “closed”, then go ahead and install Reins.
Based on this:
www.reddit.com/r/ollama/comments/1ijdp1e/reins/
It’ll let you input an “endpoint”.
Plug in the private IPv4 address from your LLM PC, what was in ipconfig, like “10.something.something.something:11434” and you should, hopefully, be able to chat.
- Comment on Big Tech: Convenience is a Trap 13 hours ago:
This is true of the overwhelming majority of YouTube videos I see submitted here. The information density is just abysmal compared to a page of text.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 14 hours ago:
Ollama does have some features that make it easier to use for a first-time user, including:
-
Automatically calculating how many layers can fit in VRAM, loading that many layers, and splitting the rest between CPU and GPU. kobold.cpp can’t do that automatically yet.
-
Automatically unloading the model from VRAM after a period of inactivity.
I had an easier time setting up ollama than other stuff, and OP does apparently already have it set up.
-
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 14 hours ago:
Backend/front end. I see those a lot but I never got an explanation for it. In my case, the backend would be Ollama on my rig, and the front end would be me using it on my phone, whether that’s with an app or web UI. Is that correct?
For Web-based LLM setups, it’s common to have two different software packages. One loads the LLM into video memory and executes queries on the hardware. That’s the backend. It doesn’t need to have a user interface at all. Ollama or llama.cpp (though I know that llama.cpp also has a minimal frontend) are examples of this.
Then there’s a frontend component. It runs a small Web server that displays a webpage that a Web browser can access, provides some helpful features, and can talk to various backends (e.g. ollama or llama.cpp or some of the cloud-based LLM services).
Normally the terms are used in the context of Web-based stuff; it’s common for Web services, even outside of LLM stuff, to have a “front end” and a “back end” and to have different people working on those different aspects. If Reins is a native iOS app, I guess it could be called a frontend.
But, okay, it sounds like probably the most-reasonable thing to do, if you like the idea of using Reins, is to run Ollama on the Windows machine, expose ollama’s port to the network, and then install Reins on iOS.
So, yeah, probably need to open a port on Windows Firewall (or Windows Defender…not sure what the correct terminology is these days, long out of date on Windows). It sounds like having said firewall active has been the default on Windows for some years. I’m pretty out-of-date on Windows, but I should be able to stumble through this.
While it’s very likely that you aren’t directly exposing your computer to the Internet — that is, nobody from the outside world can connect to an open port on your desktop — it is possible to configure consumer routers to do that. Might be called “putting a machine in the DMZ”, forwarding a port, or forwarding a range of ports. I don’t want to have you open a port on your home computer and have it inadvertently exposed to the Internet as a whole. I’d like to make sure that there’s no port forwarding to your Windows machine from the Internet.
Okay, first step. You probably have a public IP address. I don’t need or want to know that — that’d give some indication to your location. If you go somewhere like whatismyipaddress.com in a web browser from your computer, then it will show that – don’t post that here.
That IP address is most-likely handed by your ISP to your consumer broadband router.
There will then be a set of “private” IP addresses that your consumer broadband router hands out to all the devices on your WiFi network, like your Windows machine and your phone. These will very probably be “192.168.something.something”, though they could also be “172.something.something.something” or “10.something.something.something”. It’s okay to mention those in comments here — they won’t expose any meaningful information about where you are or your setup. This may be old hat to you, or new, but I’m going to mention it in case you’re not familiar with it; I don’t know what your level of familiarity is.

What you’re going to want is your “private” IP address from the Windows machine. On your Windows machine, if you hit Windows Key-R and then enter “cmd” into the resulting dialog, you should get a command-line prompt. If you type “ipconfig” there, it should have a line listing your private IPv4 address. It’ll probably be something like that “192.168.something.something”. You’re going to want to grab that address. It may also be possible to use the name of your Windows machine to reach it from your phone, if you’ve named it — there’s a network protocol, mDNS, that may let you do that — but I don’t know whether it’s active out-of-box on Windows or not, and would rather confirm that the thing is working via IP before adding more twists to this.
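Incidentally, whether an address falls in one of those private ranges is something Python’s standard library can tell you, if you ever want to double-check (the addresses here are made up):

```python
import ipaddress

# The private ranges are 192.168.0.0/16, 172.16.0.0/12, and 10.0.0.0/8.
for addr in ["192.168.1.10", "172.16.0.5", "10.4.2.1", "8.8.8.8"]:
    ip = ipaddress.ip_address(addr)
    print(addr, "private" if ip.is_private else "public")
```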
Go ahead and fire up ollama, if you need to start it — I don’t know if, on Windows, it’s installed as a Windows service (once installed, always runs) or as a regular application that you need to launch, but it sounds like you’re already familiar with that bit, so I’ll let you handle that.
Back in the console window that you opened, go ahead and run
netstat -a -b -n
.
That should list all of the programs listening on any ports on the computer. If ollama is up and running on that Windows machine and doing so on the port that I believe it is, then you should have a line that looks like:
TCP 0.0.0.0:11434 0.0.0.0:0 LISTENING
If it’s 0.0.0.0, then it means that it’s listening on all addresses, which means that any program that can reach it over the network can talk to it (as long as it can get past Windows Firewall). We’re good, then.
Might also be “127.0.0.1”. In that case, it’ll only be listening to connections originating from the local computer. If that’s the case, then it’ll have to be configured to use 0.0.0.0.
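The difference between the two is just which local address the listening socket is bound to; a tiny standard-library Python sketch, if it helps make that concrete:

```python
import socket

# A socket bound to 127.0.0.1 only accepts connections from the same
# machine; one bound to 0.0.0.0 accepts from any network interface.
loopback_only = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
loopback_only.bind(("127.0.0.1", 0))   # port 0: let the OS pick one
print(loopback_only.getsockname()[0])  # 127.0.0.1

all_interfaces = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
all_interfaces.bind(("0.0.0.0", 0))
print(all_interfaces.getsockname()[0])  # 0.0.0.0

loopback_only.close()
all_interfaces.close()
```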
I’m gonna stop here until you’ve confirmed that much. If that all works, and you have ollama already listening on the “0.0.0.0” address, then next step is gonna be to check that the firewall is active on the Windows machine, punch a hole in it, and then confirm that ollama is not accessible from the Internet, as you don’t want people using your hardware to do LLM computation; I’ll try and step-by-step that.
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 15 hours ago:
Yes I have Ollama on my windows rig.
TBH, I’m not sure if LibreChat has a web UI.
Okay, gotcha. I don’t know if Ollama has a native Web UI itself; if so, I haven’t used it myself. I know that it can act as a backend for various front-end chat-based applications. I do know that kobold.cpp can operate both as an LLM backend and run a limited Web UI, so at least some backends do have Web UIs built in. You said that you’ve already used Ollama successfully. Was this via some Web-based UI that you would like to use on your phone, or just some other program (LibreChat?) running natively on the Windows machine?
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 16 hours ago:
Oh! Also, I’m using windows on my PC. And my phone is an iPhone.
Okay, that’s a starting place. So if this is Windows, and if you only care about access on the wireless network, then I suppose that it’s probably easiest to just expose the stuff directly to other machines on the wireless network, rather than tunneling through SSH.
You said that you have ollama running on the Windows PC. I’m not familiar with LibreChat, but it has a Web-based interface? Are you wanting to access that from a web browser on the phone?
- Comment on Frustratingly bad at self hosting. Can someone help me access LLMs on my rig from my phone 17 hours ago:
ssh -L 0.0.0.0:3000:YOURPUBLICIP:3000
If you can SSH to the LLM machine, I’d probably recommend
ssh -L 127.0.0.1:11434:127.0.0.1:11434 <remote hostname>
. If for some reason you don’t have or inadvertently bring down a firewall on your portable device, you don’t want to be punching a tunnel from whatever can talk to your portable device to the LLM machine.

(Using 11434 instead of 3000, as it looks like that’s ollama’s port.)
- Comment on emergency remote access 17 hours ago:
-
If your problem is brief brownouts or similar — my experience is that some consumer broadband routers have cheap power supplies that leave them in bad states after brownouts that PCs pull through — you could put them on a UPS.
-
If your problem is that your router is unstable, you could just replace your router. Like, if you need remote access and you have a flaky router, that seems like a prime choice.
-
You could have a power control device or something and have another machine on your network set up so that if it loses Internet connectivity for some sustained period of time, it power-cycles the router.
-
If this is for when you’re a long ways away, do you have a friend who you’d trust with a key and flipping a switch?
-
I expect that there are business-oriented routers that will have integrated watchdog features that will auto-reboot if they hang. I have not gone looking, though.
-
Possibly, if it’s compatible with your use case, and uptime is critical enough here, having a second, backup server elsewhere, possibly not self-hosted. I mean, your connectivity is always going to be bounded by the reliability of your residential Internet connection otherwise.
-
- Comment on RFK Jr. Blames violent video games for Mass Shootings. 18 hours ago:
Jack Thompson
For those who don’t remember this guy, he was pretty obnoxious.
- Comment on Why have energy drinks been banned for under-16s in England? The real question is why it wasn’t done sooner 21 hours ago:
beveragedaily.com/…/Scotland-scraps-plans-to-ban-…
Scotland will not pursue a ban on sales of energy drinks to children and young people, saying there is not enough evidence the policy would be successful.
I suppose that it’s always possible to journey to the exciting, anything-goes frontier land of Scotland.
- Comment on Why have energy drinks been banned for under-16s in England? The real question is why it wasn’t done sooner 21 hours ago:
Expecting parents to police kids’ intake of a psychoactive drug is unrealistic
Caffeine is a psychoactive drug that activates the central nervous system.
I have to say that having someone over for water and crumpets just doesn’t have quite the same ring as tea and crumpets.
- Comment on It's Not Just You: Music Streaming Is Broken Now 1 day ago:
Especially if it had Milkdrop Visualizer.
I wouldn’t be surprised if there’s a ProjectM port. It’s in Debian.
checks
Yup. Not on F-Droid, though.
play.google.com/store/apps/details?id=com.psperl.…
github.com/projectM-visualizer/projectm
projectM is an open-source project that reimplements the esteemed Winamp Milkdrop by Geiss in a more modern, cross-platform reusable library.
- Comment on Ed Miliband accused of subsidising ‘wasteful and dangerous’ electric SUVs 1 day ago:
Numbers of these giant cars have increased tenfold on the streets of England’s cities in the past two decades, now comprising 30% of urban vehicles.
Some of that is just energy density. Batteries aren’t as energy-dense as gasoline.
I was kind of grouchy about the size of EVs too — even if you aren’t talking something technically classified as an SUV, CUVs or even sedans/hatchbacks are getting quite large. But I kinda rethought that when I was complaining about the lack of a spare tire in a post a while back. Like, it was the EVs and hybrid models that got the spare tire squeezed out first. I’m sure that manufacturers are hunting for all the spare space they can in the vehicles. I’m pretty resigned to that just being something that’s probably going to happen.
It’s not as if they have some huge glut of unused space somewhere in the EV to stick more battery. Plus, the batteries are heavy, so they gotta stay fairly low in the vehicle to keep the center of gravity low and avoid rollovers; that’s even more constraint.
- Comment on Is there a music thing that handles single tracks 1 day ago:
opus can’t be tagged
I’m pretty sure that it supports tagging.
goes to try it out
$ yt-dlp -x https://www.youtube.com/shorts/syF8M3aeiWs >/dev/null
$ opustags beep\ sound\ effect\ \[syF8M3aeiWs\].opus | grep -v ^METADATA
language=eng
encoder=Lavf61.7.100
title=beep sound effect
date=20230316
purl=https://www.youtube.com/watch?v=syF8M3aeiWs
synopsis=beep
DESCRIPTION=https://www.youtube.com/watch?v=syF8M3aeiWs
artist=Seth's old channel
$
If you mean that this MusicBrainz Picard thing doesn’t support tagging Opus, it sounds like it does:
community.metabrainz.org/t/…/467209
I have been having trouble tagging .opus files. Every time I try to edit .opus files I get:
(error: read b’\x1aE\xdf\xa3’, expected b’OggS’, at 0x0)
Looks like this is not a valid Ogg Opus file. Opus is just an audio codec, not a file format. Files with the file extension .opus are supposed to be inside an Ogg container, and that’s what Picard supports.
And looking at the output of
yt-dlp -x
, it looks like it’s Opus in an Ogg container:

$ file beep\ sound\ effect\ \[syF8M3aeiWs\].opus
beep sound effect [syF8M3aeiWs].opus: Ogg data, Opus audio, version 0.1, stereo, 48000 Hz (Input Sample Rate)
$
- Comment on Vape ban isn't working, says waste firm boss 1 day ago:
Hmm. The article’s talking about kids:
A government spokesperson said: “Single-use vapes get kids hooked on nicotine and blight our high streets - it’s why we’ve taken tough action and banned them.”
I’m wondering if, when you’re a kid, there’s a risk of the vape being seized – like, get caught with it at school or something, I assume that you don’t get it back. Probably ditto for parents keeping 'em if they find 'em. If you’re a kid, it might be rational to use disposables if the goal is to mitigate the cost of seizure of your vape.
kagis
Picking a random online vape shop, huffandpuffers.com, it looks like they sell disposable vapes for maybe $13 (there’s an $8 one, “Daze Clickmate Max”, but it’s out-of-stock). The cheapest reusable is a (small-capacity) fixed-battery refillable version of the disposable one, at $8 (“Daze Clickmate Max”). There are vapes with replaceable batteries that can take lithium 18650s, but those are $55 to $70, and it looks like they don’t include the 18650; it looks like an 18650 goes for maybe $3 or $4 online, so figure $60 or more.
At that ratio, a reusable costing around 7 times what a disposable might, it wouldn’t take an incredibly high seizure rate for it to be worthwhile for a kid to use disposables.
- Comment on It's Not Just You: Music Streaming Is Broken Now 1 day ago:
We still get everything compressed
I don’t know if mastering engineers are doing so, but the streaming services removed the volume benefit to doing so. If you use DRC (dynamic range compression), your music will be cut in volume.
- Comment on Vape ban isn't working, says waste firm boss 1 day ago:
Frankly, I’m kind of surprised that people use disposable vapes. It doesn’t seem like it buys them much relative to using a reusable one. It’s gonna cost more over time. I don’t believe that they require maintenance.
- Comment on It's Not Just You: Music Streaming Is Broken Now 1 day ago:
physical media CDs for music
My understanding is that the streaming services basically ended the loudness war by imposing volume normalization. I’m not sure that I want to restart it.
- Comment on Signal announces a backup feature that includes 100MB of storage for texts and the last 45 days' worth of media for free, or 100GB of storage for $1.99/month 2 days ago:
I’d mostly be interested for E2E encryption.
- Comment on Is there no good inexpensive CAD software? 4 days ago:
About the only really good examples of that that I know of are OpenSCAD and Graphviz.
Like, things that take in a text file with programming capabilities describing what to generate? I can think of a couple off the top of my head.
- Comment on "Very dramatic shift" - Linus Tech Tips opens up about the channel's declining viewership 5 days ago:
Third it has network effect going for it. Nobody is going to watch videos on your platform if there’s only a couple dozen of them total. The sheer size and scope of YouTube means no matter what you’re looking for you can find something to watch.
Yeah, though I think that you could avoid some of that with a good cross-video-hosting service search engine, as I don’t think that most people are engaging in the social media aspect of YouTube. YouTube doesn’t have a monopoly on indexing YouTube videos.
But the scale doesn’t hurt them, that’s for sure.
- Comment on 5 days ago:
I did see some depth=1 or something like that to get only a certain depth of git commits but thats about it.
Yeah, that’s a shallow clone. That reduces what it pulls down, and I did try that (you most-likely want a bit more, probably to also ask to only pull down data from a single branch) but back when I was crashing into it, that wasn’t enough for the Cataclysm repo.
It looks like it’s fixed as of early this year; I updated my comment above.
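For anyone wanting to try the flags mentioned above, here’s a sketch of a shallow, single-branch clone. It’s demonstrated against a throwaway local repository rather than GitHub so it runs anywhere; in real use you’d point it at the repository URL instead of the file:// path:

```python
import os
import subprocess
import tempfile

def git(*args):
    """Run a git command, raising on failure."""
    subprocess.run(["git", *args], check=True)

# Build a throwaway repository with two commits.
tmp = tempfile.mkdtemp()
full = os.path.join(tmp, "full")
git("init", "-q", full)
for msg in ("one", "two"):
    git("-C", full, "-c", "user.email=a@b.c", "-c", "user.name=a",
        "commit", "-q", "--allow-empty", "-m", msg)

# --depth 1 fetches only the newest commit; --single-branch skips any
# other branches.  (file:// matters: a plain local path would bypass
# the transport that shallow clones go through.)
shallow = os.path.join(tmp, "shallow")
git("clone", "-q", "--depth", "1", "--single-branch",
    f"file://{full}", shallow)

count = subprocess.run(
    ["git", "-C", shallow, "rev-list", "--count", "HEAD"],
    capture_output=True, text=True, check=True,
).stdout.strip()
print(count)   # 1: only the newest of the two commits came down
```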
- Comment on 5 days ago:
Thanks. Yeah, I’m pretty sure that that was what I was hitting. Hmm. Okay, that’s actually good — so it’s not a git bug, then, but something problematic in GitHub’s infrastructure.
- Comment on 5 days ago:
A bit of banging away later — I haven’t touched Linux traffic shaping in some years — I’ve got a quick-and-dirty script to set a machine up to temporarily simulate a slow inbound interface for testing.
:::spoiler slow.sh test script
#!/bin/bash
# Linux traffic-shaping occurs on the outbound traffic. This script
# sets up a virtual interface and places inbound traffic on that virtual
# interface so that it may be rate-limited to simulate a network with a
# slow inbound connection.
# Removes induced slow-down prior to exiting.

# Physical interface to slow; set as appropriate
oif="wlp2s0"

modprobe ifb numifbs=1
ip link set dev ifb0 up
tc qdisc add dev $oif handle ffff: ingress
tc filter add dev $oif parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root handle 1: htb default 10
tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbit
tc class add dev ifb0 parent 1:1 classid 1:10 htb rate 1mbit

echo "Rate-limiting active. Hit Control-D to exit."
cat

# shut down rate-limiting
tc qdisc delete dev $oif ingress
tc qdisc delete dev ifb0 root
ip link set dev ifb0 down
rmmod ifb
:::
I’m going to see whether I can still reproduce that git failure for Cataclysm on git 2.47.2, which is what’s in Debian trixie. As I recall, it got a fair bit of the way into the download before bailing out. Including here since I think that the article makes a good point that there probably should be more slow-network testing, and maybe someone else wants to test something themselves on a slow network.
Probably be better to have something a little fancier to only slow traffic for one particular application — maybe create a “slow” Podman container and match on traffic going to that — but this is good enough for a quick-and-dirty test.
- Comment on 5 days ago:
Two major applications I’ve used that don’t deal well with slow cell links:
-
Lemmyverse.net runs an index of all Threadiverse instances and all communities on all instances, and presently is an irreplaceable resource for a user on here who wants to search for a given community. It loads an enormous amount of data for the communities page, and has some sort of short timeout. Whatever it’s pulling down internally — I didn’t look — either isn’t cached or is a single file, so reloading the page restarts from the start. The net result is that it won’t work over a slow connection.
-
This may have been fixed, but git had a serious period of time where it would smash into timeouts and not work on slow links, at least to GitHub. This made it impossible to clone larger repositories; I remember failing to clone the Cataclysm: Dark Days Ahead repository, where one couldn’t even manage a shallow clone. This was greatly exacerbated by the fact that git does not presently have the ability to resume downloads if a download is interrupted. I’ve generally wound up working around this by git cloning to a machine on a fast connection, then using rsync to pull the repository over to the machine on a slow link, which, frankly, is a little embarrassing when one considers that git really is the premier distributed VCS tool out there in 2025.
-
- Comment on Mobile Phone Brands by Market Share (2007 vs 2025) 5 days ago:
Microsoft’s interest in Nokia was being able to compete with what is now a duopoly between Google and Apple in phones. They wanted to own a mobile platform. I am very confident that they did not want their project to flop. That being said, they’ll have had their own concerns and interests. Maybe Nokia would have done better to go down the Apple or Google path, but for Microsoft, the whole point was to get Microsoft hardware out.
- Comment on Tech companies pledge to ready Americans for an AI-dominated world 6 days ago:
And Amazon says it will help train 4 million people in AI skills and “enable AI curricula” for 10,000 educators in the US by 2028, while offering $30 million in AWS credits for organizations using cloud and AI tech in education.
So, at some point, we do have to move on policy, but frankly, I have a really hard time trying to predict what skillset will be particularly relevant to AI in ten years. I have a hard time knowing exactly what the state of AI itself will be in ten years.
Like, sure, in 2025, it’s useful to learn the quirks and characteristics of LLMs or diffusion models to do things with them. I could sit down and tell people some of the things that I’ve run into. But…that knowledge also becomes obsolete very quickly. A lot of the issues and useful knowledge for working with, say, Stable Diffusion 1.5 are essentially irrelevant as regards Flux. For LLMs, I strongly suspect that there are going to be dramatic changes surrounding reasoning and retaining context. Like, if you put education time into training people on that, you run the risk that they don’t learn stuff that’s relevant over the longer haul.
There have been major changes in how all of this works over the past few years, and I think that it is very likely that there will be continuing major changes.
- Comment on Under-16s to be banned from buying high-caffeine energy drinks including Monster 1 week ago:
Hmm. That’s a good point. I wonder if there’s trouble lurking there.
The government proposals will make it illegal to sell high-caffeine energy drinks containing more than 150mg of caffeine per litre to anyone under 16 in England.
Aight, so that’s their red line.
www.healthline.com/…/how-much-caffeine-in-coffee
A 12-ounce (oz) cup of brewed coffee may contain 113 to 247 milligrams (mg) of caffeine, whereas a smaller 8-ounce cup can contain about 95 to 200 mg.
Hmm. A liter is 2.8 times larger than a 12 fluid ounce cup, so that’d be 318 mg/L to 696 mg/L.
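The arithmetic above, spelled out:

```python
# 1 liter is about 33.814 US fluid ounces, so a liter is roughly 2.8
# times a 12 fl oz cup.  Scaling Healthline's per-cup range to per-liter:
FL_OZ_PER_LITER = 33.814
scale = FL_OZ_PER_LITER / 12
low, high = 113 * scale, 247 * scale
print(round(low), round(high))   # 318 696
```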
- Comment on where did we go wrong 1 week ago:
cyberpunk tench coat
I don’t know what makes a trench coat cyberpunk, but if it’s being reminiscent of Blade Runner, where Deckard wore a trench coat:
www.amazon.com/blade-runner-coat/s?k=blade+runner…
1-48 of 173 results for “blade runner coat”
It looks like there are also clothing companies that will tailor Blade Runner replica coats: