
DeepSeek Proves It: Open Source is the Secret to Dominating Tech Markets (and Wall Street has it wrong).

⁨571⁩ ⁨likes⁩

Submitted ⁨⁨2⁩ ⁨months⁩ ago⁩ by ⁨Cat@ponder.cat⁩ to ⁨technology@lemmy.world⁩

https://www.linuxfoundation.org/blog/deepseek-proves-it-open-source-is-the-secret-to-dominating-tech-markets-and-wall-street-has-it-wrong

source

Comments

  • Grandwolf319@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

    Pretty sure Valve has already realized the correct way to be a tech monopoly is to provide a good user experience.

    source
    • finitebanjo@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Idk, I kind of disagree with some of their updates at least in the UI department.

      source
      • b3an@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Yeah. Steam and I are getting older. Would be nice to adjust simple things like text size in the tool.

        source
        • -> View More Comments
    • BlameTheAntifa@lemmy.world ⁨2⁩ ⁨months⁩ ago

      Just stay far, far away from their forums.

      source
  • Semi_Hemi_Demigod@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Image

    source
    • simplejack@lemmy.world ⁨2⁩ ⁨months⁩ ago

      I remember this being some sort of Apple meme at some point. Hence the gumdrop iMac.

      source
      • diemartin@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

        I think the inspiration is from Modern Humorist (note: there’s no HTTPS)

        source
        • -> View More Comments
      • Semi_Hemi_Demigod@lemmy.world ⁨2⁩ ⁨months⁩ ago

        This was likely at the time of switching to OS X, which is based on FreeBSD

        source
        • -> View More Comments
  • meowmeowbeanz@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

    Wall Street’s panic over DeepSeek is peak clown logic—like watching a room full of goldfish debate quantum physics. Closed ecosystems crumble because they’re built on the delusion that scarcity breeds value, while open source turns scarcity into oxygen. Every dollar spent hoarding GPUs for proprietary models is a dollar wasted on reinventing wheels that the community already gave away for free.

    The Docker parallel is obvious to anyone who remembers when virtualization stopped being a luxury and became a utility. DeepSeek didn’t “disrupt” anything—it just reminded us that innovation isn’t about who owns the biggest sandbox, but who lets kids build castles without charging admission.

    Governments and corporations keep playing chess with AI like it’s a Cold War relic, but the board’s already on fire. Open source isn’t a strategy—it’s gravity. You don’t negotiate with gravity. You adapt or splat.

    Cheap reasoning models won’t kill demand for compute. They’ll turn AI into plumbing. And when’s the last time you heard someone argue over who owns the best pipe?

    source
    • Flocklesscrow@lemm.ee ⁨2⁩ ⁨months⁩ ago

      Governments and corporations still use the same playbooks because they’re still oversaturated with Boomers who haven’t learned a lick since 1987.

      source
  • tonytins@pawb.social ⁨2⁩ ⁨months⁩ ago

    Personally, I think Microsoft open sourcing .NET was the first clue open source won.

    source
  • Stovetop@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Deepseek is not open source.

    source
    • wise_pancake@lemmy.ca ⁨2⁩ ⁨months⁩ ago

      The model weights and research paper are, which is the accepted terminology nowadays.

      It would be nice to have the training corpus and RLHF too.

      source
      • TheOctonaut@mander.xyz ⁨2⁩ ⁨months⁩ ago

        the accepted terminology

        No, it isn’t. The OSI specifically requires the training data be available or at very least that the source and fee for the data be given so that a user could get the same copy themselves. Because that’s the purpose of something being “open source”. Open source doesn’t just mean free to download and use.

        opensource.org/ai/open-source-ai-definition

        Data Information: Sufficiently detailed information about the data used to train the system so that a skilled person can build a substantially equivalent system. Data Information shall be made available under OSI-approved terms.

        In particular, this must include: (1) the complete description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures, and data processing and filtering methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.

        As per their paper, DeepSeek R1 required a very specific training data set, because when they tried the same technique with less curated data they got “R1-Zero”, which basically ran fast and spat out a gibberish salad of English, Chinese and Python.

        People are calling DeepSeek open source purely because they called themselves open source, but they seem to just be another free-to-download, black-box model. The best comparison is to Meta’s Llama, which weirdly nobody has decided is going to up-end the tech industry.

        In reality, “open source” is a terrible fit as terminology here, when what’s really meant is that anyone could recreate or modify the model because they have the exact ‘recipe’.

        source
      • kryptonidas@lemmings.world ⁨2⁩ ⁨months⁩ ago

        The training corpus of these large models seems to be “the internet, YOLO”. Apparently it’s fine for them to download every book and paper under the sun, but if a normal person does it…

        Believe it or not:

        Image

        source
      • Stovetop@lemmy.world ⁨2⁩ ⁨months⁩ ago

        A lot of other AI models can say the same, though. Facebook’s is. Xitter’s is. Doesn’t mean I trust them for shit.

        source
        • -> View More Comments
      • ayyy@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

        I wouldn’t call it the accepted terminology at all. Just because some rich assholes try to will it into existence doesn’t mean we have to accept it.

        source
      • gamer@lemm.ee ⁨2⁩ ⁨months⁩ ago

        The model weights and research paper are

        I think you’re conflating “open source” with “free”

        What does it even mean for a research paper to be open source? That they release a docx instead of a pdf, so people can modify the formatting? Lol

        The model weights were released for free, but you don’t have access to their source, so you can’t recreate them yourself. Like Microsoft Paint isn’t open source just because they release the machine instructions for free. Model weights are the AI equivalent of an exe file. To extend that analogy, quants, LORAs, etc are like community-made mods.

        To be open source, they would have to release the training data and the code used to train it. They won’t do that because they don’t want competition. They just want to do the facebook llama thing, where they hope someone uses it to build the next big thing, so that facebook can copy them and destroy them with a much better model that they didn’t release, force them to sell, or kill them with the license.
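To make the “community-made mods” part of the analogy concrete, here is a toy int8 quantization sketch (purely illustrative; real quant schemes such as GPTQ or GGUF k-quants are far more involved):

```python
def quantize_int8(weights):
    """Map float weights onto int8 values plus one shared scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # guard: all-zero weights
    return [round(w / scale) for w in weights], scale

def dequantize(quants, scale):
    """Recover approximate float weights from the quantized form."""
    return [q * scale for q in quants]

weights = [0.5, -1.27, 0.0, 1.0]        # stand-in for released model weights
quants, scale = quantize_int8(weights)  # the community “mod”: 4x smaller
approx = dequantize(quants, scale)      # close to the original weights
```

The analogy holds either way: both the original weights and the quantized copy are artifacts you can only transform, not rebuild from source.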

        source
      • maplebar@lemmy.world ⁨2⁩ ⁨months⁩ ago

        the accepted terminology nowadays

        Let’s just redefine existing concepts to mean things that are more palatable to corporate control, why don’t we?

        If you don’t have the ability to build it yourself, it’s not open source. Deepseek is “freeware” at best. And that’s to say nothing of what the data is, where it comes from, and the legal ramifications of using it.

        source
      • sem@lemmy.blahaj.zone ⁨2⁩ ⁨months⁩ ago

        They are trying to make it accepted, but it’s still contested. Unless the training data is provided, it’s not really open.

        source
      • lemmydividebyzero@reddthat.com ⁨2⁩ ⁨months⁩ ago

        But then, people would realize that you got copyrighted material and stuff from pirating websites…

        source
      • legolas@fedit.pl ⁨2⁩ ⁨months⁩ ago

    Well, if they really are, and the methodology can be replicated, we are surely about to see a crazy number of DeepSeek competitors, because imagine how many US companies in the AI and finance sectors are in possession of an even larger number of chips.

    Although the question arises: if the methodology is so novel, why would these folks make it open source? Why would they share the results of years of their work with the public, losing their edge over the competition? I don’t understand.

    Can somebody who actually knows how to read a machine learning codebase tell us something about DeepSeek after reading their code?

        source
        • -> View More Comments
  • drahardja@lemmy.world ⁨2⁩ ⁨months⁩ ago

    DeepSeek shook the AI world because it’s cheaper, not because it’s open source.

    And it’s not really open source either. Sure, the weights are open, but the training materials aren’t. Good luck looking at the weights and figuring things out.

    source
    • HK65@sopuli.xyz ⁨2⁩ ⁨months⁩ ago

      I think it’s both. OpenAI was valued at a certain point because of a perceived moat of training costs. The cheapness killed the myth, but open sourcing it was the coup de grace as they couldn’t use the courts to put the genie back into the bottle.

      source
    • Hackworth@lemmy.world ⁨2⁩ ⁨months⁩ ago

      True, but they also released a paper that detailed their training methods. Is the paper sufficiently detailed such that others could reproduce those methods? Beats me.

      source
      • KingRandomGuy@lemmy.world ⁨2⁩ ⁨months⁩ ago

        I would say that in comparison to the standards used for top ML conferences, the paper is relatively light on the details. But nonetheless some folks have been able to reimplement portions of their techniques.

        ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even if they’re open source, since the optimization process is partly random (ordering of batches, augmentations, nondeterminism in GPUs etc.), and unfortunately even with seeding, the randomness is not guaranteed to be consistent across platforms.
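A stdlib-only sketch of the seeding point (hypothetical helper; real pipelines also have to seed NumPy/PyTorch and the GPU, and even then cross-platform determinism isn’t guaranteed):

```python
import random

def batch_order(num_samples, seed=None):
    """Return the order in which samples would be visited in one epoch.

    With a fixed seed the shuffle is reproducible; without one, every
    run can differ -- one of the many sources of ML irreproducibility.
    """
    rng = random.Random(seed)  # private RNG: does not disturb global state
    order = list(range(num_samples))
    rng.shuffle(order)
    return order

# Same seed, same ordering: the reproducible case
assert batch_order(8, seed=42) == batch_order(8, seed=42)
```

Even with every RNG pinned like this, GPU kernels and batch-level nondeterminism can still make two “identical” training runs diverge.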

        source
  • coherent_domain@infosec.pub ⁨2⁩ ⁨months⁩ ago

    I hate to disagree, but IIRC DeepSeek is not an open-source model but an open-weight one?

    source
    • canadaduane@lemmy.ca ⁨2⁩ ⁨months⁩ ago

      It’s tricky. There is code involved, and the code is open source. There is a neural net involved, and it is released as open weights. The part that is not available is the “input” that went into the training. This seems to be a common way in which models are released as both “open source” and “open weights”, but you wouldn’t necessarily be able to replicate the outcome with $5M or whatever it takes to train the foundation model, since you’d have to guess about what they used.

      source
      • vrighter@discuss.tchncs.de ⁨2⁩ ⁨months⁩ ago

        I view it as the source code of the model is the training data. The code supplied is a bespoke compiler for it, which emits a binary blob (the weights). A compiler is written in code too, just like any other program. So what they released is the equivalent of the compiler’s source code, and the binary blob that it output when fed the training data (source code) which they did NOT release.
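That reading can be sketched with a deliberately tiny, hypothetical “model” (one least-squares weight), just to show which artifact stays missing:

```python
def train(data):
    """The “compiler”: turns training data (the real source) into a weight (the blob)."""
    # toy closed-form least-squares fit of y = w * x
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, _ in data)
    return num / den

# The training data is the “source code” -- the artifact that was NOT released.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
w = train(data)  # the released “binary blob”

# Shipping train() plus w is like shipping a compiler and an .exe:
# without data, nobody can regenerate or meaningfully audit w.
```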

        source
        • -> View More Comments
      • bss03@infosec.pub ⁨2⁩ ⁨months⁩ ago

        Definitions are tricky, especially for terms the general public broadly considers virtuous/positive (cf. “organic”), but I tend to deny something is open source unless you can recreate any binaries/output AND it is presented in the “preferred form for modification” (i.e. the way the GPLv3 defines the “source form”).

        A disassembled/decompiled binary might nominally be in some programming language, and thus suitable input to a compiler for that language, but that doesn’t make it the source code for that binary, because it is not in the form that the entity best placed to produce a modified binary (normally the original author) would prefer to make modifications in.

        source
  • vxx@lemmy.world ⁨2⁩ ⁨months⁩ ago

    Didn’t it turn out that they used 10,000 Nvidia cards with H100-class chips, and the “low-budget success” story is a lie?

    source
    • neons@lemmy.dbzer0.com ⁨2⁩ ⁨months⁩ ago

      also they aren’t actually open source? Only the weights are open source?

      source
  • legolas@fedit.pl ⁨2⁩ ⁨months⁩ ago

    Apparently DeepSeek is lying: they were collecting thousands of Nvidia chips against the US embargo, and it’s not about the algorithm. That’s the story I’ve heard, and honestly it sounds legit.

    Not sure if this question has been answered: if it’s open-sourced, can’t we see what algorithms they used to train it? If we could, we would know the answer. I assume we can’t, but then what’s so cool about it being open source? What parts of the code are valuable there besides the algorithms?

    source
    • pennomi@lemmy.world ⁨2⁩ ⁨months⁩ ago

      The open paper they published details the algorithms and techniques used to train it, and it’s been replicated by researchers already.

      source
      • legolas@fedit.pl ⁨2⁩ ⁨months⁩ ago

        So are these techniques really so novel and groundbreaking? Will we now see a burst of DeepSeek-like models everywhere? Because that’s what absolutely should happen if the whole story is true. I would assume there are dozens or even hundreds of companies in the USA in possession of a similar, surely even larger, number of chips than the Chinese folks claim to have trained their model on.

        source
        • -> View More Comments
    • Viri4thus@feddit.org ⁨2⁩ ⁨months⁩ ago

      “China bad”

      *sounds legit

      “Sounds legit” is what one hears about FUD spread by anglophone media every time the US oligarchy is caught with its pants down.

      Snowden: “US is illegally spying on everyone”

      Media: Snowden is Russia spy

      *Sounds legit

      France: US should not unilaterally invade a country

      Media: Iraq is full of WMDs

      *Sounds legit

      DeepSeek: Guys, distillation and mixture of experts are a way to save money and energy, here’s a paper on how to do the same.

      Media: China bad, deepseek must be cheating

      *Sounds legit

      source
      • Deceptichum@quokk.au ⁨2⁩ ⁨months⁩ ago

        This is your brain on Chinese/Russian propaganda.

        source
        • -> View More Comments
      • mspencer712@programming.dev ⁨2⁩ ⁨months⁩ ago

        I don’t like this. Everything you’re saying is true, but this argument isn’t persuasive, it’s dehumanizing. Making people feel bad for disagreeing doesn’t convince them to stop disagreeing.

        A more enlightened perspective might be “this might be true or it might not be, so I’m keeping an open mind and waiting for more evidence to arrive in the future.”

        source
        • -> View More Comments
      • maplebar@lemmy.world ⁨2⁩ ⁨months⁩ ago

        Snowden really proved he wasn’t a Russian spy when he (checks notes) immediately fled to Russia with troves of American secrets…

        source
        • -> View More Comments
    • gamer@lemm.ee ⁨2⁩ ⁨months⁩ ago

      There’s so much misinfo spreading about this, and while I don’t blame you for buying it, I do blame you for spreading it. “It sounds legit” is not how you should decide to trust what you read. Many people think the earth is flat because the conspiracy theories sound legit to them.

      DeepSeek probably did lie about a lot of things, but their results are not disputed. R1 is competitive with leading models, it’s smaller, and it’s cheaper. The good results are definitely not from “sheer chip volume and energy used”, and American AI companies could have saved a lot of money if they had used those same techniques.

      source
    • ayyy@sh.itjust.works ⁨2⁩ ⁨months⁩ ago

      It’s time for you to do some serious self-reflection about the inherent biases you believe about Asians.

      source
      • legolas@fedit.pl ⁨2⁩ ⁨months⁩ ago

        WTF dude. You mentioned Asia. I love Asians. Asia is vast; there are many countries, not just China, bro. I think you need to do that reflection yourself. I’m talking about the very specific case of the Chinese DeepSeek devs potentially lying about the chips. The assumptions and generalizations you’re making are crazy.

        source
        • -> View More Comments
  • anzo@programming.dev ⁨2⁩ ⁨months⁩ ago

    Not exactly sure what “dominating” a market means, but the title makes a good point: innovation requires much more cooperation than competition. And the “AI race” between nations is an antiquated framing pushed by the media.

    source
  • AnimalsDream@slrpnk.net ⁨2⁩ ⁨months⁩ ago

    I’m not too informed about DeepSeek. Is it real open-source, or fake open-source?

    source
    • ifmu@lemmy.world ⁨2⁩ ⁨months⁩ ago

      It’s semi-open, not fully open source as the term is typically understood.

      source
      • AnimalsDream@slrpnk.net ⁨2⁩ ⁨months⁩ ago

        That sounds like fake open-source. Can I download the source, build it, have the thing run locally on my own machine, and use it without it having to interact with this company’s servers?

        source
        • -> View More Comments
    • DasKapitalist@lemmy.ml ⁨2⁩ ⁨months⁩ ago

      DeepSeek is the company; R1 is an MIT-licensed product, and they have the Qwen models under the Apache license.

      You can download them, modify them, and run them locally. There are many copies online, out of DeepSeek’s control.

      source
      • AnimalsDream@slrpnk.net ⁨2⁩ ⁨months⁩ ago

        I just looked it up.

        “The AI research company DeepSeek recently released its large language model (LLM) under the MIT License, providing model weights, inference code, and technical documentation. However, the company did not release its training code, sparking a heated debate about whether DeepSeek can truly be considered ‘open-source.’

        This controversy stems from differing interpretations of what constitutes open-source in the context of large language models. While some argue that without training code, a model cannot be considered fully open-source, others highlight that DeepSeek’s approach aligns with industry norms followed by leading AI companies like Meta, Google, and Alibaba.”

        So fake open-source.

        source