Comment on How the Simulation Argument Dampens Future Fanaticism – Center on Long-Term Risk

tetranomos@awful.systems 11 months ago

Let's get you up to date on the 21st century. Back in 2001, Margaret Runchey prototyped her unitary technology in "Model of Everything", patented work on ontological design that appeared just before Jeff Bezos' "API mandate" (2002). Now we're assessing how to model transaction artifacts that [learn], or [fail not to learn], about their own copies or clones, artifacts which "own people as data".

Quote: Having a maker or owner is the source of identity. The record of civilization is charted in official claims of origination. We have institutionalized mechanisms for establishing authenticity, one of the purposes government serves. This critical step is missing in current electronic models that apply entity status and standing to define virtual transaction artifacts that own people as data.

So that's copies of people [theorized as data objects or entities], depending on your philosophy of definition, not meaning. Why such a modeling of people is valuable is a different question from how it works. Interscience, as defined by reproducibility, measurability, falsifiability, and the like, has as borne out tended to become a failed project ("A.I." was deemed a downside back in 2007). So the question of pedigree alone is not enough (valuability): the mechanism independence, estimability (predictive power), testability, theory negotiability (conservatism), and sizeability (modularity) of a model explain what some join Baruch Spinoza in calling the power of the multitude, or "collective representations", or "manipulating shadows"* (as Fielding and Taylor put it).
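To make the quoted distinction concrete, here is a minimal sketch in TypeScript. Every type and field name is hypothetical, invented for illustration and taken from nothing in Runchey's patent: one shape shows a transaction artifact that holds a person as mere data it "owns", the other derives the artifact's identity from an official claim of origination by a maker.

```typescript
// Illustrative sketch only: all types and fields are hypothetical.

// The pattern the quote criticizes: the artifact itself gets entity
// status, and the person appears only as a payload the artifact holds.
interface ArtifactOwningPerson {
  artifactId: string;
  personData: { name: string; history: string[] }; // person reduced to data
}

// The alternative the quote gestures at: identity flows from an
// official, institutionally backed claim of origination by a maker.
interface ClaimOfOrigination {
  maker: string;      // the person or institution asserting authenticity
  assertedAt: Date;   // when the claim was recorded
  authority: string;  // the institutional mechanism backing the claim
}

interface ProvenancedArtifact {
  artifactId: string;
  origin: ClaimOfOrigination; // identity derives from the maker, not the payload
}

// Minimal usage: in the second model the person remains the source of
// identity instead of a field inside the transaction record.
const artifact: ProvenancedArtifact = {
  artifactId: "txn-001",
  origin: { maker: "alice", assertedAt: new Date(), authority: "registry" },
};
```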
