[ sourced from TechCrunch ]
This is the best summary I could come up with:
But in weighing DeepMind’s proposals, it’s informative to look at how the lab’s parent company, Google, scores in a recent study by Stanford researchers that ranks ten major AI models on how openly their developers operate.
On the other hand, in addition to its public musings about policy, DeepMind appears to be taking steps to change the perception that it’s tight-lipped about its models’ architectures and inner workings.
The lab, along with OpenAI and Anthropic, committed several months ago to providing the U.K. government “early or priority access” to its AI models to support research into evaluation and safety.
Perhaps the lab’s next big ethics test is Gemini, its forthcoming AI chatbot, which DeepMind CEO Demis Hassabis has repeatedly promised will rival OpenAI’s ChatGPT in capability.
After all, proteins exist in a soup of other molecules and atoms, and predicting how they will interact with stray compounds or elements in the body is essential to understanding their actual shape and behavior.
University of Nebraska–Lincoln CS student Luke Farritor trained a machine learning model to amplify the subtle patterns on scans of the charred, rolled-up papyrus that are invisible to the naked eye.
The original article contains 1,337 words, the summary contains 192 words. Saved 86%. I’m a bot and I’m open source!
itsonlygeorge@reddthat.com 1 year ago
Short answer: Nope.