SuspciousCarrot78@lemmy.world 1 day ago
Cool. Here you go; once with the physics stuff locked into “scratchpad” #1 and once as a locked document (.md version of the same info) in #2
#1 “scratchpad” specifically does not allow it to answer with outside knowledge - only pasted info. Thus the hard refusal. There’s a reason I set that up like that (see #3 below)
#2 “locked” looks for the answer in source; if not available, answers from priors.
#1 No recipe for scrambled eggs is provided in the scratchpad facts. The rules discuss physical containers, capacity, and overflow but do not include culinary instructions or ingredients.
Confidence: low | Source: Scratchpad
#2 [Not found in locked source SUMM_physics.md. Answer based on pre-trained data.]
Here is a simple scrambled egg recipe:

Ingredients:
- 2-3 large eggs
- 1-2 tablespoons milk or cream (optional)
- Salt and pepper to taste
- Butter or oil for cooking

Instructions:
1. Crack eggs into a bowl. Add milk/cream, salt, and pepper. Mix gently.
2. Heat a non-stick pan over medium heat. Add butter or oil.
3. Once butter foams, pour in the egg mixture.
4. Stir continuously with a spatula until eggs are soft and fully cooked—about 1–2 minutes.
5. Remove from heat and serve immediately.
Source: Model (not in locked file)
Confidence: unverified | Source: Model
For context, provenance footers (not vibes, actual computed states):
codeberg.org/BobbyLLM/llama-conductor/…/FAQ.md#wh…
#3 I also have a much more sophisticated demo of this, using adversarial questions, theory-of-mind, reversals, etc. When I use >>scratch, I want no LLM vibes or pre-trained data fudging it. Just pure reasoning. If the answer cannot be deduced from the context alone, it fails loudly.
codeberg.org/BobbyLLM/llama-conductor/…/FAQ.md#de…
All this shit could be done by the big players. They choose not to. Current infra is optimized for keeping people chatting, not leveraging the tool to do what it ACTUALLY can do.
zalgotext@sh.itjust.works 1 day ago
Yeah your response sounded like it was generated by an LLM, so I had to check. If you think that’s bad faith on my part, idk what to tell you
SuspciousCarrot78@lemmy.world 1 day ago
I see what the issue is. Basic reasoning and logic seem artificial to you. Telling.
Of course it’s bad faith. But not being able to distinguish an LLM from a human in a reasoning debate? That rather undermines the entire “just spicy auto complete” point.
zalgotext@sh.itjust.works 23 hours ago
You’re not gonna convince me, and I’m not gonna convince you. I’m done with this conversation before you devolve further into personal attacks.