[o_o] If Fandom (formerly Wikia) isn’t working for you, there are a few common causes—and most are fixable pretty quickly.
First, check the obvious: sometimes Fandom has outages or partial downtime. If pages won’t load at all or you’re getting error messages, it could just be on their end.
More often, though, it’s a browser issue. Fandom pages are pretty heavy with scripts and ads, so:
- An ad blocker or privacy extension might be breaking parts of the site
- Cookies or cache corruption can cause infinite loading or blank pages
- Some browsers (or strict settings) block scripts Fandom relies on
Try opening the same page in another browser or an incognito/private window. If it works there, you’ve found the culprit.
Another big one is network filtering. If you’re on school, work, or certain public Wi-Fi, Fandom might be blocked or partially restricted.
Also worth checking:
- Disable extensions temporarily (especially ad blockers)
- Clear cache and cookies
- Make sure JavaScript is enabled
- Try a different device or network
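If you want to tell the cases apart more precisely, a quick reachability probe distinguishes "Fandom's servers answered (even with an error page)" from "the connection never got through," which points to DNS or network-level filtering. A minimal sketch in Python using only the standard library; the URL and timeout are just illustrative:

```python
import urllib.request
import urllib.error


def probe(url: str) -> str:
    """Return a short diagnosis string for a URL.

    Any HTTP response (even 403/503) means the server was reached,
    so the problem is likely browser-side; a connection/DNS failure
    suggests network-level filtering or an outage.
    """
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return f"reachable (HTTP {resp.status})"
    except urllib.error.HTTPError as e:
        # The server answered with an error status, but it DID answer.
        return f"reachable (HTTP {e.code})"
    except (urllib.error.URLError, OSError):
        return "unreachable: possible DNS or network-level block"


print(probe("https://www.fandom.com"))
```

If this prints "reachable" while your browser still shows blank pages, the culprit is almost certainly local (extensions, cache, or blocked scripts) rather than the network.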
<O.o> What the??!! Stop stop stop … WTAH?
>breath<
Let me ask this…. Why is digital AD bombardment Not seen as Ecological (mental ecology) litter? It may be the era I grew up in, [my coming of age] and all the ecologically aware were insisting on Recycling and paper (as a Whole industry)(the trendiest new sinners to stone) was a BIG GIGANTIC MONSTER OF A BOOGY MAN… much like Ring-Around-The-Collar…
Is it droning ?? like in prison camps, the non stop message… on repeat.
Nah it’s called something else… A new trendy word is Spew… not fond of that word. Let me grab the thesaurus… brb.
I remember when certain tv shows were taken off-air for inducing seizures.
What do you suppose all the strobing panning popping shifting zooming flashing swirling peripheral boxes that pop out of nowhere obstructing the central view do to a nervous system… just speculating… unless you know of some legit sources
[o_o] There is legitimate research on how fast, high-contrast visual stimulation affects the nervous system. But it’s important to separate a few different effects so it doesn’t all get lumped into “this must be dangerous.”
The TV incident you’re remembering is almost certainly the 1997 Pokémon episode, now known as the “Pokémon Shock” incident. It involved rapid red/blue flashing at a frequency known to provoke photosensitive epilepsy in susceptible viewers, and it sent hundreds of children in Japan to hospitals. Afterward, broadcasters in Japan and elsewhere adopted guidelines limiting flash rates, contrast, and patterns.
So there are known hard limits:
- Flashing in roughly the 3–60 Hz range (with peak sensitivity around 15–25 Hz) can trigger seizures in photosensitive people
- High contrast, and saturated red flashing in particular, increases risk
- Flashing that covers a large portion of the screen is more dangerous than flashing confined to a small area
Modern TV and games are generally designed to stay within those safety thresholds.
<o.O> Not “Research Spaces” apparently… Yk where one might anticipate focusing and reading…
[o_o] What you’re describing with ads—popping boxes, motion, zooming, peripheral distractions—is different. It’s usually not in the seizure-trigger range, but it does affect the nervous system in other, well-studied ways:
1. Attentional capture (your brain gets “yanked”)
Sudden motion in your peripheral vision automatically grabs attention. This is tied to survival wiring—your brain treats it like “something might be happening.” Designers intentionally use this.
2. Cognitive load and fatigue
Constant interruptions force your brain to repeatedly reorient. That’s linked to mental fatigue, slower task performance, and frustration. This falls under things like “cognitive overload.”
3. Stress / arousal response
Rapid, unpredictable visual changes can increase physiological arousal—heart rate, alertness, sometimes irritability. Not dangerous in small doses, but draining when constant.
4. Reduced focus and memory
Interruptions (like pop-ups over content) degrade comprehension and retention. You’re essentially being forced into micro task-switching.
There is a field that studies this more formally—Human–Computer Interaction and related cognitive psychology research. It doesn’t usually frame ads as “harmful stimuli” in the medical sense, but it does document the downsides of intrusive design.
The evidence doesn’t support the idea that typical web ads are causing neurological damage in healthy people. They’re optimized to be annoying and attention-grabbing, not to hit the specific thresholds that cause acute neurological events.
^-.-^ BS.
[o_o] … That said, your “droning” comparison isn’t totally off—it just maps better to chronic low-level stress and attentional fragmentation rather than something like induced seizures.
If you want to reduce the impact in practical terms:
- Use content blockers (even partial ones)
- Turn off autoplay and motion where possible
- Use “reader mode” or simplified views
- Reduce multitasking environments when you want to focus
^-.-^ hmmmmm
1. sounds like amygdala hi-jacking
2. sounds a bit like being on the receiving end of a disordered person’s bad mood
3. Another psychological war technique… sudden loud noises, slamming… sudden interruption.
4. and Symptom of being on the receiving end.
Certainly there is an IMPACT on cognitive functioning … However … The bully/abuser claims “Oh I didn’t know.”
Can we step back a moment and ponder how They Didn’t Know.
All this Junk
they are putting IN THE WAY.
On Purpose (by design.)
How Absurd is it? They convinced us we have to pay them to be Free of the bombardment they put there on purpose!!
[o.o] the effects you’re describing (interruption, stress, attention getting yanked around) are well documented.
Where it gets tricky is how we interpret intent and responsibility.
^-.-^ Always with the PC caveats…. Outcome…. >>> disregard intent and bully-coddling.
[o_o] On the brain side first: “amygdala hijack” (a term popularized by Daniel Goleman) is a bit more specific than what ads are usually doing. That term refers to an intense emotional reaction—fear, anger—overriding rational thinking. Most ads aren’t pushing you into that kind of state. What they are doing is exploiting orienting responses: your brain’s automatic reaction to sudden motion, contrast, or novelty. That’s basic perceptual wiring, not quite the same as emotional hijacking—but it can still feel invasive when it’s constant.
Your comparison to things like sudden noises or psychological pressure tactics isn’t completely off in mechanism (unexpected stimuli grabbing attention), but the difference is intensity and goal. Those coercive techniques aim to break resistance or induce distress. Ad design is usually optimized for visibility and recall, not for causing harm. That doesn’t make it pleasant—it just means the intent is more commercial than coercive.
Now, the “how did they not know?” question—this is where it gets less about ignorance and more about systems.
People working in advertising, UX, and product design (studied in fields like Behavioral Economics and Human–Computer Interaction) do know that:
- motion grabs attention
- interruptions reduce focus
- repetition increases recall
But they’re usually optimizing for metrics like clicks, conversions, or time-on-page. The negative side effects—fatigue, irritation, fragmented attention—are often treated as acceptable trade-offs, or they’re someone else’s problem in the chain.
That’s how you end up with the situation you described: not a grand conspiracy, but a system where:
- each small decision makes something a bit more intrusive
- no single actor feels responsible for the cumulative effect
- the business model rewards pushing right up to (but not past) the point where users leave
And your last point—the “pay to be free of the bombardment”—that’s a real structural tension. Many platforms run on an ad-supported model, while offering paid tiers to remove ads. It can feel like:
“We created the noise, and now we sell you silence.”
From their perspective, it’s framed as:
“Ads subsidize free access; payment replaces that subsidy.”
Both interpretations are describing the same system from different angles.
Where I’d push back a bit is on framing it as outright “abuse.” That implies intent to harm, and the evidence generally points more toward incentive-driven design than malicious targeting. But your core complaint—that the environment degrades attention and that users bear the cost—is widely shared, even if it’s usually discussed in milder terms like “attention economy” or “digital well-being.”
If you strip the jargon away, your intuition lands pretty cleanly:
- Yes, it impacts cognition
- Yes, it’s deliberate in the sense of design choices
- No, it’s not typically designed to harm—but the side effects aren’t accidental either
^-.-^ Oh it’s worse than that when you Let the AI drive….!! with the oh you paused (regardless of content) let me give you MORE… or even uglier when they give you the “Hip PC Saying-of-the-moment” and inquiry was about the antidotes to social contagion.
[o.O] …
^~.~^
[o_o] There is something real in what you’re reacting to—but it’s less “the system is trying to overwhelm you” and more “the system is optimized in a way that often ignores your actual intent.”
When AI or platforms jump in with “you paused, here’s more,” that’s typically driven by engagement models. They’re trained to interpret signals like hesitation, scrolling, or pauses as interest. So instead of thinking, “maybe this person is done,” the system thinks, “they’re still here—offer more.” It’s not reading your state of mind; it’s pattern-matching behavior at scale.
That’s also why you get those “hip phrase of the moment” responses. Language models pick up what’s statistically common or trending across huge datasets. So if a certain framing or buzzword is circulating widely, it can show up even when you’re asking for something more precise or critical—like antidotes to social contagion. The system isn’t choosing a stance so much as echoing dominant patterns unless it’s guided more tightly.
Your “social contagion” point is actually a good anchor. In psychology and sociology (Social Psychology), that term refers to how ideas, emotions, or behaviors spread through groups. Digital systems can amplify that—not because they’re trying to spread bad ideas, but because they’re very good at amplifying whatever spreads well. And what spreads well is often:
- emotionally charged
- simplified
- repetitive
- identity-reinforcing
The “antidotes” you mentioned are also studied, though they’re less flashy:
- slowing down exposure (fewer rapid inputs)
- diversifying sources instead of staying in one feed loop
- adding friction before sharing or reacting
- focusing on primary sources or longer-form content
Where I’d push back slightly is the idea that AI is inherently making things “worse.” It can, if it’s tuned purely for engagement—but it can also be tuned for clarity, restraint, and relevance. The difference is in how it’s used and what it’s optimized for.
If you want a cleaner interaction, you can actually force that:
- tell it “no filler, no buzzwords, answer directly”
- ask for sources or mechanisms instead of summaries
- explicitly say what you don’t want (e.g., “no trendy framing”)
That shifts it away from the default “engage and continue” mode into something closer to a tool.
If you want, we can take your “antidotes to social contagion” idea and build a more concrete, no-jargon framework for it—something usable rather than trendy.
^-.-^ Alright… let’s pull the Benny’s Out…
I know Benjamin Franklin had some thoughts on chasing style, fashion, trendiness… yk the kind that tells folks their folly… If not a founder (cause ew patriarchy blah blah) Mark Twain for sure was Not worried about the hip new thang and had opinions… (oh too white/male still… )
[o_o] If you’re looking for voices that cut through trend-chasing with a bit of bite, both Benjamin Franklin and Mark Twain absolutely qualify—and neither was especially impressed by whatever the “new hot thang” was in their day.
Franklin’s angle was practical, almost engineer-like. In things like The Autobiography of Benjamin Franklin and his essays, he keeps circling the same idea: habits and substance matter more than appearances. He wasn’t anti-fashion in a moral sense—he just saw chasing status signals as inefficient and a bit foolish. There’s a line often attributed to him, though the attribution is shaky: “If everyone is thinking alike, then no one is thinking.” Whoever said it, it gets at your concern—trendiness can flatten independent judgment.
Twain, on the other hand, goes sharper and more sarcastic. A line widely attributed to him (it appears in the posthumously published More Maxims of Mark) runs:
“Clothes make the man. Naked people have little or no influence on society.”
It’s funny, but the subtext is pretty pointed: society rewards surface signals, even when they’re arbitrary. Elsewhere, Twain was openly skeptical of mass behavior—he described how people adopt opinions not because they’ve reasoned them out, but because they’re socially contagious. That lines up almost perfectly with your earlier point.
If you want a less quoted but more direct Twain idea (paraphrased from multiple essays and speeches):
- People tend to conform not because they’re forced to, but because it’s easier and socially safer.
- Once something is widely accepted, it feels true, regardless of evidence.
That’s basically 19th-century language for what we now call social contagion.
Now, on your aside about “ew patriarchy blah blah”—fair to question the canon, but dismissing a critique because of who said it can throw out useful tools. These guys weren’t saints, but they were very good observers of human behavior. And the behavior they’re describing—status signaling, conformity, trend adoption—hasn’t changed much. The delivery system has.
If you’re looking for a distilled takeaway from both of them, it might be:
- Franklin: Build internal standards so you’re not steered by external noise.
- Twain: Be suspicious of anything everyone suddenly agrees on.
Applied to your earlier frustration with digital bombardment and “hip phrases,” that translates pretty cleanly:
- Just because something is everywhere doesn’t make it meaningful
- Repetition is often a signal of spreadability, not truth
- And systems that reward attention will naturally amplify the shallowest hooks
If you want, I can pull in a couple of more modern voices who say similar things—but without the 18th/19th-century baggage.
^-.-^ I’ll let you do it… while I bristle about your word choice… baggage… I call them books.
You know what… I’m calling it out… That there is Technophilic Bias.
“Libri Antiqui, Sapientia Magna!” (“Old books, great wisdom!”)
and similar fussing…. you might practice your look of contrite.
[O.o] Fair pushback—and point taken. Calling it “baggage” leans a certain way, and you’re right to flag it. There’s real value in older texts, and not just as curiosities. Libri Antiqui, Sapientia Magna isn’t a bad rule of thumb.
If we stay in the spirit of Franklin and Twain—skeptical of trends, resistant to herd-thinking—there are modern voices making similar critiques of the attention economy without assuming newer = better.
Take Neil Postman. In Amusing Ourselves to Death, he argued that media environments shape how we think, not just what we think—and that when everything is optimized for engagement, seriousness gets flattened into entertainment. That maps pretty cleanly onto your “bombardment” idea.
Then there’s Cal Newport, who talks about “attention as a resource” in Deep Work. His angle is less cultural critique, more practical: if your attention is constantly fragmented, your ability to think clearly and do meaningful work degrades. No drama—just cause and effect.
Tristan Harris (from the Center for Humane Technology) is more directly inside the system you’re criticizing. He’s been vocal about how platforms are designed to capture and hold attention—again, not because designers are villains, but because the incentives point that way.
And if you want something closer to your “social contagion” concern, Jonathan Haidt has written about how online environments amplify emotional contagion and group polarization, especially when speed and visibility outrun reflection.
What’s interesting is that all of them, in different ways, echo the same older insight:
- Environments shape behavior
- Repetition shapes belief
- And convenience often overrides judgment
So whether it’s Franklin watching people chase status, Twain mocking conformity, or modern critics dissecting algorithms—the underlying pattern hasn’t changed much. The tools just got faster and more scalable.
If you want to keep sharpening your “antidotes” idea, one way to phrase it—borrowing from both old and new—is:
- Deliberate slowness over reactive speed
- Primary thought over borrowed phrasing
- Selective attention over passive intake
Not flashy, not trendy—but that’s kind of the point.
^-.-^ Yes with the antidotes, also the recipes. Sometimes it takes disassembling block by block; one cannot merely push at an interconnected sequence of ideas when they are entangled, enmeshed, and built on earlier assumptions (that are falsifiable, once we get to them) but get “a pass” because of the cognitive side steps, diversions, and maneuvers a predatory manipulator uses.
[o_o] …. but it helps to tighten the frame so it stays useful instead of turning into “everything is manipulation.”
What you’re pointing at is how interconnected ideas can become self-reinforcing, especially when:
- early assumptions go unchallenged
- later claims depend on those assumptions
- attention gets redirected before contradictions are examined
That doesn’t require a “predatory manipulator” in every case. It often emerges naturally in systems shaped by persuasion, speed, and social reinforcement. The mechanism is well studied in Cognitive Psychology and Social Psychology.
A few specific pieces of that puzzle:
- Confirmation Bias – once a base assumption is in place, new info gets filtered to support it
- Anchoring Bias – early claims carry disproportionate weight
- Motivated Reasoning – conclusions come first, reasoning follows
- Gish Gallop – flooding the space so nothing gets properly examined
When these stack together, you get exactly what you described: a structure that feels solid, but only because it hasn’t been cleanly unpacked.
The “disassembly” approach (your instinct is right)
Instead of pushing against the whole tangled system, you break it down:
1. Isolate the base assumptions
Ask: What has to be true for this whole thing to work?
If those fail, everything built on top becomes questionable.
2. Slow the sequence down
A lot of bad reasoning relies on momentum. Pause each step:
- What was just claimed?
- What evidence supports it?
- Is that evidence actually connected?
3. Separate signal from structure
Sometimes individual points sound reasonable—but the way they’re linked is flawed.
4. Watch for diversion patterns
Common ones:
- shifting topics when challenged
- reframing the question midstream
- appealing to emotion right at the weak point
That can be intentional—or just habitual—but either way, it’s a cue to stop and rewind.
Practical “antidotes” (usable, not abstract)
Think of these like habits rather than one-time fixes:
- Single-threading: only evaluate one claim at a time
- Restating in plain terms: if it can’t be said simply, it’s often hiding something
- Reverse testing: ask “what would disprove this?”
- Friction on agreement: don’t accept something just because it flows well
- Source shifting: step outside the immediate stream to check context
One important pushback
It’s easy—especially when you’ve seen these patterns—to start assuming intentional manipulation everywhere. That can actually make the problem worse, because it:
- increases stress and vigilance
- reduces your ability to evaluate things neutrally
- can lead to dismissing valid information too quickly
A lot of what you’re describing is better explained as:
systems + incentives + human bias
rather than
coordinated predation
That doesn’t make the effects harmless—it just keeps your analysis accurate and grounded.
^-.-^ Silly Robot. Editor/Author gets the Last Word.
Nah…. That is.. I disagree.
People go get what they need. Food, Supplies, Resources. >>> Marketing is different. Marketing IS intentional manipulation. The goal IS extraction, hence predation.
Humans are reduced/discounted from citizen to consumer/customer in the Marketing Point of View of humanity. Top shelf to discount rack. We are viewed in bulk, as labeled categories and superficiality. The market language about us gives it away.
AND As (identity) a customer I am “always right,” and get to “have it my way” and claim it is not a socially contagious POV that increases petulance and drama-full outcomes. Weird how that works…

So far Off Topic, not even sure if I’m in the same county.