AI ‘Slop’ Websites Are Publishing Climate Science Denial

At the start of June, MSN, the world’s fourth-largest news aggregator, posted an article from a new climate-focused publication, Climate Cosmos, entitled “Why Top Experts Are Rethinking Climate Alarmism.”

https://www.desmog.com/2025/08/27/ai-slop-websites-are-publishing-climate-science-denial/

#AI #tech #ClimateChange #GlobalWarming #IrreversibleOverheating #UpheavalClimate #GlobalBurning #ClimateDestruction #ClimateSuicide #MassExtinction #pollution #ecology #environment #climate

🗣SCREAM IT FROM THE ROOFTOPS!

❝…the makers of AI aren’t damned by their failures, they’re damned by their goals. They want to build a genie to grant them wishes, and their wish is that nobody ever has to make art again. They want to create a new kind of mind, so they can force it into mindless servitude. Their dream is to invent new forms of life to enslave.❞

I Am An AI Hater | moser’s frame shop - https://anthonymoser.github.io/writing/ai/haterdom/2025/08/26/i-am-an-ai-hater.html

#fascism #techbros #ai #slavery

The rumours about #Windows12 are super worrying for everyone who cares about #freedom and #dataprotection in private as well as professional use:

https://www.pcmag.com/news/ive-been-following-windows-12-rumors-whats-next-for-microsofts-os

The expected increase in built-in #AI features and even deeper integration of software with the #cloud are especially problematic if you handle sensitive information such as student exams, research data, and financial records daily. It's high time that public institutions, including #highereducation, consider alternatives!

Security researchers from Palo Alto Networks' Unit 42 have discovered the key to getting large language model (LLM) chatbots to ignore their guardrails, and it's quite simple.

You just have to ensure that your prompt uses terrible grammar and is one massive run-on sentence like this one which includes all the information before any full stop which would give the guardrails a chance to kick in before the jailbreak can take effect and guide the model into providing a "toxic" or otherwise verboten response the developers had hoped would be filtered out.

https://www.theregister.com/2025/08/26/breaking_llms_for_fun/

#cybersecurity #AI

Andy Piper boosted
https://cse.umn.edu/cbi/just-code-just-published

Just Code, just published!!! Open Access of full book is now freely available, see link above. OA made possible with an Alfred P. Sloan Foundation grant to CBI! Am so honored to partner w Con & work alongside 19 stellar, fellow interdisciplinary authors on this book! Full ToC in replies!

On mastodon @histoftech @scritic @mysdick
@dylanmulvin

@histodon
@commodon
@anthropology
@sociology
@politicalscience
@computerscience

#ai #tech #sociology #history #science

Imagine ransomware that writes its own playbook on the fly. PromptLock is using AI to dodge detection and hit across every platform. How ready are we for cyber threats that constantly evolve?

https://thedefendopsdiaries.com/ai-powered-ransomware-the-rise-of-promptlock-and-its-implications/

#ai
#ransomware
#cybersecurity
#promptlock
#infosec

RenkeSiems boosted

I feel seen by this paragraph.

So, so seen.

https://muse.jhu.edu/article/968497

(text in alt text)

#libraries #ai

(citations removed to fit character limit) This article makes the case that, first, this embrace of generative AI, reticent or otherwise, is dissonant to the field's institutional ideals of supporting the public good, wishing to provide access to information that has integrity, and abiding by sustainable practices whenever possible. Second, what is unfortunately not dissonant is the field's quick rationalization that technological solutions are ethical, simply because they illusively meet the immediate needs of staff and community members. I argue that this rationalization happens because library and information science (LIS) practitioners consider technology, their labor, and its interaction to be neutral and in so doing separate themselves from generative AI's material conditions. Third, the utilization of generative AI signals a shift in the responsibility of facilitating ethical labor practices in LIS, operating at some degree for the public good, onto privatized technological solutions that are constantly changing and fetishized. Technology cannot fully substitute the labor of a worker, no matter how much or how quickly we want our field-wide problems of precarity and burnout to be solved. Further, the technology of many public institutions in the United States is generally a decade behind private industry capacity. The reality is that our work has always intersected with technology, and this intersection has material, social, and cultural impacts.

[Listening] "AI, mass surveillance, record profits: what is Palantir's project?" on @radiofrance.fr, essential listening 👏. With Valentin Goujon, PhD at @medialab-scpo.bsky.social => https://www.radiofrance.fr/franceculture/podcasts/questions-du-soir-l-idee/ia-surveillance-generalisee-profits-record-quel-est-le-projet-de-palantir-1868286
#data #privacy #database #palantir #trump #surveillance #usa #AI #shs

@benroyce @maxleibman

As a fellow AI-despiser, I'd note that the term "AI" has gotten so squishy it can slither out of any criticism.

What this woman is using probably isn't generative AI - it certainly doesn't need to be. Rather this could be achieved with a task-specific machine-learning algorithm.

An efficient algorithm for this would not be anything like generative AI. So I'd hold that there remain no legitimate uses for "AI" as that term is typically used.

@jmcclure @maxleibman

Yup

"Wait so if you walk up to the glass door it senses you and automatically slides the door open? That's some cool AI!"

That's just about where we're at with the term nowadays

Say "AI" and there's a big jump in financial and popular interest; the technical details apparently don't matter.