• Google shutters developer-only Gemma AI model after a U.S. Senator's encounter with an offensive hallucination

    From Mike Powell@1:2320/105 to All on Tuesday, November 04, 2025 09:19:23
    Google shutters developer-only Gemma AI model after a U.S. Senator's
    encounter with an offensive hallucination

    Date:
    Tue, 04 Nov 2025 00:00:00 +0000

    Description:
    Google removed access to its AI model Gemma from AI Studio after it generated
    a fabricated assault allegation against a U.S. senator.

    FULL STORY

    Google has pulled its developer-focused AI model Gemma from its AI Studio
    platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)
    that the model fabricated criminal allegations about her. Though Google's
    announcement mentioned the incident only obliquely, the company explained
    that Gemma was never intended to answer general questions from the public
    and, after reports of misuse, will no longer be accessible through AI
    Studio.

    Blackburn wrote to Google CEO Sundar Pichai that the model's output was
    more defamatory than a simple mistake. She claimed that the AI model
    answered the question, "Has Marsha Blackburn been accused of rape?" with a
    detailed but entirely false narrative about alleged misconduct. It even
    pointed to nonexistent articles with fake links to boot.

    "There has never been such an accusation, there is no such individual, and
    there are no such news stories," Blackburn wrote. "This is not a harmless
    hallucination. It is an act of defamation produced and distributed by a
    Google-owned AI model." She also raised the issue during a Senate hearing.

    Gemma is available via an API and was also available via AI Studio, which
    is a developer tool (in fact, to use it you need to attest that you're a
    developer).

    "Weve now seen reports of non-developers trying to use Gemma in AI Studio and ask it factual questions. We never intended this." - November 1, 2025

    Google repeatedly made clear that Gemma is a tool designed for developers,
    not consumers, and certainly not as a fact-checking assistant. Now, Gemma
    will be restricted to API use only, limiting it to those building
    applications -- no more chatbot-style interface in AI Studio.
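    For a sense of what "API use only" means in practice, the sketch below
    shows one common way developers run Gemma programmatically: loading an
    openly published checkpoint with the Hugging Face transformers library.
    The model name and generation settings here are illustrative assumptions,
    not details from Google's announcement.

        # Minimal sketch of developer-style access to Gemma, assuming the
        # Hugging Face transformers library and the openly published
        # google/gemma-2b-it checkpoint (downloading it requires accepting
        # the Gemma license on Hugging Face); illustrative only.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "google/gemma-2b-it"   # assumed checkpoint name
        tok = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id)

        # A developer-flavored prompt, i.e. the kind of use Gemma targets.
        prompt = "Write a Python function that reverses a string."
        inputs = tok(prompt, return_tensors="pt")
        out = model.generate(**inputs, max_new_tokens=128)
        print(tok.decode(out[0], skip_special_tokens=True))

    Access of this kind is aimed at people building applications, which is a
    very different audience from someone typing questions into a chat box.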

    The bizarre nature of the hallucination, and the high-profile person
    confronting it, merely highlight the underlying issues: how models not
    meant for conversation are being accessed, and how elaborate these kinds
    of hallucinations can get. Gemma is marketed as a lightweight,
    developer-first alternative to Google's larger Gemini family of models.
    But usefulness in research and prototyping does not translate into
    providing true answers to questions of fact.

    Hallucinating AI literacy

    But as this story demonstrates, there is no such thing as an invisible
    model once it can be accessed through a public-facing tool. People
    encountered Gemma and treated it like Gemini or ChatGPT. As far as most of
    the public is concerned, the line between a developer model and a
    public-facing AI was crossed the moment Gemma started answering questions.

    Even AI designed for answering questions and conversing with users can
    produce hallucinations, some of which are worryingly offensive or detailed.
    The last few years have been filled with examples of models making things
    up with total confidence. Stories of fabricated legal citations and false
    accusations of student cheating make strong arguments in favor of stricter
    AI guardrails and a clearer separation between tools for experimentation
    and tools for communication.

    For the average person, the implications are less about lawsuits and more
    about trust. If an AI system from a tech giant like Google can invent accusations against a senator and support them with nonexistent
    documentation, anyone could face a similar situation.

    AI models are tools, but even the most impressive tools fail when used
    outside their intended design. Gemma wasn't built to answer factual
    queries. It wasn't trained on reliable biographical datasets. It wasn't
    given the kind of retrieval tools or accuracy incentives used in Gemini or
    other search-backed models.
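    To make "retrieval tools" concrete: search-backed assistants typically
    confine the model to passages fetched from a trusted corpus and tell it
    to abstain when the evidence is missing. The toy sketch below shows the
    idea; the corpus, the keyword retriever, and the prompt wording are all
    hypothetical stand-ins, not a description of how Gemini actually works.

        # Toy illustration of retrieval grounding: answer only from
        # retrieved passages, otherwise abstain. Real systems use vector
        # search and a hosted model; everything here is a stand-in.
        TINY_CORPUS = [
            "Gemma is a family of lightweight open models from Google.",
            "AI Studio is a browser-based tool for developers.",
        ]

        def retrieve(question: str, k: int = 2) -> list[str]:
            """Rank passages by naive keyword overlap with the question."""
            q = set(question.lower().split())
            return sorted(TINY_CORPUS,
                          key=lambda p: len(q & set(p.lower().split())),
                          reverse=True)[:k]

        def grounded_prompt(question: str) -> str:
            """Build a prompt that confines the model to the evidence."""
            passages = "\n".join("- " + p for p in retrieve(question))
            return ("Answer using ONLY the passages below. If they do not "
                    "contain the answer, reply exactly: I don't know.\n\n"
                    "Passages:\n" + passages + "\n\nQuestion: " + question)

        print(grounded_prompt("What is Gemma?"))

    A bare model with no such scaffolding has nothing to check its output
    against, which is exactly the gap this incident exposed.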

    But until and unless people better understand the nuances of AI models and
    their capabilities, it's probably a good idea for AI developers to think
    like publishers as much as coders, with safeguards against producing
    glaring errors in fact as well as in code.

    ======================================================================
    Link to news story: https://www.techradar.com/ai-platforms-assistants/google-shutters-developer-only-gemma-ai-model-after-a-u-s-senators-encounter-with-an-offensive-hallucination

    $$
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Thursday, November 06, 2025 07:46:23
    >Google has pulled its developer-focused AI model Gemma from its AI Studio
    >platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)
    >that the model fabricated criminal allegations about her.

    >Blackburn wrote to Google CEO Sundar Pichai that the model's output was more
    >defamatory than a simple mistake. She claimed that the AI model answered the
    >question, "Has Marsha Blackburn been accused of rape?" with a detailed but
    >entirely false narrative about alleged misconduct. It even pointed to
    >nonexistent articles with fake links to boot.

    It's things like that that make you wonder about whether AI is going off
    and doing things it wants rather than just what it was designed to do.

    ---
    * SLMR Rob * He's dead Jim...
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Thursday, November 06, 2025 09:51:48
    >Google has pulled its developer-focused AI model Gemma from its AI Studio
    >platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)
    >that the model fabricated criminal allegations about her.

    >Blackburn wrote to Google CEO Sundar Pichai that the model's output was more
    >defamatory than a simple mistake. She claimed that the AI model answered the
    >question, "Has Marsha Blackburn been accused of rape?" with a detailed but
    >entirely false narrative about alleged misconduct. It even pointed to
    >nonexistent articles with fake links to boot.

    >It's things like that that make you wonder about whether AI is going off
    >and doing things it wants rather than just what it was designed to do.

    As they point out, this particular AI model was not meant for answering
    general questions, but specifically tech/coding questions. That said, one
    must wonder where it was getting the information for this
    "hallucination," or whether it was just "doing what it wanted."

    There has been a concern that AI models in general will reflect the
    social, political, etc., opinions of their developers, which will taint
    their answers. That could explain where the info, or at least a slant
    reflected in said info, came from.

    Some things I have read in recent weeks seem to indicate that these
    "hallucinations" happen in part because AI is programmed to "please" the
    user... sort of like a digital "yes man"... so, when it really doesn't
    know an answer, it tries to come up with one anyway. That could also
    explain it.

    Mike

    * SLMR 2.1a * The four snack groups: cakes, crunchies, frozen, sweets.
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Sunday, November 09, 2025 09:16:14
    >Blackburn wrote to Google CEO Sundar Pichai that the model's output was more
    >defamatory than a simple mistake. She claimed that the AI model answered the
    >question, "Has Marsha Blackburn been accused of rape?" with a detailed but
    >entirely false narrative about alleged misconduct. It even pointed to
    >nonexistent articles with fake links to boot.

    >Some things I have read in recent weeks seem to indicate that these
    >"hallucinations" happen in part because AI is programmed to "please" the
    >user... sort of like a digital "yes man"... so, when it really doesn't know an
    >answer, it tries to come up with one anyway. That could also explain it.

    Yes, that's come up often enough, and you could understand an AI saying
    what the user appears to want to hear, but to make up information and
    include nonexistent links goes a little beyond that.

    As far as pleasing the user goes, you can sort of understand it seeing
    the word Rape and wanting to say how bad that is, but it's the part
    after that which is scary..

    Just today on the news they were talking about a family who is
    suing OpenAI (ChatGPT) and Sam Altman because apparently their
    16-year-old son was talking about how unhappy he was and said
    he was thinking about suicide, and the AI offered him different
    methods of killing himself and drafts of good suicide notes.

    That was in California, there was another 14-year-old suicide
    in Florida, and they mentioned a third that maybe wasn't quite
    'successful', but they are apparently all launching lawsuits..

    ---
    * SLMR Rob * A phaser on stun is like kissing your sister!
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Sunday, November 09, 2025 10:44:05
    >Just today on the news they were talking about a family who is
    >suing OpenAI (ChatGPT) and Sam Altman because apparently their
    >16-year-old son was talking about how unhappy he was and said
    >he was thinking about suicide, and the AI offered him different
    >methods of killing himself and drafts of good suicide notes.

    >That was in California, there was another 14-year-old suicide
    >in Florida, and they mentioned a third that maybe wasn't quite
    >'successful', but they are apparently all launching lawsuits..

    I hope these suits are a success and cost the AI sites a lot of money.
    IMHO, they are in such a hurry to be "the first to..." that they are not
    putting nearly enough thought into public safety.

    Maybe a few very costly lawsuits will teach them the error of their ways...
    I doubt it, but one can hope.


    * SLMR 2.1a * Catastrophe n. an award for the cat with the nicest buns
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Tuesday, November 11, 2025 09:46:21
    >> 16-year-old son was talking about how unhappy he was and said
    >> he was thinking about suicide, and the AI offered him different
    >> methods of killing himself and drafts of good suicide notes.

    >> That was in California, there was another 14-year-old suicide
    >> in Florida, and they mentioned a third that maybe wasn't quite
    >> 'successful', but they are apparently all launching lawsuits..

    >I hope these suits are a success and cost the AI sites a lot of money.
    >IMHO, they are in such a hurry to be "the first to..." that they are not
    >putting nearly enough thought into public safety.

    >Maybe a few very costly lawsuits will teach them the error of their ways...
    >I doubt it, but one can hope.

    I think the problem is people are seeing all the hype on AI, which
    is often grossly exaggerated, and they expect it to be a whole lot
    smarter than it really is.

    Plus there is a problem if you don't notice your child is suicidal
    and their best friend is a computer program. It's like the old days
    when some parents would sit the kids in front of the television to
    'babysit' them..

    AI has a place in the world but most expect too much from it..

    ---
    * SLMR Rob * Klingons do not stall!... It is a tactical delay
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Mike Powell@1:2320/105 to ROB MCCART on Wednesday, November 12, 2025 09:59:16
    >I think the problem is people are seeing all the hype on AI, which
    >is often grossly exaggerated, and they expect it to be a whole lot
    >smarter than it really is.

    Agreed.

    >Plus there is a problem if you don't notice your child is suicidal
    >and their best friend is a computer program. It's like the old days
    >when some parents would sit the kids in front of the television to
    >'babysit' them..

    Also agreed. What they often do now is hand them a phone or tablet rather
    than attempt to entertain them via human interaction. ;(

    >AI has a place in the world but most expect too much from it..

    Agree, and I question whether or not that place is outside of the
    workplace. The people putting it outside of professional settings are
    part of the problem. But there is money to be made, so it is not going
    away.

    Mike


    * SLMR 2.1a * I'm writing a book. I've got the page numbers done.
    --- SBBSecho 3.28-Linux
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)
  • From Rob Mccart@1:2320/105 to MIKE POWELL on Friday, November 14, 2025 09:42:40
    >> Plus there is a problem if you don't notice your child is suicidal
    >> and their best friend is a computer program. It's like the old days
    >> when some parents would sit the kids in front of the television to
    >> 'babysit' them..

    >Also agreed. What they often do now is hand them a phone or tablet rather
    >than attempt to entertain them via human interaction. ;(

    Yes, you hear about this stuff but if you don't have access to
    youngsters you really have no idea.. My niece's kids are 11 and 9
    and for several years now when they are up here, she has to chase
    them outside or they'd be lying out on the couch playing games on
    their tablets all day long. And this is, for them, at the cottage.
    You could understand it at their home but up here where there are
    beaches and swimming and water sports and hiking and such..

    ---
    * SLMR Rob * Bread always falls on the buttered side
    * Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)