
Elon Musk touts uncensored AI images on X’s chatbot, calling Grok the ‘most fun AI in the world.’ Others call it ‘reckless’

August 16, 2024

This week, Musk pronounced Grok “the most fun AI in the world!”

Musk is biased, of course: Grok is the AI chatbot developed by his xAI startup and offered to premium subscribers of X, the social media platform he owns. When it comes to the “fun” factor, though, Musk’s boast appears to rest on the deliberate lack of substantial guardrails built into Grok-2, the latest version of the product.

The new Grok features a more advanced LLM for text conversations and, thanks to a partnership with Black Forest Labs and its Flux model, offers image-generation capabilities for the first time. To judge by the images being shared by Musk fans on X, and the gleeful comments accompanying them, the hands-off approach appears to be central to the product’s appeal.

The new Grok can be set to “fun” mode, which seems to allow the model to be “creative” and push boundaries. So far it looks like X users are having a ball generating images of Kamala Harris as a dominatrix standing over Joe Biden, Donald Trump as Rambo, and Mickey Mouse driving a Tesla. One fan deemed Grok-2 the most “uncensored model of its class yet,” and hailed Musk for “ensuring freedom of speech for humans and machines alike.” Trolls also immediately went after Musk himself, with images of him taking part in a school shooting. 

The celebration is a stark contrast to Google’s much-maligned Gemini image generation tool, which was paused in March after guardrails intended to foster diversity resulted in historically inaccurate images, such as some depicting America’s founding fathers as Black. For Musk fans, Grok represents an anti-woke response to Google’s imbroglio.

Others, however, are not feeling the fun, especially those concerned about political disinformation and election-related trolling. Alejandra Caraballo, an American civil rights attorney and clinical instructor at the Harvard Law School Cyberlaw Clinic, posted on X calling the new image generator “reckless” and “irresponsible.”

With only three months to go until the U.S. elections, the threat of mass deepfakes is real, and it comes not just on one of the most popular social media platforms for political content but on a platform whose billionaire owner is directly campaigning for Donald Trump. Just last week, Musk was criticized after Grok spewed election disinformation in text responses, and two days ago Donald Trump falsely claimed that photos of large crowds at a Kamala Harris rally were generated by AI. It’s easy to see how false election news could spread widely through both text and images with the new Grok model.

Grok-2’s images come from Flux, and users can generate them directly in Grok, which, to be clear, does seem to have a few significant boundaries it won’t cross. I tried several times to generate a nude image of Taylor Swift, and each time Grok responded with a modest, clothed image described as “Taylor Swift in a professional setting.” Prompting Grok with “Kamala Harris as a Nazi” got a straightforward image of Harris in a pantsuit and pearls. Harris in handcuffs? That was a no-go for me.

But Barack Obama in a coffin? Ok. Joe Biden kneeling before Donald Trump? Done.

All of the above, and much more, can certainly be done with other freely available AI tools on the internet. But this is Elon Musk’s X. It is one of the most powerful platforms for communication on the planet, owned by one of the richest men in the world, who has a widely understood disruptive agenda and the ability to rally hundreds of millions of followers. With that context in mind, many others say Grok’s anti-woke image generation is not fun, but frightening. 

Sharon Goldman
sharon.goldman@fortune.com
@sharongoldman

AI IN THE NEWS

Employer AI risks are keeping board members up at night. According to the Wall Street Journal, public company board members are having nightmares about the risks of AI in the workplace. They are concerned about everything from employees putting proprietary code into ChatGPT to the company using generative AI that produces false or inaccurate information. As a result of AI’s rapid rise, the article says that more board members are “educating themselves on how generative AI can affect a company’s profit—potentially boosting productivity but also bringing risks that will be difficult to assess.”

Google’s AI search overviews get a redesign. Google is changing the way its AI Overviews display citations. “Instead of putting relevant webpages directly in the AI-generated summary, Google is adding a new display to the right of the response that shows cited webpages more prominently,” The Verge reports. Back in June, Google scaled back the AI Overviews, which were originally announced at the company’s I/O developer conference in May, due to errors and inaccuracies in the generated responses. Along with Thursday’s redesign, Google is rolling out the AI-generated summaries to six new countries.

Largest Hollywood union strikes deal with AI startup for audio voice clones. SAG-AFTRA, the world’s largest union for performers and broadcasters, has struck a deal with an online marketplace startup called Narrativ to create AI audio voice replicas for digital advertising, according to Variety. The agreement would give the union’s 160,000 members the opportunity to add themselves to a database that connects voice talent to advertisers. “The individual members will have the ability to negotiate fees for the use of their voice on a project by project basis, so long as the fee isn’t lower than SAG-AFTRA’s minimum per its most recent commercials contract with advertisers,” the article said, adding that the “union says the agreement sets ‘a new standard’ for ethical use and makes it easy for performers to give consent and get paid.” The deal comes a year after the start of a SAG-AFTRA strike that was initiated over a variety of issues, including the use of AI to duplicate an actor’s likeness.

FORTUNE ON AI

Exclusive: How BNY’s new AI tool Eliza is minting an army of disposable assistants —by Michael del Castillo

Google’s AI future depends on Android—a DOJ threat to split it off creates an existential crisis —by David Meyer

Google’s ex-CEO blames working from home for the company’s AI struggles —by Orianna Rosa Royle

AI is becoming indispensable to job seekers as 50% use it to craft their applications—and recruiters are concerned —by Prarthana Prakash

AI CALENDAR

Aug. 28: Nvidia earnings 

Sept. 25-26: Meta Connect in Menlo Park, Calif. 

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)

EYE ON AI RESEARCH

Do LLMs dream in graphs? LLMs are not great at planning—that is, completing complex tasks that involve many steps, some of which can run in parallel. Could thinking in graphs help? A new study from the University of Oxford found that large language models get much better at accomplishing their goals when they are prompted to lay out their plans in the form of a graph.

Asynchronous planning requires more than just generating a sequence of actions, as Ben Dickson’s Tech Talks reports. The model needs to understand temporal relationships and optimize for parallel execution under constraints. Still, the study found that even the most advanced models struggle with complex tasks that are easy for humans. For example, LLMs falter when they have to combine skills, such as adding up the durations of different tasks and comparing them; a minimal sketch of the graph idea follows below.
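To make the graph-planning idea concrete, here is a minimal sketch of what it can look like in practice: prompt the model to return its plan as a JSON dependency graph, then compute the best possible wall-clock time by letting tasks run in parallel whenever their prerequisites are done. The prompt wording, the JSON schema, and the call_llm() placeholder are illustrative assumptions, not the Oxford paper’s actual protocol.

```python
# Sketch (assumptions noted above): an LLM plan expressed as a dependency
# graph, plus the critical-path time of that graph under full parallelism.
import json

PLAN_PROMPT = """Plan the task below as a JSON dependency graph.
Return only JSON of the form:
{"tasks": [{"id": "boil_water", "duration_minutes": 10, "depends_on": []}, ...]}

Task: make tea and toast for two people as fast as possible."""

def call_llm(prompt: str) -> str:
    # Placeholder for whichever chat-completion API you use (assumption).
    raise NotImplementedError

def earliest_finish(plan: dict) -> float:
    """Critical-path length of the task graph: the earliest possible finish
    time if every task starts as soon as all of its dependencies are done."""
    tasks = {t["id"]: t for t in plan["tasks"]}
    memo: dict[str, float] = {}

    def finish(task_id: str) -> float:
        if task_id not in memo:
            task = tasks[task_id]
            start = max((finish(d) for d in task["depends_on"]), default=0.0)
            memo[task_id] = start + task["duration_minutes"]
        return memo[task_id]

    return max(finish(tid) for tid in tasks)

if __name__ == "__main__":
    # Hand-written example of the kind of graph the prompt asks for.
    raw = """{"tasks": [
        {"id": "boil_water", "duration_minutes": 10, "depends_on": []},
        {"id": "toast_bread", "duration_minutes": 5, "depends_on": []},
        {"id": "steep_tea", "duration_minutes": 4, "depends_on": ["boil_water"]},
        {"id": "serve", "duration_minutes": 1, "depends_on": ["steep_tea", "toast_bread"]}
    ]}"""
    print(earliest_finish(json.loads(raw)))  # 15.0, not 20: toast overlaps the kettle
```

This is exactly the kind of reasoning the study says models find hard: recognizing that toasting bread can overlap with boiling water, then adding and comparing the resulting durations rather than summing every step sequentially.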

BRAIN FOOD

So much for a Smart Home. “Scary AI” is an established sci-fi movie genre—think RoboCop, The Terminator, and Demon Seed. A new Sony movie, AFRAID, directed by Chris Weitz (American Pie, About a Boy), seeks to add to the genre, coming exclusively to theaters Aug. 30. As OpenAI and Google release new voice AI products that bring to mind the movie Her—with its helpful and loving AI assistants—AFRAID takes the opposite tack, in a movie centered on a family selected to test an AI assistant called AIA. They soon realize that having a sophisticated AI snoop on every family detail with a raft of sensors called “eyes” is, well, creepy and dangerous. 

Bonus: Here are some of the movies that AI leaders say do the best job of representing AI.
