AI Slop is ruining houseplant communities

  • AI-generated houseplants: Fakes like floating pots or neon-colored monstera are flooding online communities. They sound like harmless fun but are rooted in deception.
  • Misinformation & scams: Sellers push seeds for impossible plants (e.g., pink monstera or blue hostas), with even Google’s assistant validating their existence.
  • Garden center fallout: Casey Schmidt Ahl at Colonial Gardens reports weekly inquiries about such plants. The staff now actively debunk these and warn customers.
  • Care myths amplified: AI chatbots and apps regurgitate pseudoscience, like putting cinnamon or honey on cuttings, adding to a folklore-fueled misinformation ecosystem.
  • Community impact: Reddit plant mods (e.g., Caring_Cactus) ban AI content. Users find it shallow and spammy, and it disrupts real engagement.
  • Why it matters: Fake plant posts may dilute the joy and mindfulness of real gardening. They also risk undermining awe and trust in genuine plant variety. It’s also happening in other forums and topics, like interior design.

Gemini CLI - A new coding agent from Google

  • 🚀 Google has launched Gemini CLI, a free and open-source AI agent that brings the power of Gemini directly to developers’ terminals, offering seamless AI assistance for coding, research, and workflow automation.
  • 💡 Gemini CLI provides unmatched free usage limits—up to 60 model requests per minute and 1,000 per day—making advanced AI accessible to individual developers at no charge.
  • 🔧 The tool is highly extensible and open source (Apache 2.0), supporting custom prompts, Google Search grounding, and integration with the Model Context Protocol (MCP) for tailored workflows.
  • 🤝 Gemini CLI shares its technology with Gemini Code Assist, enabling prompt-driven, multi-step AI coding support in both the terminal and VS Code.
  • 🛠️ Getting started is easy: install Gemini CLI and log in with your Google account to unlock powerful, context-aware AI in your command line.
  • This certainly feels inspired by Claude Code.

AI Slop gets the John Oliver Treatment

Watch for The Cabbage Hulk!


Google Cloud donates A2A to Linux Foundation

Today at Open Source Summit North America, the Linux Foundation announced a new independent initiative, in partnership with AWS, Cisco, Google, Microsoft, Salesforce, SAP, and ServiceNow, to host the Agent2Agent (A2A) protocol, a web-standard approach donated by Google, inclusive of SDKs and dev tools. The A2A protocol empowers AI agents to discover each other, securely exchange context, and coordinate complex actions across any platform, eliminating silos and unlocking multi-agent innovation. Backed by over 100 companies, including AWS and Cisco as new validators, A2A is a major step toward interoperable AI ecosystems. Now under the neutral governance of the Linux Foundation, the project ensures open collaboration, IP stewardship, and long-term scalability, fueling the next era of agent-powered solutions across industries.
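The discovery step can be sketched with a minimal “Agent Card,” the JSON document an A2A agent publishes (by convention at /.well-known/agent.json) so other agents can find it and its skills. The field names and values below are a simplified illustration, not the authoritative A2A schema:

```python
import json

# Hypothetical Agent Card for an A2A-discoverable agent. Real cards
# follow the A2A spec's schema; these fields are an approximation.
agent_card = {
    "name": "invoice-agent",
    "description": "Extracts line items from uploaded invoices",
    "url": "https://agents.example.com/invoice",  # assumed endpoint
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "extract-line-items",
            "name": "Extract line items",
            "description": "Parses an invoice and returns structured rows",
        }
    ],
}

# Serving this document lets any A2A client discover the agent's skills
# and decide whether to delegate work to it.
print(json.dumps(agent_card, indent=2))
```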


🚦 A big legal win for AI, but the road ahead is rocky for Anthropic 🚦

A federal judge in San Francisco has ruled that Anthropic’s training of Claude on legally purchased and digitized books qualifies as fair use, a landmark victory for AI models and a potential legal shield for the industry. The court recognized that ingesting and distilling these texts to train transformative systems aligns with copyright’s intent “to foster scientific progress.”

But the celebratory headlines mask an important caveat: Anthropic still faces a December trial over the alleged piracy of over 7 million books sourced from shadow libraries, a move deemed unlawful and not protected as fair use.

This ruling sends a clear signal: AI training through legitimate means is OK, even encouraged — but innovation built on stolen content won’t stand. A milestone for generative AI, but with a hard reminder: how you build your model matters as much as what it does. 👨‍⚖️📚


Brain startup Sanmai Technologies building an AI-powered ultrasound device for treating mental health

Reid Hoffman leads a $12M Series A in Sanmai Technologies, a stealth startup building a sub‑$500, AI‑powered ultrasound headset to non‑invasively treat mental health conditions and boost cognition from the comfort of home. Joining the board, Hoffman backs Sanmai’s use of low‑intensity focused ultrasound to precisely target brain regions where traditional medications falter. The real kicker? An embedded AI companion that adapts treatment to the user and navigates technical hurdles like skull distortion. CEO Jay Sanguinetti, with over a decade of research behind him, is already running clinical trials in Sunnyvale, starting with anxiety. As brain‑tech ventures like Neuralink push surgical extremes, Sanmai bets on a safer, scalable consumer path backed by FDA‑engaged clinical data. A paradigm‑shifting step in digital therapeutics and mental wellness.

Bloomberg


Salesforce integrates MCP in Agentforce 3, adding tooling, governance, and observability for enterprise deployment

Agentforce 3, unveiled June 23, 2025, brings enterprise-grade maturity to AI agents by tackling two key challenges: visibility and control. The introduction of Command Center offers a unified observability hub, complete with real-time KPIs, session tracing, AI-driven insights, and native integration into monitoring tools and Service Cloud, for monitoring, measuring, and optimizing agent performance. Native support for the Model Context Protocol (MCP), plus future A2A support, transforms Agentforce into a plug‑and‑play ecosystem, connecting with tools like AWS, PayPal, Cisco, Box, Google Cloud, Stripe, and more via the expanded AgentExchange. Under the hood, an enhanced Atlas architecture delivers lower latency, better accuracy, greater resiliency, and LLM flexibility (including Anthropic Claude now and Google Gemini later). Real-world traction spans 8,000+ customers, with ROI examples like a 15% reduction in case handle times and 70% autonomous chat resolution during tax season. In short, Agentforce 3 transforms AI agents from experimental novelties into controllable, scalable, and productive teammates for enterprise workflows.

Salesforce Agentforce 3


Microsoft Mu - Harnessing the power of Small Language Models

Microsoft just dropped Mu, a lightweight, locally-run small language model (SLM) designed to power AI agents directly on Windows PCs—no cloud, no GPU, just fast, focused intelligence. The first use case? A smart assistant inside the Windows 11 Settings app that helps users configure their system with natural language (e.g., “change my display resolution”). Trained on Windows-specific data and optimized for token efficiency, Mu proves that small, domain-specific models can deliver real utility with minimal resources. A big step toward practical, private AI—right on your desktop.

Microsoft Mu


Andrej Karpathy: Software Is Changing (Again)

Introduces the concept of Software 3.0, where prompts are used as programs to instruct large language models.

LLMs are “people spirits”: stochastic simulations of people.

  • Software is changing: Karpathy argues that software development is undergoing a fundamental shift, similar to transitions the field has seen twice before.

  • Software 3.0: He introduces the concept of Software 3.0, where prompts are used as programs to instruct large language models (LLMs).

  • LLMs as Operating Systems: Karpathy draws an analogy between LLMs and operating systems, highlighting their complexity and the way they manage resources.

  • Partial Autonomy Apps: He discusses the rise of partially autonomous applications that combine traditional interfaces with LLM integration.

  • Vibe Coding: Karpathy touches on the idea of vibe coding, where natural language is used to program computers, making programming more accessible.

  • Building for Agents: Build software infrastructure that can be easily accessed and manipulated by AI agents.


Sam Altman on the Future of AI: Insights from the Inaugural OpenAI Podcast

The highly anticipated first episode of the OpenAI Podcast recently dropped, featuring a candid conversation between OpenAI CEO Sam Altman and host Andrew Mayne. This premiere offers a deep dive into OpenAI’s current trajectory and future ambitions.

For those tracking the bleeding edge of AI development, Altman’s discussion covers critical topics including the roadmap for GPT-5 (sometime this summer), the pursuit of Artificial General Intelligence (AGI), the enigmatic Project Stargate, evolving research workflows, and even the potential implications of AI in parenting.

A recurring sentiment among the initial wave of comments highlights a strong community desire for OpenAI to eschew advertising within their AI products. This feedback underscores a prevalent user expectation for clean, unfettered access to AI tools, drawing parallels to a research-oriented experience.

This inaugural podcast serves as a valuable direct channel for insights from OpenAI’s leadership, providing a glimpse into the strategic thinking behind some of the most impactful AI advancements today.


Search with Voice chats using Gemini

Google is piloting Search Live within its AI Mode (U.S. only, via Google Labs) on Android and iOS. Users can now have real-time voice conversations with Gemini AI—ask questions, get audio replies, and keep chatting naturally. Visual inputs through the camera are supported too (though not yet live in the actual chat). The feature displays relevant links, keeps transcripts, and saves everything in your AI Mode history. Conversations even persist when you switch apps. This is the latest evolution in Google’s shift from traditional search to an interactive, multimedia-centric experience—joining similar efforts from OpenAI, Anthropic, and Apple.

Google Labs


A new Chinese startup MiniMax enters the open source LLM race

Chinese AI startup MiniMax has released MiniMax-M1, a new open-source language model.

Key features of MiniMax-M1:

  • Designed to outperform DeepSeek’s R1.
  • Reasoning-focused model with a context window of up to one million tokens and a “thinking” budget of up to 80,000 tokens.
  • Uses an efficient reinforcement learning approach, making it leaner than other open-source options.
  • Available for free under the Apache-2.0 license on Hugging Face.

In benchmarks, it outperforms other open models like DeepSeek-R1-0528 and Qwen3-235B-A22B. Its performance on the OpenAI MRCR test, which measures complex, multi-step reasoning across long texts, comes close to the leading closed model, Gemini 2.5 Pro.

MiniMax


Gemini 2.5 Pro and Flash from Google go GA

Google has officially advanced its Gemini 2.5 model lineup as of June 17, 2025: both Gemini 2.5 Pro and Gemini 2.5 Flash are now stable and generally available, while Gemini 2.5 Flash‑Lite enters public preview. Pro offers top-tier reasoning, multimodal understanding, coding capabilities, and handles up to a 1 million-token context—ideal for complex, mission-critical tasks. Flash strikes a balance of speed, cost-efficiency, and robust reasoning, with simplified pricing at $0.30 per million input and $2.50 per million output tokens. Flash‑Lite is the fastest and most economical option, optimized for high-throughput tasks like translation and classification, with reasoning off by default and support for tool use. All three models share the million-token context window, adaptive “thinking” control, and grounding via Google Search, code execution, function-calling, and multimodality—all accessible via Gemini app, AI Studio, Vertex AI, and more.

Here are some key features across the Gemini 2.5 models:

  • Hybrid Reasoning Models: Designed to provide excellent performance while being efficient in terms of cost and speed.
  • “Thinking” Capabilities: Models can reason through their thoughts before responding, which leads to improved accuracy. Developers have control over the “thinking budget” to balance performance and cost.
  • Native Multimodality: Understands and processes inputs across various modalities including text, images, audio, and video.
  • Long Context Window: Features a 1 million-token context length, allowing them to comprehend vast datasets and handle complex problems from different information sources.
  • Tool Integration: Can connect to tools like Google Search and code execution.

Gemini 2.5 Pro:

  • Most Advanced Model: Excels at coding and highly complex tasks.
  • Enhanced Reasoning: State-of-the-art in key math and science benchmarks.
  • Advanced Coding: Capable of generating code for web development tasks and creating interactive simulations.

Gemini 2.5 Flash:

  • Fast Performance: Optimized for everyday tasks and large-scale processing.
  • Cost-Efficient: Balances price and performance.
  • Live API Native Audio: Offers high-quality, natural conversational audio outputs with enhanced voice quality and adaptability, including features like Proactive Audio and Affective Dialog.

Gemini 2.5 Flash-Lite:

  • Most Cost-Efficient and Fastest: Designed for high-volume, latency-sensitive tasks like translation and classification.
  • Higher Quality: Outperforms 2.0 Flash-Lite on coding, math, science, reasoning, and multimodal benchmarks.
  • Lower Latency: Offers lower latency compared to 2.0 Flash-Lite and 2.0 Flash for a broad range of prompts.

Gemini Technical Report


Scouts: agents that monitor the web for anything you care about

Yutori - a new startup that wants to monitor the web for you with agents in the cloud.

Simply tell Scouts what you’re looking to track (“any new papers on multimodal research,” “flights to Tokyo under $900 in August,” “price drops on the Nintendo Switch 2”) and a team of agents will immediately be deployed to monitor either specific URLs or the entire web, and notify you when there’s a relevant update.

Yutori


V-JEPA 2 - A World Model from Meta for robots

Meta’s V-JEPA 2 is a new video-trained world model designed to help AI agents and robots predict physical outcomes before acting—much like humans use intuition to navigate, avoid obstacles, or anticipate motion. Building on its predecessor, V-JEPA 2 enhances understanding, prediction, and planning by learning how objects and people interact through vast video data. The result: robots powered by V-JEPA 2 can now perform tasks like reaching, picking up, and placing objects—even in unfamiliar environments—pushing AI one step closer to advanced machine intelligence.

Meta


EchoLeak - First known zero-click AI vulnerability

A critical zero-click flaw called EchoLeak was discovered in Microsoft 365 Copilot, allowing data exfiltration from enterprise systems without user interaction. It hijacks Copilot’s RAG pipeline by injecting hidden prompts into seemingly benign emails, coaxing the model to spill internal data via auto-generated links or image requests. Fixed server-side in May (CVE-2025-32711) with no known exploits so far, the incident highlights a new attack surface in AI systems—where LLMs leak data silently. Enterprises should urgently tighten prompt-injection defenses, scope retrieval inputs, and sanitize model outputs to avoid similar LLM-triggered leaks.
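One of the mitigations suggested above, sanitizing model outputs, can be sketched in a few lines: strip any URL the model emits unless its host is on an allow-list, so an injected prompt cannot smuggle data out through auto-generated links or image requests. This is a minimal illustration of the idea, not Microsoft’s actual fix; the allow-listed host and regex are assumptions:

```python
import re

# Hypothetical allow-list of hosts the model may link to.
ALLOWED_HOSTS = {"contoso.sharepoint.com"}

# Match a URL: scheme, host (up to the first "/"), optional path/query.
URL_RE = re.compile(r"https?://([^/\s]+)(/[^\s)\"']*)?")

def sanitize(model_output: str) -> str:
    """Replace URLs pointing outside the allow-list before rendering."""
    def _check(match: re.Match) -> str:
        host = match.group(1).lower()
        return match.group(0) if host in ALLOWED_HOSTS else "[external link removed]"
    return URL_RE.sub(_check, model_output)

print(sanitize(
    "See https://contoso.sharepoint.com/doc and https://evil.example/leak?d=secret"
))
# → See https://contoso.sharepoint.com/doc and [external link removed]
```

Real defenses would also scope what the RAG pipeline retrieves and filter inbound content for hidden instructions; output sanitization is only the last line.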

Bleeping Computer


o3-pro

OpenAI launches o3-pro, its most intelligent model. It can search the web, analyze files, reason about visual inputs, use Python, personalize responses using memory, and more. o3-pro is priced at $20 per million input tokens and $80 per million output tokens in the API. Input tokens are tokens fed into the model, while output tokens are tokens that the model generates based on the input tokens.
o3-pro has some limitations. Currently, temporary chats with the model in ChatGPT are disabled due to an ongoing technical issue being resolved by OpenAI. Additionally, o3-pro is unable to generate images, and Canvas, OpenAI’s AI-powered workspace feature, is not supported.
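To make the pricing concrete, here is a quick back-of-the-envelope calculation at the quoted rates (the token counts are made-up example values):

```python
# o3-pro API pricing quoted above: $20 per million input tokens,
# $80 per million output tokens.
INPUT_RATE = 20.0 / 1_000_000    # dollars per input token
OUTPUT_RATE = 80.0 / 1_000_000   # dollars per output token

def o3_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated dollar cost of a single API call."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A hypothetical call: a 5,000-token prompt yielding a 2,000-token answer.
print(f"${o3_pro_cost(5_000, 2_000):.2f}")  # → $0.26
```

Note how output tokens dominate the bill at a 4:1 rate ratio, which matters for a reasoning model that generates long chains of thought.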

The model did do a good job on Simon Willison’s favorite prompt: “Generate an SVG of a pelican riding a bicycle.”

Was Sam’s essay inspired by what he sees in the new o3-pro model launched today?

OpenAI Release Notes

o3-pro


The Gentle Singularity - An Essay by @sama

An engaging essay by Sam Altman on the Singularity being close at hand.

We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.

Sam writes about the cost of a ChatGPT query: “People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.”
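The oven comparison checks out with simple arithmetic. Assuming a typical ~1,200 W oven element (the essay does not state a wattage, so that figure is my assumption):

```python
# Energy per query from the essay, converted from watt-hours to the
# number of seconds an assumed 1,200 W oven takes to draw the same energy.
query_wh = 0.34      # watt-hours per ChatGPT query (from the essay)
oven_watts = 1200    # assumed typical oven power draw

joules = query_wh * 3600       # 1 Wh = 3,600 J
seconds = joules / oven_watts
print(f"{seconds:.2f} s")  # → 1.02 s
```

Roughly one second, matching the essay’s “a little over one second.”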

He envisions the 2030s as a decade of abundance driven by intelligence and energy. “We are climbing the long arc of exponential technological progress.”

May we scale smoothly, exponentially and uneventfully through superintelligence.

Sam Altman


Apple research finds scaling limitations in reasoning models

Apple’s latest research reveals a critical limitation in current reasoning-enabled language models: as task complexity increases, their performance sharply degrades. In structured tests using classic logic puzzles, these models initially improve with added “thinking” steps, but fail entirely on harder tasks—often generating fewer reasoning steps when more are needed. This counterintuitive “underthinking” highlights a fundamental flaw: current architectures can’t scale reasoning effectively. The findings challenge the idea that simply adding chain-of-thought prompts or compute leads to better thinking, signaling the need for fundamentally new approaches to build truly general reasoning systems.

The Decoder


Use of AI in government - How the UK tech secretary uses ChatGPT for policy advice

Peter Kyle, the UK’s Secretary of State for Science, Innovation and Technology, has been revealed to use ChatGPT for various aspects of his role, including seeking policy advice on issues like the slow adoption of AI by small and medium-sized businesses in the UK. Records obtained through Freedom of Information laws by New Scientist show he also used the AI tool for media engagement recommendations (e.g., podcasts to appear on) and for definitions of technical terms such as “quantum” and “digital inclusion.” While his department defends his use of AI as a labor-saving tool that complements, rather than replaces, official advice, the revelation has sparked discussions about the appropriate use of AI in government and concerns regarding potential biases or data security.

New Scientist