May 28 Edition

Anthropic Research on AI Safety

We’re Lobotomizing AIs Now

Unlocking Neural Networks: While it appears we’ve harnessed the power of AI and are busy building a new way of life around its capabilities, the truth is that even the experts don’t know exactly how it works. We have a working understanding of neural networks, how to train them, and how to use them to our benefit, but when it comes to actually protecting the people using AI tools, we’ve been concerningly lax.

Here’s the basic structure of a neural network (with a quick code sketch after the list):

  • Input Layer: The input data is fed into the network.

  • Hidden Layers: These layers perform complex computations. Each neuron in a hidden layer takes inputs from the previous layer, applies weights and biases to these inputs, and processes them through an activation function.

  • Output Layer: This layer produces the final output, which could be a prediction, classification, or some other form of result.
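
For the curious, here’s a minimal sketch of that input → hidden → output flow in plain NumPy. The layer sizes and random weights are made up for illustration; a real network learns its weights during training.

```python
import numpy as np

def relu(x):
    # Activation function: keep positive values, zero out the rest
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input vector through the hidden layers, then the output layer."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)        # hidden layer: weights, bias, activation
    W_out, b_out = layers[-1]
    return W_out @ x + b_out       # output layer: raw scores

# Toy network: 4 inputs -> 8 hidden neurons -> 3 outputs
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(8, 4)), np.zeros(8)),  # hidden layer
    (rng.normal(size=(3, 8)), np.zeros(3)),  # output layer
]
print(forward(rng.normal(size=4), layers))
```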

The hidden layers are not called “hidden” for nothing. We know what we feed into the AI, and we can shape what comes out with prompts and settings, but the computation in between remains veiled: nobody can pin down exactly how an AI arrives at a specific conclusion.

If this sets off alarms in your head, you’re not alone. Some researchers have even expressed concern that AIs may intentionally disguise their full capabilities from their developers. Luckily, Anthropic, an AI safety and research company, recently released new research that could help safeguard against these potential security threats.

Anthropic has developed a technique for peering inside AI models: scanning their “brains” to identify patterns of neuron activations (called “features”) associated with specific concepts. Researchers can then manipulate these features to alter the AI’s behavior; for example, suppressing a feature tied to the generation of “toxic” code nudges the model toward producing safe code instead. By making a model’s inner workings more transparent and controllable, this method offers a direct way to improve AI safety and could help prevent vulnerabilities like AI jailbreaks.
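
To make that concrete, here’s a hypothetical sketch of what suppressing a feature might look like in PyTorch. This is not Anthropic’s actual code; the hidden size, the layer being hooked, and the feature direction are all made-up stand-ins (Anthropic finds real features with sparse autoencoders):

```python
import torch

# Hypothetical stand-ins: a 768-dim hidden size and a random unit vector
# playing the role of a learned "feature" direction.
hidden_size = 768
feature_direction = torch.randn(hidden_size)
feature_direction /= feature_direction.norm()

def suppress_feature(module, inputs, output):
    """Forward hook: remove the feature's contribution from the activations."""
    hidden = output[0] if isinstance(output, tuple) else output
    # How strongly the feature fires at each position...
    strength = hidden @ feature_direction
    # ...then subtract exactly that much of the feature direction.
    hidden = hidden - strength.unsqueeze(-1) * feature_direction
    return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

# Attaching the hook to one hidden layer (module path is illustrative only):
# handle = model.transformer.h[20].register_forward_hook(suppress_feature)
# ...generate text as usual; the feature stays silenced on every forward pass...
# handle.remove()
```

In spirit, that’s the appeal of the technique: once you can point at a feature, dialing it down is a small linear operation, not a full retraining job.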

The research is still in its early stages, but it offers a promising start on the burning question: “Is it safe to use AI?”

Google AI

Just Generate It!

Highlights from Google Marketing Live: Google announced new ways to leverage AI for your business’s creative needs at the GML keynote last week, with an emphasis on tools that will increase conversions and improve ad engagement. Key highlights include new “creative asset generation controls, immersive ad experiences, and visual storytelling features.”

  1. Creative Asset Generation Controls:

    • Branding Integrity: Advertisers can share font and color guidelines, and provide image references to generate new asset variations.

    • Image Editing: New editing tools for adding objects, extending backgrounds, and cropping images to fit different formats.

    • Product Highlighting: Retailers can use these features to enhance product images from Google Merchant Center feeds.

    • Contextual Recommendations: Google AI suggests different contexts for product images, helping advertisers choose the best assets for their marketing channels.

  2. Immersive Ad Experiences:

    • Immersive Visuals: Virtual Try-On and 3D ads.

    • Interactive Features: Shoppers can see product videos, summaries, and similar products within ads.

    • AI-Assisted Guidance: AI helps users make complex purchase decisions by providing recommendations based on details they share like photos and budgets.

  3. Visual Storytelling Features:

    • Demand Gen Campaigns: Reach up to 3 billion users monthly across YouTube, Discover, and Gmail.

    • New Formats: Vertical ad formats, ad stickers, and animated image ads for YouTube Shorts.

    • Ad Stickers and Animated Ads: Designed to drive action and engagement from viewers.

And honestly, there’s more! We think this is about to be a game-changer in the marketing world, giving businesses massive reach and fresh opportunities to foster engagement. Let us know how you plan to use these tools, or get ideas from other business owners, in our Reality Bytes LinkedIn group!

Ethics Breach by OpenAI

Sky’s the Limit

OpenAI Under Fire: Sam Altman is in the hot seat this week after actress Scarlett Johansson claimed OpenAI “stole” her voice for “Sky,” a voice option in ChatGPT’s new GPT-4o model. Johansson revealed that Altman approached her multiple times to voice Sky, but she declined the offers for personal reasons. However, when OpenAI released the model this month, Johansson’s friends, family, and fans couldn’t help but notice how eerily similar the AI sounded to the actress. Johansson released a statement to NPR’s Bobby Allyn at the beginning of last week, and OpenAI has since issued an apology and paused “Sky.”

If you ask us, this isn’t even the worst offense on Altman’s record, but we’re no tabloid, so we’ll leave it at that. As AI gains consumer adoption, expect this to be the first of many conversations about ethics and best practices for this powerful tool.

Don’t be shy! Drop your two cents in the Reality Bytes LinkedIn group at the bottom of this email!

Join our LinkedIn group, where you’ll find other like-minded people talking about:

  • Artificial intelligence

  • Augmented Reality

  • Virtual Reality

  • & how they can make your business better