Meta's Segment Anything is Democratising AI

& How You Can Get Involved

Hey friends 👋 ,

In today’s Wednesday issue, we’re deep-diving into Meta’s Segment Anything. In simple terms, this new AI model can cut out any object, in any image, with a single click. I’ve been trialling the demo for the past few days, and I’m impressed with the results. Here are my favourite insights.

Today's Through the Noise is brought to you by Wander.

Invest in these vacation rentals in just a few clicks ☝️

With Wander, you can unlock access to vacation rental investing without the hassle and headache of doing it yourself.

Wander REIT is the first and only institutional-grade vacation rental investment product. That means investors get all the tax-advantaged benefits of a REIT in a new asset category: vacation home rentals. Instead of traditional apartment or office-building REITs, Wander REIT invests in the best of the best vacation rentals.

Enjoy targeted 8% dividends and a 14% targeted total return with appreciation from hand-picked, stunning vacation homes – starting with a $2,500 minimum – without having to buy a property, change light bulbs or deal with guests.

And for a limited time, new REIT investors may get an opportunity to invest in Wander’s next round of funding.

🤖 Segment Anything Model (SAM)

White Horse Example / MetaAI

3 Key Benefits

Segment Anything Model (SAM) is available to the public. I’ve been uploading images, playing around with the interface, and separating bears from salmon in Canadian wildlife photography. Here’s how SAM will impact your life.

Corgi Example / MetaAI

  1. Democratising Segmentation: SAM aims to democratise access to powerful AI models by reducing the need for specialised knowledge, expensive training equipment, and custom labelling of data for each specific task. It was trained on over 11M images and 1B masks. It is designed to be adaptable (to learn from different types of data), making it more flexible and easier to use for various segmentation tasks. By making image segmentation more accessible, the goal is to make it easier for more people to perform this task and improve the quality of the results. People like you and me now have free demo access to models that took years and millions of dollars to develop. Isn’t that awesome?

  2. Generalisation: SAM is trained on a diverse, high-quality dataset of over 1 billion masks, enabling it to generalise to new types of objects and images beyond what it observed during training. This means that practitioners will no longer need to collect their own segmentation data and fine-tune a model for their use case. In other words, a model that generalises to new data zero-shot, with no further training required, is now available to the public.

  3. Flexibility: SAM's promptable interface allows it to be used in flexible ways that make a wide range of segmentation tasks possible simply by engineering the right prompt for the model. It can perform both interactive segmentation and automatic segmentation, output multiple valid masks when faced with ambiguity about the object being segmented, and generate a segmentation mask for any prompt in real time, so you can interact with images instantaneously.
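That ambiguity handling is easy to picture in code. SAM returns several candidate masks alongside predicted-quality scores; a caller simply keeps the best one. Here is a minimal sketch in plain NumPy on toy arrays (the `pick_best_mask` helper and the data are illustrative assumptions, not the actual SAM API, though SAM's real output follows the same masks-plus-scores shape):

```python
import numpy as np

def pick_best_mask(masks, scores):
    """Given candidate masks and their predicted-quality scores,
    return the highest-scoring mask. (SAM returns up to three
    candidates when a prompt is ambiguous.)"""
    best = int(np.argmax(scores))
    return masks[best]

# Toy example: two 4x4 candidate masks for one ambiguous click.
mask_a = np.zeros((4, 4), dtype=bool)
mask_a[1:3, 1:3] = True               # small inner object
mask_b = np.ones((4, 4), dtype=bool)  # the whole scene

best = pick_best_mask(np.stack([mask_a, mask_b]),
                      np.array([0.92, 0.55]))
# best is mask_a: the higher-scoring, tighter segmentation
```

In practice, an interactive tool can also show all candidates and let the user pick, which is exactly what the public demo does.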

2 Notable Features

We’ve seen image recognition models before, but what makes SAM special?

Promptable Design / MetaAI

  1. Promptable design: SAM can receive input prompts such as clicks, boxes, text, and even a user’s gaze from an AR/VR headset. This promptable design allows for flexible integration with other systems, meaning SAM can be used across various industries, including agriculture, construction, social media, and retail. Whilst you and I may use it for identifying cats, a farmer could utilise it to find deadly pests or signs of disease.
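To make the click prompt concrete, here is a toy sketch of the idea behind a point prompt: a clicked pixel narrows down which object the user means. The `masks_containing_click` helper and the arrays are hypothetical stand-ins, not SAM internals:

```python
import numpy as np

def masks_containing_click(masks, x, y):
    """Return the indices of candidate masks that cover the
    clicked pixel (x, y) -- a toy stand-in for how a point
    prompt disambiguates which object the user means."""
    return [i for i, m in enumerate(masks) if m[y, x]]

# Two objects in a 5x5 scene, each as a boolean mask.
cat = np.zeros((5, 5), dtype=bool)
cat[0:2, 0:2] = True    # top-left object
sofa = np.zeros((5, 5), dtype=bool)
sofa[3:5, 2:5] = True   # bottom-right object

hits = masks_containing_click([cat, sofa], x=1, y=1)
# clicking at (1, 1) selects only the first object
```

The real model goes much further, of course: it produces the masks themselves from the raw image, rather than choosing among ones that already exist.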

  2. Extensible Outputs: Output masks can be used as inputs for other AI systems. SAM can automatically recognise an object, help lift it into 3D, and reproduce it in a new environment. This could be used for creative tasks like collaging, or as a useful tool to imagine new furniture in your home or a Japanese Maple in your garden.
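At its simplest, an output mask is just a boolean array, and "using it as an input to another system" can be as basic as a cut-and-paste collage. A minimal NumPy sketch, assuming a pretend mask where SAM's real output would go (`cut_and_paste` is an illustrative helper, not part of any SAM library):

```python
import numpy as np

def cut_and_paste(src, mask, dst):
    """Copy the masked pixels of `src` onto `dst` -- the simplest
    use of a segmentation mask as input to another system."""
    out = dst.copy()
    out[mask] = src[mask]   # boolean indexing selects masked pixels
    return out

src = np.full((4, 4, 3), 200, dtype=np.uint8)  # bright source image
dst = np.zeros((4, 4, 3), dtype=np.uint8)      # dark background
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True   # pretend SAM segmented this region

collage = cut_and_paste(src, mask, dst)
# masked pixels carry over; everything else stays background
```

Downstream systems that estimate 3D shape or relight an object start from exactly this kind of masked region.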

1 Integration Idea

Because SAM has only just been released, any ideas for big-tech integrations are pure conjecture. That being said, I imagine a model similar to Google Photos’ ‘People and Pets’ feature—which allows for the automatic recognition of beings—used throughout WhatsApp & Instagram:

Google Photos Segmentation / Online Tech Tips

  1. Automating Tags: Google Photos uses AI to recognise important figures in your life. No more filtering through thousands of images: simply click on a face and find the photo you want. Similarly, Instagram could use SAM to recognise & tag individuals, automatically identify locations/landmarks—and even search via text for specific objects or persons in any given image.

I can’t wait to see how big tech utilises AI systems such as SAM. The democratisation of such a model means we’ll likely see integrations across all industries—utilities, agriculture, retail & more. Where do you think SAM fits best?


What are your thoughts on SAM? Reply to this email and let me know!

— Alex