Why context-aware AI agents will give us superpowers in 2025




2025 is the year that big tech shifts from selling us bigger and more powerful devices to selling us bigger and more powerful capabilities. The difference between a tool and a capability is subtle but profound. We use devices as external artifacts that help us overcome our organic limitations. From cars and airplanes to phones and computers, machines greatly expand what we can achieve as individuals, in large teams and as whole civilizations.

Capabilities are different. We experience them in the first person, as self-embodied powers that feel internal and instantly accessible to our conscious minds. Language and mathematics, for example, are man-made technologies that we download into our brains and carry with us throughout our lives, expanding our abilities to think, create and collaborate. They are superpowers that feel so inherent to our existence that we rarely think of them as technologies at all. Fortunately, we don't have to buy a service plan for them.

The next wave of superpowers, however, will not come free of charge. But just like our abilities to think verbally and numerically, we will experience these powers as self-embodied capabilities that we carry around with us throughout our lives. I refer to this new field as augmented mentality, and it will emerge from the convergence of AI, conversational computing and augmented reality. In 2025, it will kick off an arms race among the world's largest companies to sell us superhuman abilities.

These new superpowers will be unleashed by context-aware AI agents loaded into body-worn devices (like AI glasses) that travel with us throughout our lives, seeing what we see, hearing what we hear and experiencing what we experience, giving us augmented abilities to perceive and interpret our world. In fact, by 2030 I predict that most of us will live our lives with the assistance of context-aware AI agents that bring digital superpowers into our normal everyday experiences.

How will our human future unfold?

First and foremost, we will whisper to these intelligent agents, and they will whisper back, acting as an omniscient alter ego that provides us with context-sensitive recommendations, knowledge, guidance, advice, spatial reminders, directional cues and other verbal and visual content that coaches us through our days and teaches us about our world.

Consider this simple scenario: You are walking downtown and spot a store across the street. You wonder what time it opens. So you pull out your phone and type (or speak) the name of the store. You quickly find the opening hours on its website, and perhaps review other information about the store as well. That is the basic tool-use computing model common today.

Now, let's look at how big tech will transition to a capability computing model.

Level 1: You are wearing AI glasses that see what you see, hear what you hear and process your surroundings through a multimodal large language model (LLM). Now, when you spot that store across the street, you simply whisper to yourself, "I wonder when it opens?" and a voice instantly rings back in your ears: "10:30 AM."

I know this seems like a small step beyond asking your phone to look up a store's name, but it will feel profound. That's because the context-aware AI agent shares your reality. It doesn't just track your location like GPS; it sees, hears and attends to what you are paying attention to. This will make it feel far less like a tool and far more like an internal ability linked to your first-person reality.
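To make that pipeline concrete, here is a minimal sketch in Python of the Level 1 loop: capture what the wearer sees, capture what they whispered, pass both to a multimodal model, answer through the earpiece. Every function here is a hypothetical placeholder of my own (capture_frame, transcribe_whisper, query_multimodal_model, speak), not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class GlassesContext:
    """One moment of shared first-person context."""
    image_jpeg: bytes   # the camera frame: what the wearer sees
    question: str       # the whispered query, already transcribed

def capture_frame() -> bytes:
    """Hypothetical camera hook on the glasses."""
    return b"\xff\xd8<jpeg bytes>"  # placeholder frame

def transcribe_whisper() -> str:
    """Hypothetical on-device speech-to-text for whispered speech."""
    return "I wonder when it opens?"

def query_multimodal_model(ctx: GlassesContext) -> str:
    """Stand-in for a call to a multimodal LLM. A real model would
    ground the question in the image (identify the store across the
    street), look up its hours and reply in a few words."""
    return "10:30 AM."

def speak(text: str) -> None:
    """Hypothetical text-to-speech through the glasses' earpiece."""
    print(f"[earpiece] {text}")

if __name__ == "__main__":
    ctx = GlassesContext(capture_frame(), transcribe_whisper())
    speak(query_multimodal_model(ctx))
```

The key design point is that the image and the question travel together: the model answers "when does it open?" about the specific store in the wearer's field of view, not about a store name typed into a search box.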

And when the AI-powered alter ego in our ears asks us a question, we will often respond by simply nodding our heads in affirmation (detected by motion sensors in the glasses) or shaking our heads to reject. It will feel so natural and seamless that we may not even consciously realize we have responded.
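How might that nod detection work? Here is a hedged sketch, assuming the glasses expose gyroscope readings as (pitch, yaw) angular velocities: nodding concentrates rotational energy on the pitch axis, head-shaking on the yaw axis. A real product would use a trained classifier, but the intuition fits in a few lines:

```python
from typing import List, Tuple

def classify_head_gesture(gyro: List[Tuple[float, float]],
                          threshold: float = 0.5) -> str:
    """Classify a short window of gyroscope samples.

    gyro: (pitch_rate, yaw_rate) angular velocities in rad/s.
    Nodding ("yes") is mostly rotation about the pitch axis;
    shaking ("no") is mostly rotation about the yaw axis.
    """
    pitch_energy = sum(abs(p) for p, _ in gyro)
    yaw_energy = sum(abs(y) for _, y in gyro)
    if max(pitch_energy, yaw_energy) < threshold * len(gyro):
        return "none"  # too little motion to count as a gesture
    return "yes" if pitch_energy > yaw_energy else "no"

# Toy samples: a nod alternates pitch rates; a shake alternates yaw rates.
nod = [(0.9, 0.05), (-0.8, 0.02), (0.7, 0.04), (-0.9, 0.03)]
shake = [(0.04, 1.0), (0.02, -0.9), (0.05, 0.8), (0.03, -1.0)]
print(classify_head_gesture(nod))    # -> "yes"
print(classify_head_gesture(shake))  # -> "no"
```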

Level 2: By 2030, we won't need to whisper to the AI agents that travel through life with us. Instead, we will be able to simply mouth the words, and the AI will know what we are saying by reading our lips and detecting activation signals from our muscles. I am confident that "mouthing" will be deployed because it is more private, more resilient in noisy spaces and, most importantly, it will feel more personal, internal and self-embodied.

Level 3: By 2035, you may not even need to mouth the words. That's because the AI will learn to interpret the signals in our muscles with such subtlety and precision that we will simply need to think about mouthing words to convey our intent. We will be able to focus our attention on any object or activity in our world, think about it, and useful information will ring back from our AI glasses like an omniscient voice in our heads.

In fact, the capabilities will go far beyond simply asking about the things around you. That's because the onboard AI that shares your first-person reality will learn to anticipate the information you want before you even ask for it. For example, when a colleague approaches down the hall and you can't remember his name, the AI will sense your hesitation, and a voice will chime in: "Gregg from engineering."
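A hedged sketch of how that kind of proactive recall could work, assuming the glasses can produce a face embedding for whoever you are looking at and can detect your hesitation (both big assumptions of mine, not anything a vendor has described): the agent stays silent unless you seem to need help, then does a nearest-neighbor lookup against your personal contacts.

```python
import math
from typing import Dict, List, Optional

def cosine(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recall_name(face: List[float],
                contacts: Dict[str, List[float]],
                hesitating: bool,
                min_sim: float = 0.8) -> Optional[str]:
    """Whisper a name only when the wearer seems to need it."""
    if not hesitating:
        return None  # no sign of trouble -> stay silent
    name = max(contacts, key=lambda n: cosine(contacts[n], face))
    return name if cosine(contacts[name], face) >= min_sim else None

# Toy data: embeddings are made up for illustration.
contacts = {
    "Gregg from engineering": [0.90, 0.10, 0.30],
    "Dana from sales": [0.10, 0.80, 0.20],
}
print(recall_name([0.88, 0.12, 0.31], contacts, hesitating=True))
# -> Gregg from engineering
```

The design choice worth noting is the hesitation gate: an anticipatory agent that volunteers information constantly would be unbearable, so the interesting engineering problem is deciding when to speak, not just what to say.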

Or when you pick up a can of soup in a store and wonder about the carbs, or whether it's cheaper at Walmart, the answers will simply ring in your ears or appear visually. It will even give you the superhuman ability to read the emotions on other people's faces, predict their moods, goals or intentions, and coach you during real-time conversations to make you more compelling, appealing or persuasive (see this fun video example).

I know some people will be skeptical of the adoption rate and the rapid time frame I predict above, but I don't make these claims lightly. I have spent much of my career working on technologies that augment and expand human abilities, and I can say without question that the mobile computing market is going to charge in this direction in a very big way.

Over the past 12 months, two of the world's most influential and innovative companies, Meta and Google, revealed their intentions to give us these self-embodied superpowers. Meta made the first big move by adding context-aware AI to its Ray-Ban glasses and by showing off its Orion mixed-reality prototype, which adds impressive visual capabilities. Meta is now well positioned to leverage its large investments in AI and extended reality (XR) to become a major player in the mobile computing market, and will likely do so by selling us superpowers we can't resist.

Not to be outdone, Google recently announced Android XR, a new AI-powered operating system for augmenting our world with seamless context-aware content. It also announced a partnership with Samsung to bring new glasses and headsets to market. With over 70% market share for mobile operating systems and an increasingly strong AI presence with Gemini, I believe Google is well positioned to be a leading provider of technology-enabled human superpowers over the next few years.

Of course, we have to consider the risks

To quote the famous 1962 Spider-Man comic: "With great power comes great responsibility." This wisdom is literally about superpowers. The difference is that the great responsibility will not fall on the consumers who buy these techno-powers, but on the companies that provide them and the regulators that oversee them.

After all, when wearing AI-powered augmented reality (AR) glasses, each of us could find ourselves in a new reality where technologies controlled by third parties can selectively alter what we see and hear, while AI-powered voices whisper in our ears with advice, information and guidance. While the intentions are positive, even magical, the potential for abuse is just as profound.

To avoid dystopian outcomes, my main recommendation to consumers and manufacturers is to adopt a subscription business model. If the arms race for selling superpowers is driven by which company can provide the most amazing new capabilities for a reasonable monthly fee, we will all benefit. If instead the business model becomes a competition to monetize superpowers by delivering the most effective targeted influence into our eyes and ears throughout our daily lives, consumers could be manipulated with a precision and pervasiveness we have never faced before.

Ultimately, these superpowers won't feel optional. After all, not using them could put us at a cognitive disadvantage. It is now up to industry and regulators to ensure that we roll out these new capabilities in a way that is not disruptive, manipulative or dangerous. I am confident this can be a magical new direction for computing, but it will require careful planning and oversight.

Louis Rosenberg founded Immersion Corp, Outland Research and Unanimous AI, and authored Our Next Reality.



