
ZenO Opens Public Beta to Gather First-Person Audio-Visual Data for Physical AI, Built on Story Blockchain

Palo Alto startup launches a rights-aware pipeline for anonymized egocentric data capture to train robots and embodied models

By Hana Yamamoto

ZenO has begun a public beta to collect, anonymize, and structure first-person audio, video, and image data from smart glasses and smartphones to support the training of physical AI systems. The platform leverages Story's Layer-1 blockchain technology for onchain consent and provenance tracking, and the company is participating in the NVIDIA Inception program to access GPU resources and technical support. The beta will run for roughly 6-8 weeks and aims to validate end-to-end data capture, QA, anonymization, and marketplace storage workflows, while offering contributors immediate rewards and revenue sharing for downstream sales.

Key Points

  • ZenO launched a public beta to collect first-person audio, video, and image data for training physical AI systems, using smart glasses and smartphones.
  • The platform records wallet-signed consent and dataset identifiers on Story's Layer-1 blockchain, with programmable data-rights and licensing features planned to be written to Story in future releases.
  • Contributor economics include immediate XP rewards during the beta and revenue sharing in stablecoins if contributed data is sold, aligning incentives with long-term data quality and commercial demand.

Palo Alto, CA, February 6, 2026 - ZenO has opened a public beta for a new data collection platform that targets a core shortfall in physical AI development: access to rights-cleared, first-person real-world data. The company said the service will gather egocentric audio, video, and image streams captured from ZenO-branded smart glasses and participating smartphones, process and anonymize the footage, and catalog approved datasets for use in training robots, autonomous agents, and embodied AI models.

ZenO positions the beta as a response to a growing gap between where physical AI research has reached and the data available to bring those systems into real-world production. The startup argues that models trained mainly on web-scraped content or simulated environments face limitations when confronted with the messy, variable conditions of everyday human activity. By collecting what people actually see, hear, and do from a first-person perspective, ZenO aims to create datasets that enable models to better perceive, generalize, and operate reliably beyond laboratory settings.

The platform is built on Story's Layer-1 blockchain, which ZenO will use to record wallet-signed consent and dataset identifiers onchain during the beta. The company says this approach creates a verifiable record of contributor authorization and dataset provenance, and it plans further data rights and IP management features for a future release. ZenO also intends to write metadata and licensing terms for user-generated datasets to Story in later stages, enabling what it describes as programmable data rights, transparent licensing, and automated revenue distribution.
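
As a rough illustration of the pattern described above, the TypeScript sketch below has a contributor's wallet sign a consent statement and writes the signature alongside a dataset identifier to a registry contract. The contract ABI, address, and message wording are hypothetical assumptions for illustration only; neither ZenO's nor Story's actual interfaces are published in this announcement.

```typescript
// Hedged sketch (ethers v6) of wallet-signed consent with an onchain record.
// The registry contract and its ABI are hypothetical, not a real ZenO or
// Story interface.
import { ethers } from "ethers";

const REGISTRY_ABI = [
  "function recordConsent(bytes32 datasetId, bytes signature) external",
];

async function recordConsent(
  signer: ethers.Signer,
  registryAddress: string,
  datasetId: string // e.g., a content hash of the uploaded dataset
): Promise<string> {
  // 1. The contributor signs a human-readable consent statement.
  const message = `I consent to license dataset ${datasetId} to ZenO.`;
  const signature = await signer.signMessage(message);

  // 2. The signed consent and dataset identifier are written onchain,
  //    creating a timestamped, verifiable provenance record.
  const registry = new ethers.Contract(registryAddress, REGISTRY_ABI, signer);
  const tx = await registry.recordConsent(ethers.id(datasetId), signature);
  await tx.wait(); // wait for inclusion in a block

  return tx.hash; // the transaction hash doubles as an audit reference
}
```

Because the signature is stored onchain, any later buyer could recompute the message and verify that the named wallet actually authorized the dataset.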

ZenO has joined the NVIDIA Inception program to accelerate development of its Physical AI Data Network. The startup said participation grants access to NVIDIA's GPU ecosystem, technical expertise, cloud infrastructure benefits, and go-to-market resources. According to ZenO, this support will help scale the enterprise-grade, rights-cleared data infrastructure required to train robotics and embodied AI systems for complex physical environments.


Beta scope and operational flow

The public beta builds on an existing minimum viable product, which ZenO hosts at https://app.zen-o.xyz/. The test period will run for approximately 6-8 weeks and is focused on validating the full product pipeline from capture through anonymization and marketplace ingestion.

During the beta, contributors can (see the pipeline sketch after this list):

  • Capture continuous first-person audio, video, and images using ZenO-branded smart glasses or smartphones;
  • Upload footage via ZenO's application for automated formatting and integrity checks;
  • Have submissions pass through a multi-stage quality assurance process combining AI-based screening and human review;
  • Apply automated anonymization tools to remove or obscure sensitive information such as faces and identifiable text;
  • Add structured metadata that describes actions and environments after anonymization; and
  • Have approved datasets securely stored and cataloged within ZenO's marketplace infrastructure.
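
A simplified model of that submission lifecycle appears below; the stage names and types are inferred from the description above rather than taken from ZenO's codebase.

```typescript
// Illustrative pipeline stages for a contributor submission. Names are
// hypothetical, inferred from the press release.
type Stage =
  | "captured"   // recorded on smart glasses or a smartphone
  | "uploaded"   // automated formatting and integrity checks passed
  | "qa_passed"  // AI screening plus human review
  | "anonymized" // faces and identifiable text removed or obscured
  | "labeled"    // structured action/environment metadata attached
  | "cataloged"; // securely stored in the marketplace infrastructure

interface Submission {
  datasetId: string;
  stage: Stage;
  metadata?: { actions: string[]; environment: string };
}

// Stages must be passed in order; a failure at any gate (e.g., QA
// rejection) halts progression toward the marketplace.
const ORDER: Stage[] = [
  "captured", "uploaded", "qa_passed", "anonymized", "labeled", "cataloged",
];

function advance(s: Submission, next: Stage): Submission {
  if (ORDER.indexOf(next) !== ORDER.indexOf(s.stage) + 1) {
    throw new Error(`invalid transition: ${s.stage} -> ${next}`);
  }
  return { ...s, stage: next };
}
```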

ZenO emphasizes that egocentric data - what a person sees and hears from their own perspective - differs materially from synthetic datasets or scraped online content, and that physical AI models require this type of ground truth to improve real-world performance.


Contributor incentives and hardware options

The company uses a two-stage incentive model for contributors. During the beta, participants receive immediate rewards denominated in XP. If datasets are sold downstream, contributors will share in revenue, with payouts made in stablecoins. ZenO describes this structure as aligning contributor incentives with long-term data quality and commercial demand rather than one-off labeling tasks.
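
To make the second stage concrete, the sketch below splits a contributor pool from a dataset sale pro rata by contribution weight. The split fraction, weighting scheme, and payout mechanics are illustrative assumptions, not parameters ZenO has disclosed.

```typescript
// Hedged sketch of revenue sharing on a downstream dataset sale.
// All amounts and ratios are illustrative assumptions.
interface Contribution {
  contributor: string; // wallet address
  weight: number;      // e.g., quality-adjusted share of the dataset
}

function settleSale(
  salePriceUsd: number,
  contributorShare: number, // fraction of the sale routed to contributors
  contributions: Contribution[]
): Map<string, number> {
  const pool = salePriceUsd * contributorShare;
  const totalWeight = contributions.reduce((sum, c) => sum + c.weight, 0);
  const payouts = new Map<string, number>();
  for (const c of contributions) {
    // Stablecoin payout proportional to the contributor's weight.
    payouts.set(c.contributor, pool * (c.weight / totalWeight));
  }
  return payouts;
}

// Example: a $1,000 sale with a 50% contributor share splits $500
// between two contributors weighted 3:1 -> $375 and $125.
```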

ZenO's smart glasses are produced through an OEM relationship and released under the ZenO brand. The startup says the glasses support audio and video capture, hands-free operation, and all-day wearability, with specifications comparable to leading consumer smart glasses. Contributors can alternatively complete missions using smartphones depending on the data-collection requirements.


Onchain record-keeping and future rights management

During the beta, ZenO records wallet-signed consent and dataset identifiers onchain to create verifiable traces of contributor assent and dataset provenance. The company noted that full intellectual property and data-rights management capabilities are planned for a subsequent product release. The roadmap envisions writing dataset metadata and licensing information onto Story to enable programmatic licensing and automated revenue distribution in the future.
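
The roadmapped metadata-and-licensing record might take a shape like the following sketch; every field name here is an assumption, since the announcement does not detail Story's registration schema.

```typescript
// Hypothetical shape of a dataset licensing record written to Story.
// Field names are assumptions for illustration only.
interface DatasetLicenseRecord {
  datasetId: string;     // content-addressed dataset identifier
  contributor: string;   // wallet address of the consenting contributor
  consentTxHash: string; // onchain reference to the signed consent
  licenseTerms: {
    commercialUse: boolean;
    revenueShareBps: number; // contributor share in basis points
  };
  createdAt: number; // unix timestamp of registration
}
```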

"The real world doesn't look like the internet," said Dawn Kim, Co-Founder of ZenO. "Physical AI systems need high-quality, rights-cleared, first-person data captured in real environments. This beta is about proving the foundation for how that data can be collected, structured, and used to train models that actually work outside the lab."


Partner engagement and next steps

ZenO said it is working with early data demand partners and plans to disclose traction metrics following the beta period. The beta is intended to validate end-to-end technical flows and contributor economics ahead of broader commercialization efforts.

For those interested in participating, ZenO directs readers to https://zen-o.xyz for more information or to join the beta.


About ZenO

ZenO is a Physical AI data collection platform focused on capturing first-person human behavior for training robotics and embodied AI systems. The platform facilitates the upload of video and image data generated from everyday activities via smart glasses and smartphones, and is designed to support scalable, compliant collection of real-world data for next-generation Physical AI systems.

About Story

Story is an AI-native blockchain network intended to operate as a provenance, licensing, and economic layer for AI data and models. Powered by the $IP token, Story aims to enable datasets, models, and AI outputs to be registered as intellectual property, licensed programmatically, and monetized with built-in attribution. Backed by $136 million from investors including a16z crypto, Polychain Capital, and Samsung Ventures, Story launched its mainnet in February 2025 and is positioning itself as infrastructure for the AI data economy.

Risks

  • The beta is limited to validating the end-to-end capture, QA, anonymization, and marketplace workflows over a 6-8 week period; further product capabilities and traction metrics will be disclosed only after the beta concludes.
  • Full intellectual property and data rights management functionality is not yet implemented and is planned for a future release, leaving some governance and licensing capabilities untested during the beta.
  • Reliance on automated anonymization and a combined AI-human QA process introduces uncertainty about whether sensitive information will be consistently and adequately anonymized across diverse real-world captures.
