Physical Intelligence Asset Reuse Taskforce

Vessel Technologies

Secure Infrastructure for Digital Model Exchange

PI-ART provides the missing national backbone that allows physical-AI progress to compound instead of resetting each program cycle — enabling repeatable, secure, and program-office-controlled evaluation of autonomy systems across performers and programs.

UEI: Q59ZG6AJ8UK9 · CAGE: 18F96 · Location: Pleasant View, Utah 84414 · Phone: 970-988-0453
The Problem

Physical-AI Progress Resets Every Program Cycle

DARPA and DoD programs spend tens of millions generating environments, USD models, and simulation infrastructure — yet trained models are not preserved in an executable, reusable form. Subsequent programs are forced to start from zero.

Fragmented Asset Exchange

No centralized or secure marketplace for non-open-source assets. Transfers rely on manual processes, time-bound links, and informal coordination with insufficient access controls.

Lack of Standardization

Incompatible file formats, simulator versions, and configurations force each performer to independently convert assets. No standardized APIs for sensors, dynamics, kinematics, or I/O.

No Authoritative Evaluation Framework

Versioning of environments, robots, simulators, and scoring functions is not systematically enforced. Cross-performer benchmarking is slow, manual, or infeasible.

The Solution

PI-ART in Plain Language

PI-ART answers one critical question: “Can this robot perform this task in this environment?”

Three Core Asset Types

Environments
Digital representations of the physical world — terrain, buildings, obstacles, weather
Robots
Models of robotic platforms including sensors, actuators, and physical constraints
Tasks
Definitions of what the robot must do — navigate, manipulate, complete mission objectives

Evaluation Pipeline

  1. Select one environment, one robot, and one task from the secure library to define a test scenario
  2. A performer submits an autonomy solution (foundation model or learned policy) claiming to solve that scenario
  3. PI-ART executes a standardized pipeline — containerized simulation via NVIDIA Omniverse — measuring performance with consistent metrics
  4. Results, logs, and artifacts are stored with version-locked assets, ensuring evaluations are comparable, repeatable, and trustworthy across performers
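The version-locking idea behind steps 1 and 4 can be sketched in a few lines. The sketch below is illustrative only — the `Scenario` fields, version-pin syntax, and ID scheme are assumptions, not the PI-ART specification — but it shows how pinning every asset version and deriving a deterministic scenario ID makes results comparable across performers:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)
class Scenario:
    """One evaluation scenario: environment + robot + task, all version-pinned.
    Field names and the name@version pin format are hypothetical."""
    environment: str   # e.g. "urban_block@1.4.0"
    robot: str         # e.g. "quadruped_a1@2.1.3"
    task: str          # e.g. "waypoint_nav@1.0.0"
    simulator: str     # e.g. "isaac-sim@4.2"

    def scenario_id(self) -> str:
        # Deterministic ID: identical version pins always hash to the same
        # scenario, so two performers' results refer to the same frozen setup.
        blob = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

s = Scenario("urban_block@1.4.0", "quadruped_a1@2.1.3",
             "waypoint_nav@1.0.0", "isaac-sim@4.2")
print(s.scenario_id())
```

Bumping any single asset version yields a different scenario ID, so an evaluation can never silently mix results from mismatched environments.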

Platform Capabilities

What Vessel Technologies Delivers

🔒

Secure Digital Asset Exchange

Curated library of standardized environments, robots, and policies stored in secure cloud infrastructure with role-based access control, managed onboarding, and large-file transfer support.

⚙️

Standardization & Reproducibility

Unified asset representations using USD for environments and robots, with versioned model artifacts. Standardized APIs abstract implementation details across sensors, dynamics, kinematics, and I/O.
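The "standardized APIs abstract implementation details" claim can be illustrated with a minimal sketch. Everything here — the `Sensor` contract, `SimulatedLidar`, and `run_step` — is hypothetical, not PI-ART's actual interface; the point is only that autonomy code targets a stable contract rather than a specific simulator backend:

```python
from abc import ABC, abstractmethod

class Sensor(ABC):
    """Hypothetical standardized sensor contract: performers code against
    this interface, never against a particular simulator's API."""
    @abstractmethod
    def read(self) -> dict:
        ...

class SimulatedLidar(Sensor):
    """Stub backend; a real one would query the simulator for range data."""
    def __init__(self, num_beams: int = 16):
        self.num_beams = num_beams

    def read(self) -> dict:
        # Returns fixed ranges purely for illustration.
        return {"type": "lidar", "ranges": [10.0] * self.num_beams}

def run_step(sensor: Sensor) -> int:
    # Autonomy code sees only the Sensor contract, so swapping the
    # backend (different simulator, different version) changes nothing here.
    return len(sensor.read()["ranges"])

print(run_step(SimulatedLidar(32)))  # 32
```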

🧪

Containerized Evaluation

Standardized evaluation pipelines execute via NVIDIA Omniverse (Isaac Sim, Isaac Lab). Containerized execution ensures consistency across performers with automated provisioning and teardown.
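A containerized evaluation job of this kind typically reduces to a reproducible container invocation. The sketch below builds one such command; the image name, environment variable, and mount layout are assumptions for illustration, not the PI-ART deployment:

```python
def evaluation_command(image: str, scenario_id: str,
                       results_dir: str = "/results") -> list[str]:
    """Build a docker run invocation for one evaluation job.
    Image name, SCENARIO_ID variable, and mount paths are hypothetical."""
    return [
        "docker", "run", "--rm",          # remove container after teardown
        "--gpus", "all",                  # GPU passthrough for simulation
        "-e", f"SCENARIO_ID={scenario_id}",
        "-v", f"{results_dir}:/workspace/results",  # persist logs/artifacts
        image,
    ]

cmd = evaluation_command("piart/eval:latest", "a1b2c3d4e5f6")
print(" ".join(cmd))
```

Because every run is launched from the same pinned image with the same scenario ID, two performers executing the command get byte-identical software environments.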

🏛️

Governance & Collaboration

Performer-based organizational structure with delegated administrators, enforced role assignment and access control, storage quotas, and integration with DARPA and DoD research programs.

Technical Architecture

Built on AWS, NVIDIA, and DoD Standards

Deployed on AWS Cloud with secure authentication, authorization, asset upload, versioning, and controlled download. Evaluation workflows execute within Docker containers running NVIDIA Omniverse tools — producing traceable logs, metrics, and artifacts aligned with DoD T&E expectations.

Transition Readiness

% of trained policies remaining executable after program closeout

Time to Reuse

Reduction in time for new performers to run existing models on validated environments

Evaluation Disputes

Eliminated through version-locked assets and standardized pipelines

Asset Persistence

% of program-funded environments, robots, and models retained for transition

Proposed Execution

Three-Phase Roadmap

01
Phase 1 · FY26

Core Infrastructure

Deliverable

Secure multi-performer asset repository, access controls, and version-locked environment and robot asset exchange.

Success Milestone: At least four TIAMAT performer groups actively exchanging shared assets.
02
Phase 2 · FY27

Standardization & Validation

Deliverable

Automated ingestion pipelines, standardized environment and robot validation, and policy compatibility checks.

Success Milestone: Four or more performers executing identical environments with reproducible results.
03
Phase 3 · FY28

Authoritative Evaluation

Deliverable

Containerized evaluation, scoring, and audit-ready reporting across programs.

Success Milestone: Contingent on successful multi-performer reproducibility in Phase 2.
Collaboration & Integration

Government & Academic Partners

Designed for integration with DARPA TIAMAT and adjacent programs across DoD, national labs, and leading research universities.

DARPA · Stanford University · Oregon State University · University of Pennsylvania · UC Berkeley · University of Central Florida · Florida International University · University at Buffalo · University of Maryland · Johns Hopkins University · GE Research · University of Michigan · SRI International · UT Austin · Duke University · Army Research Laboratory (ARL) · Naval Research Laboratory (NRL) · NIST · AFRL · TRMC
NAICS Codes
  • 541715 – R&D in Physical, Engineering & Life Sciences
  • 518210 – Computing Infrastructure & Web Hosting
  • 541512 – Computer Systems Design Services
PSC Codes
  • R425 – Engineering/Technical Support
  • 7H20 – IT & Telecom: Platform
  • 7A20 – IT & Telecom: Application Development
  • 7J20 – IT & Telecom: Security & Compliance
Get In Touch

Ready to Learn More?

Whether you are a government program office, DoD research laboratory, or industry partner — reach out directly to discuss how PI-ART can support your program.

marksoulier@vessel-technologies.com