Google’s New ‘Vantage’ Platform Uses AI Avatars To Test Critical Thinking, Collaboration, And Real-World Skills
In Brief
Google introduces Vantage, an experimental AI system to develop and assess future-ready human skills, including critical thinking, collaboration, creativity, conflict resolution, and project management, as AI advances.
Vantage is an AI-powered experimental system designed to support the development and assessment of these competencies through simulated interaction environments. It was developed in collaboration with pedagogy experts and researchers, including contributors from New York University, and is intended to function as a structured sandbox where students can practice, and be evaluated on, future-ready skills using methodologies similar to those applied in core academic subjects such as mathematics or science. The system is currently available in English via Google Labs.
The process works by placing users in simulated multi-agent environments where they interact with AI-generated avatars in open-ended scenarios such as debates, collaborative problem-solving tasks, or project planning exercises. Within this setup, a coordinating “Executive LLM” uses predefined assessment frameworks to guide the interaction and dynamically adjust conversational conditions. This includes introducing disagreement, challenging assumptions, or steering the direction of the dialogue to generate observable behavioural evidence relevant to the targeted skills.
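The article does not publish Vantage’s internals, but the steering logic it describes can be sketched roughly as follows. Everything here — the class name `ExecutiveLLM`, the rubric structure, and the intervention names — is a hypothetical illustration of “dynamically adjust conversational conditions,” not Google’s actual API.

```python
# Hypothetical sketch: a coordinator that picks the next conversational
# intervention to target whichever skill has the least observed evidence.
from dataclasses import dataclass, field

# Illustrative rubric: skill -> intervention that elicits evidence for it.
RUBRIC = {
    "conflict_resolution": "introduce_disagreement",
    "critical_thinking": "challenge_assumption",
    "project_management": "steer_to_planning",
}

@dataclass
class ExecutiveLLM:
    """Coordinating model that steers the multi-agent simulation."""
    evidence: dict = field(default_factory=dict)

    def observe(self, skill: str) -> None:
        # Record that the user just produced evidence for a targeted skill.
        self.evidence[skill] = self.evidence.get(skill, 0) + 1

    def next_intervention(self) -> str:
        # Target the skill with the least observable evidence so far.
        target = min(RUBRIC, key=lambda s: self.evidence.get(s, 0))
        return RUBRIC[target]

coordinator = ExecutiveLLM()
coordinator.observe("critical_thinking")
coordinator.observe("conflict_resolution")
print(coordinator.next_intervention())  # -> steer_to_planning (no evidence yet)
```

The design choice sketched here — steering toward under-evidenced skills — mirrors the study finding below that adaptive steering yields a higher density of assessable evidence than non-directed conversation.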
Simulation-Based AI Framework For Assessing Future-Ready Skills
Once a task is complete, a separate AI evaluation model analyses the full interaction. Using the same structured rubrics, it assesses the conversation transcript and produces a detailed performance profile that maps observed behaviours to specific skill categories. The output includes both quantitative scoring and qualitative feedback, translating complex interpersonal interactions into structured and measurable indicators of skill performance.
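The shape of that performance profile can be sketched in a few lines. This is an illustrative stand-in: the real evaluator is an AI model applying structured rubrics, whereas the keyword heuristic, score scale, and rubric cues below are assumptions made for the example.

```python
# Hypothetical sketch: map transcript behaviours to per-skill scores
# plus qualitative evidence, mimicking the "performance profile" output.
RUBRIC_CUES = {
    "collaboration": ["we could", "let's", "agree"],
    "conflict_resolution": ["compromise", "understand your point"],
}

def evaluate_transcript(transcript: list[str]) -> dict:
    """Return a profile with quantitative scores and qualitative evidence."""
    profile = {}
    for skill, cues in RUBRIC_CUES.items():
        hits = [line for line in transcript
                if any(cue in line.lower() for cue in cues)]
        profile[skill] = {
            "score": min(len(hits), 5),   # quantitative: capped 0-5 scale
            "evidence": hits[:3],         # qualitative: example utterances
        }
    return profile

profile = evaluate_transcript([
    "We could split the research tasks between us.",
    "I understand your point, but the deadline is fixed.",
])
print(profile["collaboration"]["score"])  # -> 1
```

A production system would replace the keyword matching with model-based judgment, but the output contract — scores plus cited evidence per skill category — is the part the article describes.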
To ensure methodological reliability, the system has been tested in partnership with New York University through controlled studies involving 188 participants aged 18 to 25. These evaluations focused on collaboration-related competencies such as conflict resolution and project coordination. Results indicated that adaptive AI-driven conversational steering generated a higher density of assessable skill evidence than non-directed interaction models, while maintaining coherent and natural dialogue flow across multiple tasks.
Additional validation with external partners, including OpenMic, extended testing to creative and language-based tasks involving multimedia and literature-based exercises. In these cases, AI-generated evaluations demonstrated strong correlation with expert human scoring, reinforcing the system’s potential applicability beyond structured teamwork scenarios into more open-ended creative domains.
Such simulation-based systems could be integrated into educational environments as an additional evaluative layer alongside traditional assessment methods in the near future. This would enable students to be evaluated not only on subject knowledge but also on applied interpersonal and cognitive skills within controlled simulated settings. The broader aim of the research is to make future-ready competencies more measurable at scale and to align educational evaluation more closely with evolving workforce demands.