
With the rapid advancement of generative AI technologies, distinguishing between authentic and synthetic content has become increasingly challenging. Young people frequently encounter potentially misleading or AI-generated material across various platforms—often without the tools to recognize it. Current verification methods typically require users to interrupt their browsing experience and use separate services to assess content reliability.
The AI Information Assistant helps users identify AI-generated content and access verification resources—directly within their regular browsing experience. For any content of interest, users receive objective indicators about source attributes, content authenticity, and verification status from established information quality partners. The assistant also offers educational context to help users better understand content origins and reliability.
We're developing a mobile application that works alongside popular social and information platforms. Users can interact with content through simple gestures to receive verification insights powered by content analysis technology, reference databases, and information quality partnerships. Created with input from diverse young users, the interface emphasizes accessibility, transparency, and educational value while respecting different perspectives.

Currently in development, the AI Information Assistant is being designed collaboratively with students, academic researchers, and media professionals from diverse backgrounds. Beta testing is scheduled for late 2025. The project aims to equip users with practical digital literacy skills for evaluating information quality and recognizing synthetic content, making verification accessible without requiring specialized expertise.