Our VActor™ system allows live talent to remotely perform interactive shows for audiences by directing 3D animated characters with their voice and controller-driven puppeteering. These shows can be delivered to physical locations via event screens, kiosks, and a variety of unique displays.
Simply described, AI-VActor™ is an entirely autonomous, directable interactive experience that brings IP/brand characters and well-known figures to animated life! Leveraging “AI with guardrails” to ensure characters never go off-brand or outside their domain, AI-VActor™ opens a new world of interactive advertising, marketing, entertainment, and educational exploration.
VActor™ Cloud will deliver live performed interactions to consumers virtually through mobile or web-based devices, opening a new medium for fandom and consumer-brand engagement. SimGraphics has partnered with Visaic, an enterprise AI, data, and video services company serving sports and entertainment, to help prototype and support SimGraphics AI-VActor™ Cloud services for consumers everywhere.
VActor™ technology enables performers to remotely operate interactive 3D animated characters, typically housed in a kiosk, exhibit, or larger event screen equipped with speakers so guests can hear the character, plus directional microphones and cameras so the performer can hear and see guests.
Voice performers can be located remotely from the displays, even globally, given a reliable internet connection, and use SimGraphics performance software to interact with guests. Our proprietary lip-sync animation ensures a character’s face and mouth animate in sync with the voice performance. A character can be puppeteered by the voice actor or by another operator using a standard game controller.
Our systems can be deployed as stand-alone displays built from off-the-shelf PCs, microphones, and cameras, and can also be delivered remotely over the web or to mobile devices. SimGraphics offers turnkey equipment packages for all VActor™ products.
We reuse existing 3D models where available, or can create a new 3D model from a customer’s concept. Model creation typically takes about 10-14 days, and creating the animation library typically takes about three weeks, depending on the desired movements and their complexity. At that point the character is ready to be performed.
The session itself, including all audio and video of the character and the guest, can optionally be recorded for offline viewing. These recordings can serve as source data for building an autonomous character (AI-VActor™). This allows performances to start quickly with live performers and transition over time to a fully autonomous experience.
Our new software allows fully autonomous 3D animated characters to engage in interactive conversations with guests — no live performers or operators needed. This expands IP/brand characters into the realm of interactive engagement, delivering multiple simultaneous experiences that extend a character’s reach beyond the capabilities of any single performer.
The spectrum of autonomous systems ranges from simple “state machines” to fully automated AI. Our technologies combine the power of natural language processing and computer vision with a highly directable framework that provides real-time engagement while remaining flexible enough to incorporate evolving AI solutions. We deliver experiences ranging from free-form to directed, able to guide guests toward desired topics or outcomes, ideal for advertising, education, and marketing.
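The “directable framework” idea can be illustrated with a minimal sketch: a conversation state machine whose transitions nudge guests toward a desired outcome, with off-domain input falling back to a redirect. The state and intent names here are illustrative assumptions, not SimGraphics APIs.

```python
# Minimal sketch of a directable conversation state machine.
# State names and intents are hypothetical, for illustration only.

TRANSITIONS = {
    # (state, guest intent)       -> next state
    ("greeting",      "hello"):      "smalltalk",
    ("greeting",      "product_q"):  "product_pitch",
    ("smalltalk",     "product_q"):  "product_pitch",
    ("smalltalk",     "other"):      "redirect",       # steer back on-topic
    ("redirect",      "product_q"):  "product_pitch",
    ("product_pitch", "goodbye"):    "parting",
}

def step(state, intent):
    """Advance the conversation; unrecognized input falls back to a
    redirect state, which is how 'guardrails' keep the character
    inside its approved domain."""
    return TRANSITIONS.get((state, intent), "redirect")

state = "greeting"
for intent in ["hello", "other", "product_q"]:
    state = step(state, intent)
print(state)  # "product_pitch": the guest was guided to the pitch
```

The same table-driven structure scales from a simple scripted flow to one whose intents are supplied by an NLP classifier, which is one way to read the “state machines to fully automated AI” spectrum.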
Systems can be deployed as stand-alone displays, and we work with the operator/licensee to define the location, environment, and install requirements using recommended off-the-shelf PCs, microphones, and cameras. The system runs on 2-3 PCs and can be remotely administered by SimGraphics for an additional monthly service fee or maintained by local IT staff. Coming soon, systems will also be deployable as cloud-based services for online and mobile users. SimGraphics recently partnered with Visaic to prototype and support AI-VActor™ cloud-based services.
Content is the combination of the IP/brand character model, animation library, environment, setting, and voice performance. The voice performance can be recorded directly from the official character’s voice talent, pulled from recorded archives, or potentially synthesized by one of our voice-synthesis partners. We create a recording plan to optimize capture of the necessary phrases, which are then merged to create the resulting interactive engagement.
An entirely new character takes 3-4 months to design, animate, and implement into the AI-VActor™ conversation engine. We can reuse existing 3D models and performance recordings where already available. Our planned content creation tools will allow customers to add their own characters and content; until those tools are available, SimGraphics must be involved in this process.
Content can be added, modified, or removed, and system logic can be tuned to change the experience’s overall direction. Adding content requires defining the desired content domain, creating the content, and integrating it into the system logic. An entirely new topic domain (e.g., supporting a new product release) typically takes 2-3 weeks to add. The content creation tools now in development will greatly reduce the time and effort required.
The system can handle any number of topic domains and can be directed to guide guests conversationally toward desired experiences. A session can include multiple characters that switch off with each other as directed by the system logic or as requested by the guest. The system can detect a guest’s presence or absence and initiate or terminate conversations as appropriate. It can also be directed to recognize and respond to basic objects the guest may have, enhancing the perception of conscious interaction.
Engagement is based on three basic factors: the amount of content available for discussion, the presence or absence of a guest, and a configurable session timer. The system keeps track of topics and interactions and avoids repetition. If the system runs out of topics, the guest leaves abruptly, or the session time limit is reached, the system transitions to a parting interaction and ends. Typical interactions last anywhere from one to three minutes.
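The three engagement factors above reduce to a simple session loop: pick the next undiscussed topic unless the guest is gone, the topic pool is empty, or the timer has expired. This is a minimal sketch under those stated assumptions; the class and method names are illustrative, not product APIs.

```python
import time

class Session:
    """Sketch of the session logic: ends when topics are exhausted,
    the guest leaves, or a configurable time limit is reached."""

    def __init__(self, topics, time_limit_s=180):
        self.remaining = list(topics)  # topics not yet discussed
        self.discussed = []            # tracked to avoid repetition
        self.time_limit_s = time_limit_s
        self.start = time.monotonic()

    def next_topic(self, guest_present):
        """Return the next topic, or None when the session should
        transition to the parting interaction and end."""
        elapsed = time.monotonic() - self.start
        if not guest_present or not self.remaining or elapsed >= self.time_limit_s:
            return None
        topic = self.remaining.pop(0)
        self.discussed.append(topic)   # never repeat a covered topic
        return topic

demo = Session(["greeting", "new_product", "trivia"], time_limit_s=180)
print(demo.next_topic(guest_present=True))   # "greeting"
print(demo.next_topic(guest_present=False))  # None: guest left, session ends
```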
Guest sessions, including all audio and video of the interactive experience, can optionally be recorded for offline viewing. A complete text transcription of the character-guest conversation, along with full context information, is automatically logged for offline data mining and analysis. This data provides a valuable new source of interactively captured information for marketing analysis and training.