Application of Robotic Crane and AR Technology in News Studios
When applied in TV news studios, robotic crane and AR technology bring innovative changes to news presentation and significantly enhance the efficiency and quality of program production.
A robotic camera system offers precise control over parameters such as pan, tilt, movement speed, trajectory, zoom, focus, and aperture. This level of control is crucial for ensuring the accuracy and stability of camera movements, especially in scenarios where smooth, shake-free transitions are essential.
In news program recording, camera movements typically transition from wide panoramic shots to medium and close-up shots. These movements tend to be slow-paced, which places high demands on the precision and stability of the robotic camera system, especially in achieving a steady, shake-free final shot. News studios also often mount devices such as a teleprompter and instant-replay equipment in front of the camera, so the robotic system must maintain stability, accuracy, and the safety of highly repeatable trajectories even under a heavy camera payload. This is why, when selecting robotic camera systems for political news program production, significant emphasis is placed on precision, stability, and the overall reliability of safe operation.
There are several key reasons why SEEDER's robotic crane is the preferred choice on the market. Firstly, it supports remote camera operation over network transmission, and its robotic control panel offers a nearly lag-free operating experience that sets it apart from similar models. It automatically calculates the optimal acceleration and deceleration from the move distance and time, ensuring stable shooting; it excels in particular at high-speed start-stop anti-shake and multi-point continuous trajectory movement, operating without lag and delivering smooth, steady performance.
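As a rough illustration of how acceleration and deceleration can be derived from a move's distance and allotted time, the following is a minimal sketch of a trapezoidal motion profile in Python. The blend fraction and the symmetric acceleration/deceleration phases are assumptions for illustration, not SEEDER's actual motion planner.

```python
def trapezoidal_position(distance, total_time, t, blend=0.25):
    """Position along a single crane move at time t.

    The move accelerates for `blend * total_time`, cruises, then decelerates
    symmetrically, so the shot starts and ends without a visible jolt.
    Given the move distance and the time allotted by the operator, the cruise
    speed and acceleration follow directly.
    """
    t_a = blend * total_time                 # acceleration (= deceleration) time
    v_max = distance / (total_time - t_a)    # cruise speed
    a = v_max / t_a                          # required acceleration
    t = min(max(t, 0.0), total_time)
    if t < t_a:                              # accelerating
        return 0.5 * a * t * t
    if t < total_time - t_a:                 # cruising
        return 0.5 * a * t_a * t_a + v_max * (t - t_a)
    return distance - 0.5 * a * (total_time - t) ** 2   # decelerating
```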
Secondly, the robotic system is equipped with collision radar and an emergency-stop function, which enhance safety in real-world use by preventing accidents and potential hazards.
Additionally, the AI facial tracking function uses the primary camera's SDI output signal for facial recognition and skeletal tracking, and it can lock onto recognized subjects, which offers practical benefits.
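SEEDER's tracking implementation is not public, so purely as a conceptual sketch, the snippet below runs OpenCV's stock face detector on frames captured from the SDI feed (via a capture card, an assumption here) and converts the offset of the largest detected face from the frame centre into pan/tilt corrections. The detector choice, gain, and sign conventions are illustrative assumptions, not the vendor's algorithm.

```python
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def pan_tilt_correction(frame, gain=0.05):
    """Return (pan, tilt) corrections that re-centre the largest detected face.

    Positive pan is taken as "turn right", positive tilt as "tilt up";
    the sign convention and gain are placeholders for the real controller.
    """
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return 0.0, 0.0                                   # no face: hold framing
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])    # lock onto largest face
    frame_h, frame_w = frame.shape[:2]
    err_x = (x + w / 2) - frame_w / 2                     # horizontal offset from centre
    err_y = frame_h / 2 - (y + h / 2)                     # vertical offset from centre
    return gain * err_x, gain * err_y
```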
Finally, the robotic system demonstrates excellent noise control during operation. Together, these advantages make this robotic crane the preferred choice on the market.
Integration and Compatibility of Robotic Cranes with AR Systems
1. Calibration between Robotic Cranes and AR Systems
In this system, precise and stable coordination between AR systems and robotic cranes is crucial. The first step is to calibrate the AR system with the robotic crane.
This involves determining the six degrees of freedom for the spatial positioning of the robotic crane: X, Y, Z, Pan, Tilt, and Roll, as well as the Focus and Zoom of the camera lens.
Through calibration, the virtual camera is fully driven by the real camera. The real camera's positional information (X, Y, Z) and its pan, tilt, roll, zoom, and focus data are transmitted to the rendering engine via the Free-D protocol; the rendering engine then maps this information onto the virtual camera to achieve precise position matching.
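To make this mapping concrete, here is a minimal sketch (Python/NumPy, not the render engine's actual API) of building a virtual camera pose from the tracked parameters. The axis conventions, rotation order, and units are assumptions that must be matched to the engine actually in use.

```python
import numpy as np

def virtual_camera_pose(x, y, z, pan_deg, tilt_deg, roll_deg):
    """Build a 4x4 camera-to-world transform from tracked crane data.

    Assumed convention: pan rotates about the vertical (Z) axis, tilt about
    the camera's local X axis, roll about its local Y axis; positions in metres.
    The real render engine's axis/rotation conventions must be matched in practice.
    """
    p, t, r = np.radians([pan_deg, tilt_deg, roll_deg])
    Rz = np.array([[np.cos(p), -np.sin(p), 0],
                   [np.sin(p),  np.cos(p), 0],
                   [0,          0,         1]])
    Rx = np.array([[1, 0,          0],
                   [0, np.cos(t), -np.sin(t)],
                   [0, np.sin(t),  np.cos(t)]])
    Ry = np.array([[ np.cos(r), 0, np.sin(r)],
                   [ 0,         1, 0],
                   [-np.sin(r), 0, np.cos(r)]])
    pose = np.eye(4)
    pose[:3, :3] = Rz @ Rx @ Ry          # pan, then tilt, then roll
    pose[:3, 3] = [x, y, z]              # crane-reported camera position
    return pose
```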
2. Research on Free-D Protocol
The Free-D protocol serves as the communication protocol between the robotic crane and the AR system. In a Free-D system, each camera requires a Free-D processing unit, primarily responsible for calculating the camera's real-time position and orientation within the studio. This information is packed into a data packet and sent to the AR system over UDP. Although UDP packets can be broadcast, the robotic crane sends its tracking data to a specified IP address (unicast), which effectively avoids network storms. Depending on the camera lens selected, an integrated Free-D lens-delay feature is available for ease of use.
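For reference, the commonly used Free-D type-D1 message is a short fixed-length packet carrying the camera ID, pan/tilt/roll, X/Y/Z position, zoom, focus, and a checksum. The sketch below decodes such a packet received over unicast UDP; the field layout and scaling (angles in 1/32768 degree, positions in 1/64 mm) and the listening port follow common implementations and are assumptions to be verified against the actual equipment's documentation.

```python
import socket

FREE_D_D1_LEN = 29  # commonly documented length of a type-D1 message

def _s24(b):
    """Decode a big-endian signed 24-bit integer."""
    v = int.from_bytes(b, "big")
    return v - (1 << 24) if v & 0x800000 else v

def parse_free_d_d1(pkt):
    """Parse a Free-D type-D1 packet (layout/scaling per common implementations)."""
    if len(pkt) != FREE_D_D1_LEN or pkt[0] != 0xD1:
        raise ValueError("not a D1 message")
    if (0x40 - sum(pkt[:-1])) % 256 != pkt[-1]:
        raise ValueError("checksum mismatch")
    return {
        "camera_id": pkt[1],
        "pan":   _s24(pkt[2:5])   / 32768.0,   # degrees
        "tilt":  _s24(pkt[5:8])   / 32768.0,
        "roll":  _s24(pkt[8:11])  / 32768.0,
        "x":     _s24(pkt[11:14]) / 64.0,      # millimetres
        "y":     _s24(pkt[14:17]) / 64.0,
        "z":     _s24(pkt[17:20]) / 64.0,
        "zoom":  int.from_bytes(pkt[20:23], "big"),   # raw lens encoder value
        "focus": int.from_bytes(pkt[23:26], "big"),
    }

def listen(host="0.0.0.0", port=40000):
    """Receive unicast tracking data on the port the AR render engine listens to."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    while True:
        data, _ = sock.recvfrom(64)
        print(parse_free_d_d1(data))
```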
3. Collaborative Adjustment between Robotic Cranes and AR Systems
During system installation, the AR virtual composite signal initially entered the switcher directly. However, because computing and generating the AR virtual scene takes time, a delay of 3 to 4 video frames can occur, causing audio-visual asynchrony. In addition, when the virtual host has no signal output, the screen goes black, which is inconvenient in use. To address this, we configure the virtual host for external keying, changing its input and output signals so that key and fill signals are output from the virtual host and connected to the switcher. The AR virtual scene is then overlaid via an external key, which resolves the audio-visual synchronization problem; even when the virtual host has no signal output, the real studio scene is unaffected.
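To put that delay in perspective (assuming a 25 fps production standard, which is an assumption here), 3 to 4 frames correspond to roughly 120 to 160 ms, which is large enough to be perceived as a lip-sync error:

```python
def render_delay_ms(frames, fps=25.0):
    """Convert a rendering delay measured in video frames to milliseconds."""
    return frames * 1000.0 / fps

# With an assumed 25 fps signal, a 3-4 frame AR rendering delay is 120-160 ms.
for f in (3, 4):
    print(f"{f} frames -> {render_delay_ms(f):.0f} ms")
```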
The AR system works with the robotic crane system to enable virtual foreground insertion in the studio. The video captured by the camera on the robotic crane and the tracking signal both enter the virtual host, so the virtual scene and the real camera image change synchronously. The virtual foreground is adapted to the real video, and the final image is superimposed and output via key and fill signals.
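The overlay itself follows the standard linear keying relationship. A minimal sketch, assuming 8-bit images and a fill signal that is not premultiplied by the key, is:

```python
import numpy as np

def external_key_composite(background, fill, key):
    """Overlay an AR foreground onto the camera image using key/fill signals.

    background, fill: HxWx3 uint8 images; key: HxW uint8 matte (0-255).
    Assumes the fill is not premultiplied by the key.
    """
    alpha = key.astype(np.float32)[..., None] / 255.0
    out = (fill.astype(np.float32) * alpha
           + background.astype(np.float32) * (1.0 - alpha))
    return out.astype(np.uint8)
```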
Through the robotic crane's control interface we design motion trajectories, and with real-time positional information from the camera and the AR system's virtual rendering engine we create various 3D effects in the live output. The tracking system accurately reads the robotic crane's motion trajectory and camera-lens changes for real-time tracking. This allows virtual information to be integrated with, and interact with, the real studio scene, producing a seamless, interactive on-screen experience in which the virtual and real worlds complement each other.
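As an illustration of how a designed trajectory can be turned into per-frame poses that the crane and the virtual camera follow together, the sketch below interpolates operator-set waypoints with an ease-in/ease-out timing curve. The pose representation, segment durations, and frame rate are assumptions for illustration, not the control interface's actual data format.

```python
import numpy as np

def smoothstep(u):
    """Ease-in/ease-out timing so each segment starts and stops without jerk."""
    return u * u * (3.0 - 2.0 * u)

def sample_trajectory(waypoints, durations, fps=25.0):
    """Sample a multi-point crane trajectory at video frame rate.

    waypoints: list of (x, y, z, pan, tilt, roll) poses set by the operator.
    durations: seconds allotted to each segment between consecutive waypoints
               (len(durations) == len(waypoints) - 1).
    Returns one interpolated pose per video frame.
    """
    waypoints = np.asarray(waypoints, dtype=float)
    frames = []
    for i, seconds in enumerate(durations):
        n = max(1, int(round(seconds * fps)))
        for k in range(n):
            u = smoothstep(k / n)
            frames.append((1.0 - u) * waypoints[i] + u * waypoints[i + 1])
    frames.append(waypoints[-1])
    return np.asarray(frames)
```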