Building a Cloud-to-MOS Synchronization Engine for Modern Rundown Systems

As newsrooms transition to cloud-based tools, one of the most technically challenging tasks is connecting cloud editorial systems to on-prem broadcast hardware. Video servers, graphics engines, prompters, and automation devices all live inside protected studio networks. They are not directly accessible from the cloud. A cloud-to-MOS synchronization engine solves this problem.

A typical synchronization engine includes several components:

  • a cloud host
  • a local MOS gateway
  • a device registry
  • a message validation layer
  • a persistent connection manager
  • a message ordering buffer
  • a reconnection supervisor
  • an event dispatcher

A cloud-based rundown system such as Falcon Rundown sends changes through a secure outbound WebSocket to the gateway. Because the gateway initiates the connection from inside the studio network, no inbound firewall ports need to be exposed. The gateway maintains a persistent session with the cloud system.
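The outbound-only session can be sketched as a reconnect loop with exponential backoff. Everything below is illustrative: the endpoint URL and timing constants are assumptions, not part of any real Falcon Rundown API.

```python
import random

# Illustrative settings -- the URL and timings are assumptions, not a real API.
CLOUD_WS_URL = "wss://cloud.example.com/mos-gateway"  # gateway dials OUT; no inbound port

def backoff_schedule(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Delay before reconnect attempt `attempt`, with jitter to avoid
    synchronized reconnect storms across many gateways."""
    delay = min(cap, base * (2 ** attempt))
    return delay * (0.5 + random.random() / 2)  # jitter into [0.5x, 1.0x)
```

A production gateway would wrap a real WebSocket client (for example, the `websockets` library) in a loop that sleeps `backoff_schedule(n)` seconds between connection attempts.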

When the producer changes a story, the cloud system generates a structured event. This event includes:

  • story ID
  • rundown ID
  • action (create, replace, move, delete)
  • fields updated
  • metadata
  • MOS object references
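
A minimal shape for such an event might look like the following; the field names are assumptions for illustration, not Falcon Rundown's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class RundownEvent:
    """Illustrative cloud-side change event; field names are assumed."""
    story_id: str
    rundown_id: str
    action: str                                    # "create" | "replace" | "move" | "delete"
    fields: dict = field(default_factory=dict)     # updated editorial fields
    metadata: dict = field(default_factory=dict)
    mos_refs: list = field(default_factory=list)   # referenced MOS object IDs

ev = RundownEvent("story-12", "rd-3", "replace", fields={"slug": "WEATHER"})
```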

The synchronization engine transforms this event into one or more MOS messages. These are validated for compliance before being queued for delivery.
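The transformation step can be sketched as a lookup from event actions to MOS story messages. The mapping below is illustrative; a real engine would follow the MOS Protocol specification and the profile negotiated with each device.

```python
import xml.etree.ElementTree as ET

# Illustrative action-to-message mapping; a real implementation would follow
# the MOS Protocol spec and the device's supported profile.
ACTION_TO_MOS = {
    "create": "roStoryInsert",
    "replace": "roStoryReplace",
    "move": "roStoryMove",
    "delete": "roStoryDelete",
}

def to_mos_xml(action: str, rundown_id: str, story_id: str) -> str:
    """Build a minimal MOS message for the given event action."""
    mos = ET.Element("mos")
    msg = ET.SubElement(mos, ACTION_TO_MOS[action])
    ET.SubElement(msg, "roID").text = rundown_id
    ET.SubElement(msg, "storyID").text = story_id
    return ET.tostring(mos, encoding="unicode")

# to_mos_xml("replace", "rd-3", "story-12") yields:
# <mos><roStoryReplace><roID>rd-3</roID><storyID>story-12</storyID></roStoryReplace></mos>
```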

MOS requires deterministic ordering. To achieve this, the synchronization engine maintains an internal state machine. It records which MOS messages have been sent, which devices acknowledged them, and which messages must be queued until a parent object is delivered.

For example:

  • A graphics MOS object cannot be delivered before a rundown is created.
  • A story cannot be moved until its creation message has been acknowledged.
  • A deletion cannot occur until the device acknowledges the previous change.

This logic prevents devices from entering undefined states.
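The dependency rules above can be sketched as a small ordering buffer that defers messages until their parent object is acknowledged. This is a simplified model; a real engine would also track per-device state, sequence numbers, and timeouts.

```python
from collections import defaultdict

class OrderingBuffer:
    """Defers messages whose parent object has not yet been acknowledged."""

    def __init__(self):
        self.acked = set()                 # object IDs acknowledged by the device
        self.waiting = defaultdict(list)   # parent ID -> deferred messages

    def submit(self, msg, parent=None):
        """Return messages that may be sent now, deferring the rest."""
        if parent is not None and parent not in self.acked:
            self.waiting[parent].append(msg)
            return []
        return [msg]

    def ack(self, object_id):
        """Record an ACK and release any messages waiting on that object."""
        self.acked.add(object_id)
        return self.waiting.pop(object_id, [])

buf = OrderingBuffer()
buf.submit({"type": "roStoryInsert", "id": "story-1"}, parent="rd-3")  # deferred
ready = buf.ack("rd-3")  # rundown acknowledged -> story message released
```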

Another key challenge is failover and reconnection. MOS devices may disconnect unpredictably. A cloud-to-MOS synchronization engine must buffer outgoing messages until devices reconnect. It must also support replaying messages to rebuild device state.
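The buffer-and-replay behaviour can be sketched as follows. This is a minimal in-memory model; a production gateway would also persist the buffer to disk so a gateway restart does not lose state.

```python
class ReplayBuffer:
    """Buffers outgoing MOS messages while a device is offline and replays
    them, in order, when it reconnects."""

    def __init__(self):
        self.pending = []
        self.connected = False

    def send(self, msg, transport):
        if self.connected:
            transport.append(msg)  # stand-in for a real socket write
        else:
            self.pending.append(msg)

    def on_reconnect(self, transport):
        self.connected = True
        while self.pending:
            transport.append(self.pending.pop(0))  # replay in FIFO order
```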

The engine must also detect conflicts. Suppose a producer deletes a story at the same moment a graphics operator modifies an attached MOS object. The system must decide how to reconcile these updates and send MOS messages in a state-consistent manner.
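One possible reconciliation policy is "delete wins": a story deletion supersedes concurrent edits to objects attached to that story. This policy is an assumption for illustration; other engines may prefer last-write-wins or operator confirmation.

```python
def reconcile(events):
    """Apply a delete-wins policy: drop edits targeting a story that a
    concurrent event deletes, keeping the ordering otherwise intact."""
    deleted = {e["story_id"] for e in events if e["action"] == "delete"}
    out = []
    for e in events:
        if e["action"] != "delete" and e["story_id"] in deleted:
            continue  # edit lost the conflict with a concurrent delete
        out.append(e)
    return out

events = [
    {"story_id": "s1", "action": "replace"},  # graphics operator's edit
    {"story_id": "s1", "action": "delete"},   # producer's deletion
]
# reconcile(events) keeps only the delete
```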

Latency management is another technical challenge. MOS devices expect messages at local network speeds. Cloud systems cannot provide sub-millisecond response times, so the gateway acts as a smoothing layer. It accepts bursts of updates and outputs them to devices at a steady rate.
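The smoothing behaviour can be sketched as a pacing function that spreads a burst of arrivals over a steady release schedule. The 50 ms spacing below is an arbitrary example, not a MOS requirement.

```python
def pace(arrival_ms, min_interval_ms=50):
    """Given message arrival times (milliseconds), return the times at which
    the gateway releases them to the device, enforcing a minimum spacing."""
    release = []
    next_slot = 0
    for t in arrival_ms:
        slot = max(t, next_slot)
        release.append(slot)
        next_slot = slot + min_interval_ms
    return release

# A burst of five messages at t=0 is released at 0, 50, 100, 150, 200 ms.
```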

Security is a major concern. Cloud connections must be authenticated and encrypted. The gateway must ensure that only legitimate servers can issue MOS updates. Some broadcast facilities require the gateway to run entirely offline except for its outbound connection.

Another layer involves object storage. Many MOS messages contain references to external media. The gateway may need to fetch templates, thumbnails, or metadata from the cloud and store them locally for devices to consume.
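A minimal sketch of such a local cache, keyed by a hash of the asset URL; the cache location and the `fetch` callable are hypothetical stand-ins for configured storage and a real HTTP download.

```python
import hashlib
import pathlib
import tempfile

# Illustrative cache location; a real gateway would use configured storage.
CACHE_DIR = pathlib.Path(tempfile.mkdtemp(prefix="mos-gateway-cache-"))

def cache_asset(url: str, fetch) -> pathlib.Path:
    """Store a cloud asset (template, thumbnail, metadata) locally so devices
    can read it at LAN speed. `fetch` stands in for an HTTP download."""
    path = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
    if not path.exists():
        path.write_bytes(fetch(url))  # download only on cache miss
    return path
```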

A well-designed synchronization engine logs all MOS interactions. This allows engineers to diagnose issues. If a video clip fails to load or a graphic fails to trigger, the logs reveal exactly which messages were exchanged.
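A simple way to make such logs machine-searchable is to emit one structured line per exchange; the field names here are an illustrative choice, not a standard.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mos.gateway")

def log_exchange(direction: str, device: str, message: str) -> str:
    """Record one MOS exchange ("tx" or "rx") as a structured log line so a
    failed graphic or clip can be traced to the exact messages involved."""
    line = json.dumps({"dir": direction, "device": device, "mos": message})
    log.info(line)
    return line
```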

Finally, the system must be vendor-compatible. MOS implementations vary between Vizrt, Ross XPression, CasparCG, Chyron, Octopus, and other vendors. A cloud-to-MOS engine must include adapters or normalization layers that convert vendor-specific quirks into a unified internal representation.
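The adapter layer can be sketched as one class per vendor, each converting a vendor-specific payload into a single internal shape. The vendor field names below are invented for illustration and do not reflect the vendors' real schemas.

```python
class Adapter:
    """Base class: convert a vendor payload to the unified internal shape."""
    def normalize(self, payload: dict) -> dict:
        raise NotImplementedError

class VizrtAdapter(Adapter):
    def normalize(self, payload):
        return {"template": payload["viz_template"], "fields": payload["data"]}

class XpressionAdapter(Adapter):
    def normalize(self, payload):
        return {"template": payload["takeitem"], "fields": payload["tab_fields"]}

ADAPTERS = {"vizrt": VizrtAdapter(), "xpression": XpressionAdapter()}

def normalize(vendor: str, payload: dict) -> dict:
    """Dispatch to the adapter registered for this vendor."""
    return ADAPTERS[vendor].normalize(payload)
```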

Falcon Rundown’s MOS integration layer is designed around these principles. It uses distributed event systems, ordering buffers, vendor adapters, and secure transport channels to ensure that cloud editorial workflows remain fully compatible with MOS-based studio hardware.

A cloud-to-MOS synchronization engine is not a simple connector. It is a deeply engineered translation layer that lets modern cloud-native rundown software drive legacy broadcast equipment reliably for years to come.