Persistent TCP Publishing Architecture
Persistent TCP (PTCP) is Mango's built-in mechanism for reliable, guaranteed-delivery data synchronization between Mango instances. It is designed for distributed architectures where edge Mango instances collect data from local devices and publish it to a central Mango instance for aggregation, analytics, alarming, and visualization. Unlike general-purpose protocols such as MQTT or HTTP, PTCP is purpose-built for Mango-to-Mango communication and includes features such as persistent disk-backed queuing, automatic point synchronization, and permission propagation.
This page provides an architectural overview of the PTCP system. For detailed data source and publisher configuration, see the Persistent TCP Data Source page.
Overview
In a distributed Mango deployment, the PTCP system consists of two complementary components:
- PTCP Publisher (on the edge instance) -- monitors local data points and queues their values for transmission to the central instance. Values are persisted to disk so that no data is lost during network outages.
- PTCP Data Source (on the central instance) -- receives incoming data from the edge publisher and stores the values as local data points.
Together, these components form a reliable data pipeline that survives network interruptions, server restarts, and intermittent connectivity -- conditions that are common in field-deployed SCADA and IoT architectures.
The two generations
The Persistent TCP module includes two implementations of this architecture:
| Generation | Transport | Encryption | Status |
|---|---|---|---|
| Legacy PTCP | Custom binary protocol over raw TCP | Optional (TLS wrapping) | Supported, not recommended for new installations |
| gRPC (recommended) | gRPC over HTTP/2 | Built-in TLS with mutual authentication | Recommended for all new installations |
Both generations share the same architectural concepts (persistent queue, point synchronization, reliable delivery), but gRPC provides significant improvements in security, performance, and maintainability. New deployments should always use the gRPC transport.
How PTCP works
Data flow
The end-to-end data flow through the PTCP system follows these steps:
- Point value generated -- An edge Mango data source (e.g., Modbus, BACnet, MQTT) reads a value from a local device.
- Value queued -- The PTCP publisher receives the new value (based on the configured update event) and adds it to the persistent queue on disk.
- Value transmitted -- The publisher transmits queued values to the central Mango instance over the network. Values are sent in order, and the publisher waits for acknowledgment before removing them from the queue.
- Value received -- The PTCP data source on the central instance receives the values and stores them as local data points.
- Acknowledgment sent -- The central instance acknowledges receipt, and the edge instance removes the acknowledged values from its queue.
If the network connection is lost at any point, the publisher continues to queue values on disk. When the connection is restored, the publisher transmits all queued values in chronological order, ensuring that no data is lost and the central instance receives a complete, gap-free history.
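The acknowledgment-driven flow above can be sketched in a few lines. This is an illustrative simulation, not Mango's actual implementation: `SketchPublisher` and `send` are hypothetical names, and a simple in-memory deque stands in for the disk-backed queue. The key invariant is that a value is removed only after the receiver acknowledges it, so an outage leaves everything queued for in-order delivery on reconnection.

```python
from collections import deque

class SketchPublisher:
    """Illustrative publisher: remove a value only after it is acknowledged."""

    def __init__(self, send):
        self.queue = deque()  # stands in for the disk-backed queue
        self.send = send      # callable; returns True when the central side acks

    def publish(self, value):
        self.queue.append(value)  # step 2: queue before any network I/O
        self.flush()

    def flush(self):
        # Steps 3-5: transmit in order, remove only on acknowledgment.
        while self.queue:
            if not self.send(self.queue[0]):
                return  # network down: leave the value queued, retry later
            self.queue.popleft()

received = []
link_up = False

def send(value):
    if link_up:
        received.append(value)
        return True  # acknowledgment from the central instance
    return False     # simulated network outage

pub = SketchPublisher(send)
for v in (1, 2, 3):
    pub.publish(v)   # outage: everything stays queued
link_up = True
pub.flush()          # reconnection: gap-free, in-order delivery
print(received)      # -> [1, 2, 3]
```

Because removal happens only on acknowledgment, a crash between transmit and ack can at worst cause a duplicate, never a gap.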
Persistent queue
The persistent queue is the core of PTCP's reliability. Unlike in-memory queues that are lost on restart, the PTCP queue is backed by disk storage:
- Disk-backed -- Values are written to files on the edge instance's disk before being transmitted. This means data survives application restarts, host reboots, and power failures.
- Ordered delivery -- Values are always transmitted in the order they were generated, preserving temporal integrity.
- Acknowledgment-based removal -- Values are only removed from the queue after the central instance confirms receipt.
- Configurable sizing -- Queue size is limited by available disk space. Administrators can configure warning thresholds and maximum sizes.
Queue sizing and disk usage
The disk space required by the queue depends on the number of points, the update frequency, and the duration of network outages:
| Scenario | Approximate Disk Usage |
|---|---|
| 100 points, 1 update/minute, 1 hour outage | ~5 MB |
| 100 points, 1 update/minute, 24 hour outage | ~120 MB |
| 1,000 points, 1 update/second, 1 hour outage | ~3 GB |
| 1,000 points, 1 update/second, 24 hour outage | ~72 GB |
Plan disk capacity based on the worst-case outage duration you need to survive without data loss. Monitor queue size using Mango's built-in alarms (the publisher raises a warning alarm when the queue exceeds the configured warning threshold).
Point synchronization
One of PTCP's most powerful features is automatic point synchronization between the edge and central instances. When configured, the publisher automatically creates and configures matching data points on the central instance:
- Automatic point creation -- When a new point is added to the edge publisher, a corresponding point is automatically created on the central PTCP data source.
- Metadata propagation -- Point names, descriptions, engineering units, and other metadata are synchronized from edge to center.
- Configuration updates -- Changes to point configuration on the edge are propagated to the central instance on reconnection.
This eliminates the need to manually configure data points on both the edge and central instances, reducing setup time and the risk of configuration mismatches in large deployments.
Permission synchronization
The gRPC transport supports synchronizing user permissions from the edge instance to the central instance. This ensures that access control policies defined at the edge are respected when users access the aggregated data on the central instance. Permission synchronization is optional and can be disabled when the central instance has its own permission model.
Legacy PTCP vs. gRPC transport
Legacy PTCP
The original PTCP implementation uses a custom binary protocol over raw TCP sockets:
- Transport -- Raw TCP with an application-layer binary protocol.
- Security -- Optional TLS wrapping for encryption. No mutual authentication by default.
- Connection -- Single persistent TCP connection. Reconnects automatically on failure.
- Compression -- No built-in compression.
- Multiplexing -- Single stream per connection; multiple publishers require multiple connections.
Legacy PTCP is fully functional and will continue to be supported for existing deployments. However, it lacks the security and performance features of the gRPC transport.
gRPC transport (recommended)
The gRPC replacement provides significant improvements:
- Transport -- gRPC over HTTP/2, which provides built-in multiplexing, flow control, and header compression.
- Security -- TLS encryption is always enabled. Supports mutual TLS (mTLS) for strong mutual authentication between edge and central instances.
- Connection -- HTTP/2 multiplexes multiple streams over a single connection. Automatic reconnection with exponential backoff.
- Compression -- Built-in gzip compression for reduced bandwidth usage, especially beneficial over WAN links.
- Protocol Buffers -- Uses Protocol Buffers for efficient binary serialization, reducing payload sizes compared to the legacy binary format.
- Health checking -- Built-in gRPC health checking for monitoring connection status.
Comparison summary
| Feature | Legacy PTCP | gRPC |
|---|---|---|
| Encryption | Optional TLS | Always TLS |
| Mutual authentication | Not supported | mTLS supported |
| Compression | None | gzip |
| Multiplexing | No | Yes (HTTP/2) |
| Serialization | Custom binary | Protocol Buffers |
| Connection monitoring | Custom keepalive | gRPC health check |
| Firewall friendliness | Custom port | Standard HTTPS port |
| New installations | Not recommended | Recommended |
Migrating from legacy PTCP to gRPC
Existing deployments can migrate from legacy PTCP to gRPC:
- Install or update the Persistent TCP module on both edge and central instances.
- Create a new gRPC publisher on the edge instance, configuring it with the same points as the legacy publisher.
- Create a new gRPC data source on the central instance.
- Enable the gRPC publisher and verify data flow.
- Once confirmed, disable the legacy PTCP publisher and data source.
- Optionally remove the legacy publisher/data source after verifying that all historical data has been transferred.
Both legacy and gRPC can run simultaneously during the migration period. This allows for a gradual, zero-downtime migration.
When to use PTCP vs. other publishers
Mango offers several publishers for sending data to external systems. Here is guidance on when to use each one:
| Publisher | Best For | Limitations |
|---|---|---|
| PTCP / gRPC | Mango-to-Mango synchronization with guaranteed delivery, automatic point sync, and permission propagation. | Only works between Mango instances. Requires the Persistent TCP module on both ends. |
| MQTT Sparkplug | Publishing to MQTT brokers for consumption by any MQTT client. Standard IoT interoperability. | No persistent disk queue -- relies on MQTT QoS and broker persistence. No automatic point sync. |
| HTTP Sender | Sending data to any HTTP endpoint (REST APIs, webhooks, cloud services). | No persistent disk queue. Limited retry logic. No automatic point sync. |
| gRPC Publisher (standalone) | When you need gRPC transport without the full PTCP feature set. | Does not include point synchronization or permission propagation unless using the PTCP module. |
Choose PTCP/gRPC when:
- You are building a distributed Mango architecture (edge + central).
- You need guaranteed delivery with no data loss during network outages.
- You want automatic point synchronization between instances.
- You need permission propagation from edge to central.
Choose MQTT or HTTP when:
- The destination is not a Mango instance.
- You need interoperability with third-party systems.
- The data is non-critical or can tolerate occasional loss.
Architecture patterns
Single edge, single central
The simplest deployment: one edge Mango collects data and publishes to one central Mango.
[Edge Mango] --gRPC--> [Central Mango]
      |                      |
   Modbus                Dashboard
   BACnet                Alarms
   MQTT                  Reports
Multiple edge, single central
Multiple field sites publish to a central instance for enterprise-level aggregation.
[Edge Site A] --gRPC-->
[Edge Site B] --gRPC--> [Central Mango]
[Edge Site C] --gRPC-->      |
                         Dashboard
                         Analytics
                         Enterprise Reports
Cascading (edge, regional, central)
For very large deployments, regional Mango instances aggregate data from local edges, then publish to a central instance.
[Edge A1] --gRPC-->
[Edge A2] --gRPC--> [Regional A] --gRPC-->
                                           [Central Mango]
[Edge B1] --gRPC-->                              |
[Edge B2] --gRPC--> [Regional B] --gRPC-->   Enterprise
Each hop in the cascade uses its own PTCP publisher/data source pair, with independent persistent queues at each level.
Encryption and security
TLS configuration (gRPC)
The gRPC transport uses TLS for all connections. By default, the central instance presents a server certificate that the edge instance validates. For maximum security, enable mutual TLS (mTLS) so that both sides authenticate each other:
- Generate or obtain TLS certificates for both the edge and central instances.
- Configure the central gRPC data source with its server certificate and private key.
- Configure the central instance to require client certificates.
- Configure the edge gRPC publisher with its client certificate and private key, plus the CA certificate that signed the central instance's server certificate.
With mTLS enabled, only authorized edge instances can publish to the central instance, and the edge instance verifies it is connecting to the legitimate central instance.
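The certificate relationship can be sketched at the TLS layer using Python's standard library. This is purely illustrative: in Mango the certificates are configured on the gRPC data source and publisher pages, not in code, and the file paths below are hypothetical placeholders.

```python
import ssl

# Central (server) side: present a server certificate AND require one back.
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject edges without a client cert
# server_ctx.load_cert_chain("central-server.pem", "central-server.key")  # hypothetical paths
# server_ctx.load_verify_locations("edge-ca.pem")  # CA that signed edge client certs

# Edge (client) side: present a client certificate and verify the server.
# PROTOCOL_TLS_CLIENT enables server-certificate verification by default.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
# client_ctx.load_cert_chain("edge-client.pem", "edge-client.key")  # hypothetical paths
# client_ctx.load_verify_locations("central-ca.pem")  # CA for the central server cert
```

Each side trusts the CA that signed the *other* side's certificate, which is what makes the authentication mutual.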
Legacy PTCP TLS
Legacy PTCP supports optional TLS wrapping. When enabled, the raw TCP connection is wrapped in a TLS layer. This provides encryption but does not support mutual authentication. For deployments requiring strong authentication, migrate to the gRPC transport.
Related pages
- Persistent TCP Data Source -- Detailed data source and publisher configuration
- Publishers Overview -- Common publisher configuration and queue management
- gRPC Publisher -- gRPC publisher configuration details
- MQTT Sparkplug Publisher -- Alternative for MQTT-based publishing
- HTTP Sender Publisher -- Alternative for HTTP-based publishing