Mirror Networking: Mastering KCP Transport Settings

by Alex Johnson

Hey there, fellow game developers! It's great to connect with you all here, even if I'm a bit old-fashioned and not on Discord. Today, we're diving deep into the fascinating world of Mirror Networking, specifically focusing on configuration questions related to its KCP transport. If you're looking to fine-tune your game's network performance and understand the inner workings of how data travels, you've come to the right place.

Understanding Network Manager Send Rate and KCP Interval

Let's start with a fundamental question that often pops up: if I limit my Network Manager send rate to 30 on a headless server, will each network loop take 33 ms, making a KCP interval lower than 33 ms useless? This is a fantastic question that gets to the heart of network synchronization. When you set your NetworkManager.sendRate to 30, you're essentially telling the server to try to send network updates 30 times per second. In a standard game loop, this translates to roughly one update every 33.33 milliseconds (1000 ms / 30 updates).
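To make that arithmetic concrete, here is a tiny Python sketch (illustrative only — Mirror itself is C#) that converts a send rate into the interval between network updates:

```python
def send_interval_ms(send_rate: int) -> float:
    """Interval in milliseconds between network updates
    for a given send rate (updates per second)."""
    return 1000.0 / send_rate

# A send rate of 30 means one update roughly every 33.33 ms.
print(f"{send_interval_ms(30):.2f} ms")  # 33.33 ms
```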

Now, consider the KCP transport. KCP is a reliable ARQ protocol that operates on top of UDP, adding features like packet ordering, retransmission, and congestion control. The KCP interval dictates how often KCP itself checks for acknowledgments, resends lost packets, and generally manages the data stream.

If your game loop is processing updates at 33 ms intervals, and KCP is also trying to send and process data within that same loop, setting the KCP interval lower than 33 ms might seem redundant at first glance. The idea is that KCP can't realistically send more updates than the game loop is generating. However, it's not quite that simple. While the game loop dictates the maximum rate of new data generation, KCP's interval governs the timing of its internal processes: sending keep-alives, checking for acknowledgments, and deciding whether to retransmit packets that haven't been acknowledged.

If KCP's interval is too high (e.g., greater than 33 ms), it might delay critical acknowledgments or retransmissions, potentially leading to perceived lag or packet loss from the client's perspective, even if the game server is sending data frequently. Therefore, while there's a relationship, a KCP interval slightly lower than the inverse of the send rate can still be beneficial for ensuring timely KCP-level operations, especially in the face of network jitter or packet loss. It's a delicate balance between the game's update rate and KCP's internal clock.
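A simplified model can illustrate why a coarse KCP interval delays loss handling. The sketch below (Python for illustration; a deliberately simplified model, not Mirror's actual scheduler) assumes KCP only processes retransmissions on its tick boundaries:

```python
import math

def next_kcp_check_ms(loss_time_ms: float, kcp_interval_ms: float) -> float:
    """Earliest KCP update tick at or after the moment a loss could be
    detected, assuming KCP ticks at t = 0, interval, 2*interval, ...
    (a simplified model of interval-based processing)."""
    ticks = math.ceil(loss_time_ms / kcp_interval_ms)
    return ticks * kcp_interval_ms

# With a 10 ms KCP interval, a loss detectable at t = 35 ms is handled
# at the 40 ms tick; with a 50 ms interval it waits until 50 ms.
print(next_kcp_check_ms(35, 10))  # 40
print(next_kcp_check_ms(35, 50))  # 50
```

Under this model, shrinking the interval below the frame time doesn't create more data, but it does shrink the worst-case delay before an acknowledgment or retransmission is acted on.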

The Nuances of KCP's Retransmission Timeout (RTO)

Continuing our exploration of KCP, let's tackle the second part of your question: even with fast mode enabled, the max RTO of a segment before a deadlink will take 9 seconds (calculated using the formula with nodelay enabled). This points to a key aspect of KCP's reliability mechanism: the Retransmission Timeout (RTO). The RTO is the amount of time KCP waits for an acknowledgment before assuming a packet has been lost and needs to be resent. The formula you've referenced from KCP.cs is crucial here. When nodelay is enabled (which is generally recommended for low-latency applications), the rx_rto (the current retransmission timeout, derived from measured round-trip times) is clamped between 30 ms and 60,000 ms. The calculation segment.rto += step / 2 is part of KCP's adaptive retransmission mechanism: it dynamically adjusts the RTO based on network conditions.
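Here is a Python paraphrase of that backoff logic as I understand it from the description above — the constant names and clamping details are illustrative assumptions, not a verbatim port of KCP.cs:

```python
RTO_MIN_NODELAY = 30   # assumed clamp floor with nodelay enabled (ms)
RTO_MAX = 60000        # assumed clamp ceiling (ms)

def backoff_rto(segment_rto: int, rx_rto: int, nodelay: int) -> int:
    """Grow a segment's RTO after a retransmission timeout, following the
    pattern described above: normal mode roughly doubles the timeout,
    while nodelay modes add step / 2 (about 1.5x growth)."""
    if nodelay == 0:
        segment_rto += max(segment_rto, rx_rto)   # roughly doubles
    else:
        step = segment_rto if nodelay < 2 else rx_rto
        segment_rto += step // 2                  # grows by ~1.5x
    return min(max(segment_rto, RTO_MIN_NODELAY), RTO_MAX)
```

The practical consequence: with nodelay enabled, each successive timeout waits about 50% longer than the last rather than twice as long, which keeps retransmissions more aggressive on lossy links.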

However, your observation about the 9-second maximum RTO before a dead link is significant. Let's break down why this might be the case and its implications. The formula shows that segment.rto increases incrementally. If nodelay is set to 1 (meaning nodelay mode is enabled), each timeout adds half of the current RTO, so the timeout grows by roughly a factor of 1.5 per retransmission rather than doubling as it does in normal mode.
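To see how those increments add up to a total wait on the order of seconds, here's a small sketch that accumulates successive timeouts under the 1.5x nodelay growth rule. The initial RTO and retransmission count passed in are assumptions for illustration, not Mirror's actual defaults:

```python
def time_until_dead_link(initial_rto_ms: float, max_retransmits: int,
                         rto_cap_ms: float = 60000.0) -> float:
    """Total milliseconds spent waiting across successive retransmission
    timeouts, with the RTO growing by 1.5x each time (capped)."""
    rto = initial_rto_ms
    total = 0.0
    for _ in range(max_retransmits):
        total += rto
        rto = min(rto * 1.5, rto_cap_ms)
    return total

# Example: starting from a 100 ms RTO, three consecutive timeouts
# wait 100 + 150 + 225 = 475 ms in total.
print(time_until_dead_link(100, 3))  # 475.0
```

Plugging in your own starting RTO and KCP's dead-link retransmission limit reproduces the kind of multi-second ceiling described above.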