# Rethinking the Routing Memory Pool

Date: 2025-05-21

## Goals and Expectations

To finish the RX and TX queues.

## Results

Nope. I'm halfway through the TX queue and I'm going to rework the
entire thing.

## Thought Train

Separating the TX queue per interface is amazing. But making it a
multi-headed queue is a disaster: in this case it doesn't simplify the
logic, while taking away one of the benefits of a shared memory pool.

Allow me to walk through this: if we had a symmetric design, where
all interfaces send and receive at the same speed, in sync, then this
would not have been a problem. But in the real world, the interfaces
won't guarantee that. Which means that for the multi-headed queue, I'd
have to implement a separate queue tracking which packets are
complete - avoiding that was one of the reasons I chose to separate
the queues in the first place. It meant tracking, per interface, which
packets are complete, which is as complex as a shared memory pool. And
the shared memory pool would have handled bursts better. So why not
just implement the shared memory pool, let each interface keep track
of its complete packets, and let the central routing logic handle a
bi-directional multi-headed queue where each interface gets a read and
a write pointer?
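
To make the pointer idea concrete, here's a rough Python sketch of a
shared pool with per-interface send queues (Python purely as
pseudocode; all the names - `SharedPool`, `route`, `next_for` - are
made up for illustration, not the real design):

```python
class SharedPool:
    """One pool of packet slots shared by all interfaces.

    Each interface gets its own read position into its own list of
    completed packets, so a slow interface never stalls a fast one,
    while storage stays shared and bursts can borrow free slots.
    """

    def __init__(self, n_slots, n_interfaces):
        self.slots = [None] * n_slots          # shared packet storage
        self.free = list(range(n_slots))       # free-slot indices
        # per-interface queue of slot indices that are ready to transmit
        self.to_send = [[] for _ in range(n_interfaces)]

    def alloc(self):
        """Grab a free slot, or None if the pool is exhausted."""
        return self.free.pop() if self.free else None

    def route(self, slot, dst_iface):
        """A completed packet in `slot` is handed to `dst_iface`'s queue."""
        self.to_send[dst_iface].append(slot)

    def next_for(self, iface):
        """Advance `iface`'s read position; free the slot once consumed."""
        if not self.to_send[iface]:
            return None
        slot = self.to_send[iface].pop(0)
        pkt, self.slots[slot] = self.slots[slot], None
        self.free.append(slot)                 # slot returns to the pool
        return pkt
```

The key property is that freeing happens per consumer: interface 1
draining its queue returns slots to the shared `free` list regardless
of how far behind interface 0 is.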
## Reworking details

### Rework the central logic

The `hub` would now keep track of the packet queues:

1. When there's an incoming byte from an interface, throw it in the
   appropriate place: if it's a new packet, get a new spot for it or
   drop it; if it's part of an existing packet (in memory), append it
   to that. Benefit: no more buffering packets inside the interfaces.
2. When a packet is complete (i.e. it has reached the pre-agreed
   length), parse its header, figure out where to send it, and send a
   message to that interface telling it where the packet is. No more
   buffering the header in the interfaces to tell the hub where to
   send the packet.

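The two steps above could look something like this behaviorally
(Python as pseudocode; `on_byte`, the slot bookkeeping, and the
header-in-byte-0 layout are my stand-ins, not the actual design):

```python
class Hub:
    def __init__(self, pool_size=8, pkt_len=64):
        self.pkt_len = pkt_len
        self.pool = {}                     # slot id -> bytes received so far
        self.free = list(range(pool_size))
        self.current = {}                  # interface -> slot of its open packet
        self.ready = []                    # (dst_iface, slot) notifications

    def on_byte(self, iface, byte):
        """Step 1: place an incoming byte, or drop it if the pool is full."""
        slot = self.current.get(iface)
        if slot is None:                   # new packet: try to get a spot
            if not self.free:
                return False               # pool full -> drop
            slot = self.free.pop()
            self.pool[slot] = bytearray()
            self.current[iface] = slot
        self.pool[slot].append(byte)
        # Step 2: on completion, parse the header and notify the target.
        if len(self.pool[slot]) == self.pkt_len:
            del self.current[iface]
            dst = self.pool[slot][0]       # assumed: first byte = destination
            self.ready.append((dst, slot))
        return True
```

Note the interfaces never buffer anything here: partial packets live
in the hub's pool from the first byte on.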
In addition to that, let the command `000000` always be a command to
the hub, with the packet length in the header; the rest of the packet
is all just info for the hub (who knows why the hub needs at least 63
bytes of data for a command). This means there's no need for an
`rx_cmd` section - just let the hub store the entire packet and parse
it later.

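As a sketch of that dispatch (the layout here - a 6-bit field in byte
0, everything else opaque - is my assumption for illustration, not
the actual wire format):

```python
HUB_CMD = 0b000000  # the all-zeros command is reserved for the hub

def dispatch(packet):
    """Decide what to do with a complete, stored packet.

    Assumed layout: low 6 bits of byte 0 carry the command/destination.
    A hub command is kept whole and parsed later; anything else is
    forwarded to the interface it names.
    """
    code = packet[0] & 0b111111
    if code == HUB_CMD:
        return ("hub_command", packet)   # hub stores the entire packet
    return ("forward", code)
```

The point is that there is no separate `rx_cmd` path: a hub command is
just a packet whose routing decision happens to be "keep it".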
#### IMPORTANT NOTE

There may be a need for reserved memory for to-the-hub commands;
otherwise, when the packet queue is full, the hub would drop them.

#### More notes

The hub will now contain *ALL* the logic for congestion control. If
it's full, it toggles a bit to let the interfaces know, and they start
sending out congestion messages.

### Rework the interfaces

The interfaces will contain much less logic. Each one only knows the
following things:

1. Upon receiving a byte, if the hub has space, send it to the hub;
   otherwise send back a message telling the device to stop congesting
   the fabric.
2. When the hub tells it that another packet for that interface is
   ready, start sending it if it's not sending anything else, or add
   it to the to-send queue. **NOTE:** Congestion messages are
   top-priority; they're always the first packets to check for.

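Those two rules fit in a few lines (again Python as a hedged sketch;
`has_space`, `accept`, and the message names are placeholders I
invented, not the real signals):

```python
class Interface:
    def __init__(self, hub):
        self.hub = hub
        self.to_send = []        # slots the hub has told us are ready
        self.sending = None      # slot currently being transmitted

    def on_byte(self, byte):
        """Rule 1: forward to the hub if it has space, else push back."""
        if self.hub.has_space():
            self.hub.accept(byte)
            return "forwarded"
        return "stop_congesting"          # tell the device to back off

    def on_packet_ready(self, slot, is_congestion_msg=False):
        """Rule 2: send now if idle, else queue. Congestion messages
        jump the queue - they're always the first packet to check."""
        if self.sending is None:
            self.sending = slot
        elif is_congestion_msg:
            self.to_send.insert(0, slot)  # top priority
        else:
            self.to_send.append(slot)
```

Everything stateful about packets lives in the hub; the interface only
remembers which slot it's draining and which are queued.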
## Potential problem

If one flow is congesting the fabric, then the entire network would be
congested. However, there are congestion-control methods, and we can
always put an upper bound on the TX to-send queues.

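The upper bound is cheap to enforce; a sketch (the bound of 8 is an
arbitrary number for illustration):

```python
MAX_TX_QUEUE = 8  # arbitrary cap for the sketch

def try_enqueue(queue, slot):
    """Refuse new work instead of growing without bound, so one
    congested flow can't eat the whole pool."""
    if len(queue) >= MAX_TX_QUEUE:
        return False          # caller can drop or push back
    queue.append(slot)
    return True
```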
## Reflections

Good planning is still the way. Plan as you go. See the trade-offs.
Also, try things - trying makes both the plan and the project better.

It's good that I was able to catch this before implementing the
entire thing. And what I've already completed isn't in vain - the
logic is still there, what I learned from doing it is still there;
it's just been repurposed into something else, something more elegant.

## Next steps

Put the reworking into action.