WinpkFilter detecting missed traffic and internal queue size
Tagged: winpkfilter, drop, queue full
This topic has 3 replies, 2 voices, and was last updated 2 years, 3 months ago by Vadim Smirnov.
July 14, 2022 at 7:21 pm #12284
Hi,
I’ve been continuing to experiment with WinpkFilter in C++, and I was wondering if there’s any way to detect traffic that might have been missed (with the adapter in listen mode) if the driver’s queue were full, e.g. whether I can register an event to be notified when this situation occurs.
I also wanted to double-check one thing regarding the internal driver queue size. From what I found in another thread ( https://www.ntkernel.com/forums/topic/jumbo-frame-9000-bytes-length-frames/#post-9330 ), the driver has 1000 buffers for 1514-byte packets. Does this mean the queue will only ever store 1000 packets of up to 1514 bytes each, or does the number of packets in the queue vary depending on their size?
Kind regards,
Tomasz
July 15, 2022 at 8:42 am #12285
Hi,
The standard driver builds for Windows Vista and later pre-allocate (for performance reasons) 2048 packets of 1514 bytes each (9014 bytes for a jumbo-frame-enabled build). This is the upper limit for the packet queue. There is no special event to signal when the queue limit has been reached, but if you are reading packets from the driver, providing 2048 INTERMEDIATE_BUFFERs, and getting all 2048 packets returned, there is a good chance that some packets have passed by uncaptured in listen mode.
It is not a big problem to increase the internal driver packet pool and/or add an extra event (note that signaling an event comes at a cost) in a custom build of the driver. However, if you need to capture vast amounts of data from a high-speed network, you might want to consider using the experimental Fast I/O API instead. It allows up to 16 shared memory sections to be allocated to deliver packets from the kernel to user space instead of using the driver’s internal packet queue and the ReadPacket API.
Hope it helps!
August 10, 2022 at 11:48 am #12311
Hi again,
I’ve wanted to ask about this for a while now: would you say Fast I/O should be faster than the event-based approach?
Secondly, is there any way to figure out why ReadPacketsUnsorted fails (returns FALSE)? Without it, I can’t seem to read the traffic I’ve missed between setting and resetting the fast_io_header.read_in_progress_flag. I also don’t see ReadPacketsUnsorted being used in the fastio_packet_filter class from the C++ examples.
Also, slightly off-topic, but is there any way to get packet timestamps out of the captured traffic? I find that getting one from std::chrono for every piece of captured traffic hurts performance, and I was wondering if I’ve just missed it. So far I’ve moved it out so that the timestamp is generated once per batch of traffic, and I wonder how inaccurate that will be.
Kind regards,
Tomasz
PS: Just wanted to let you know that AddSecondaryFastIo is not in the documentation, but I saw how it’s used in the examples.
August 16, 2022 at 1:24 pm #12312
I wanted to ask about that for a while now, would you say Fast I/O should be faster than the event-based approach?
Fast I/O is not about being faster than the event-based approach. Its main purpose is to guarantee minimum latency, i.e. the interval between a packet’s arrival on the network adapter and the moment it begins to be processed.
Secondly, is there any way to figure out why ReadPacketsUnsorted fails (returns FALSE)?
Normally, it returns FALSE when there are no packets in the queue.
I also don’t see ReadPacketsUnsorted being used in the fastio_packet_filter class from the cpp examples.
It is not really necessary if you have a sufficient number of shared memory sections.
Also, slightly off-topic, but is there any way to get packet timestamps out of the captured traffic? I find that getting one from std::chrono for every piece of captured traffic hurts performance, and I was wondering if I’ve just missed it. So far I’ve moved it out so that the timestamp is generated once per batch of traffic, and I wonder how inaccurate that will be.
The timestamp is not implemented in kernel mode, but if it is needed, then adding it to a custom build is not a big deal.
PS: Just wanted to let you know that AddSecondaryFastIo is not in the documentation, but I saw how it’s used in the examples.
Thanks for pointing that out. Since this was an experimental API, I postponed documenting it until later, and then completely forgot about it.