This updates the gtp_psc flow item to use the net header
definition of GTP PSC, based on 3GPP TS 38.415 version g30.
Signed-off-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
The struct rte_flow_action was missing from DPDK API documentation.
Fixes: 3850cf0c8c ("ethdev: add tunnel encap/decap actions")
Cc: stable@dpdk.org
Signed-off-by: Jan Viktorin <viktorin@cesnet.cz>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Aman Deep Singh <aman.deep.singh@intel.com>
The Doxygen comments are placed before the related lines,
but the markers were /**< instead of /**.
The struct rte_flow_item_integrity did not appear in Doxygen output
because there was no general comment for the struct.
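For reference, the two marker styles document different lines; a
generic illustration (not the patched code itself):

    /** Documents the declaration that follows. */
    uint32_t level;

    uint32_t level; /**< Documents the declaration it follows. */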
Fixes: b10a421a1f ("ethdev: add packet integrity check flow rules")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Currently, the flow meter policy does not support multiple actions
per color; also the allowed action types per color are very limited.
In addition, the policy cannot be pre-defined.
Due to the growing flow action offload abilities, users may want
to apply a different variety of actions per color.
This new meter policy API enables that in the most common ethdev
way, using the rte_flow action definitions.
A list of rte_flow actions will be provided by the user per color
in order to create a meter policy.
In addition, the API requires the policy to be pre-defined before
meter creation, so that a single policy can be shared efficiently
by multiple meters.
A meter_policy_id field is added into struct rte_mtr_params,
so that the policy can be referenced during meter creation.
Packets can be colored using the new rte_flow_action_color,
as could be done by the old policy API.
Two common policy templates are added as macros in the header file.
The following API functions were added (a usage sketch follows
the lists below):
- rte_mtr_meter_policy_add
- rte_mtr_meter_policy_delete
- rte_mtr_meter_policy_update
- rte_mtr_meter_policy_validate
The following structs were changed:
- rte_mtr_params
- rte_mtr_capabilities
The following API function was deleted:
- rte_mtr_policer_actions_update
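As an illustration, a minimal sketch of the new flow: pre-define a
policy, then reference it from a meter. The helper name, port, and
ID values are hypothetical, and error handling is trimmed:

    #include <rte_flow.h>
    #include <rte_mtr.h>

    /* Drop red packets; a NULL action list means no action for a color. */
    static struct rte_flow_action red_actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_DROP },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    static int
    setup_meter(uint16_t port_id) /* hypothetical helper */
    {
        struct rte_mtr_meter_policy_params policy = {
            .actions[RTE_COLOR_GREEN] = NULL,
            .actions[RTE_COLOR_YELLOW] = NULL,
            .actions[RTE_COLOR_RED] = red_actions,
        };
        struct rte_mtr_error error;
        uint32_t policy_id = 1; /* hypothetical ID */

        /* The policy must exist before any meter referencing it. */
        if (rte_mtr_meter_policy_add(port_id, policy_id, &policy, &error))
            return -1;

        /* The meter picks up the policy via the new meter_policy_id. */
        struct rte_mtr_params params = {
            .meter_profile_id = 0, /* assumed created beforehand */
            .meter_policy_id = policy_id,
            .meter_enable = 1,
        };
        return rte_mtr_create(port_id, 1, &params, 0, &error);
    }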
To support this API, the following apps were changed:
app/test-flow-perf: clean meter policer
app/testpmd: clean meter policer
To support this API, the following drivers were changed:
net/softnic: support meter policy API
1. Cleans up the meter rte_mtr_policer_action handling.
2. Supports the policy API to get the color action, as the policer
action did. The color action is mapped into
rte_table_action_policer.
net/mlx5: clean meter creation management
Cleans up and splits part of the current meter management
in order to allow a better design with the policy API.
Signed-off-by: Li Zhang <lizh@nvidia.com>
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
This commit introduces the conntrack action and item.
Usually, HW offloading is stateless. For some stateful offloading,
like a TCP connection, the HW module can provide full offloading
without SW participation after the connection is established.
The basic usage is that in the first flow rule the application should
add the conntrack action and jump to the next flow table. In the
following flow rule(s) of the next table, the application should use
the conntrack item to match on the result.
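A rough sketch of this two-table usage, assuming an indirect
conntrack action handle "ct" was created beforehand with
rte_flow_action_handle_create(); the helper name, groups, and queue
are hypothetical, and error handling is trimmed:

    #include <rte_flow.h>

    static int
    offload_conntrack(uint16_t port_id, struct rte_flow_action_handle *ct)
    {
        struct rte_flow_error err;

        /* Group 0: run conntrack on TCP packets, then jump to group 1. */
        struct rte_flow_attr attr0 = { .group = 0, .ingress = 1 };
        struct rte_flow_item pat0[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
            { .type = RTE_FLOW_ITEM_TYPE_TCP },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_jump jump = { .group = 1 };
        struct rte_flow_action act0[] = {
            { .type = RTE_FLOW_ACTION_TYPE_INDIRECT, .conf = ct },
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        if (rte_flow_create(port_id, &attr0, pat0, act0, &err) == NULL)
            return -1;

        /* Group 1: match the conntrack result, forward valid packets. */
        struct rte_flow_item_conntrack spec = {
            .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
        };
        struct rte_flow_item_conntrack mask = {
            .flags = RTE_FLOW_CONNTRACK_PKT_STATE_VALID,
        };
        struct rte_flow_attr attr1 = { .group = 1, .ingress = 1 };
        struct rte_flow_item pat1[] = {
            { .type = RTE_FLOW_ITEM_TYPE_CONNTRACK,
              .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 0 };
        struct rte_flow_action act1[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        if (rte_flow_create(port_id, &attr1, pat1, act1, &err) == NULL)
            return -1;
        return 0;
    }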
A TCP connection has traffic in two directions. To set a conntrack
action context correctly, information from packets of both
directions is required.
The conntrack action should be created on one ethdev port, with the
peer ethdev port supplied as a parameter to the action. After the
context is created, it can only be used between these two ethdev
ports (dual-port mode) or on a single port. The application should
modify the action via the "rte_flow_action_handle_update" API only
before using it to create a flow rule with conntrack for the
opposite direction. This helps the driver recognize the direction
of the flow to be created, especially in single-port mode, where
traffic from both directions goes through the same ethdev port
when the application works as a forwarding engine rather than an
end point. There is no need to call the update interface if the
subsequent flow rules have nothing to be changed.
Query is supported via the "rte_flow_action_handle_query"
interface, reporting the current packet information and connection
status. Which fields can be queried depends on the HW.
For packets received during the conntrack setup, it is suggested
to re-inject them in order to make sure the conntrack module works
correctly without missing any packet. Only valid packets should
pass the conntrack; packets with invalid TCP information (e.g. out
of window) or with an invalid header (e.g. malformed) should not
pass.
Naming and definition:
https://elixir.bootlin.com/linux/latest/source/include/uapi/linux/netfilter/nf_conntrack_tcp.h
https://elixir.bootlin.com/linux/latest/source/net/netfilter/nf_conntrack_proto_tcp.c
Other reference:
https://www.usenix.org/legacy/events/sec01/invitedtalks/rooij.pdf
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Currently, a DPDK application can offload the checksum check
and have it reported in the mbuf.
However, as more and more applications offload some or all of
their logic and actions to the HW, there is a need to check packet
integrity so that the right decision can be taken.
The application logic can be positive, meaning that if the packet
is valid, jump / do actions; or negative, meaning that if the
packet is not valid, jump to SW / do actions (like drop), and add
a default flow (match all in low priority) that directs the missed
packets to the miss path.
Since rte_flow currently works in the positive way, the assumption
is that the positive way will be the common one in this case too.
When considering the best API to implement such a feature,
the following must be taken into account (in no specific order):
1. API breakage.
2. Simplicity.
3. Performance.
4. HW capabilities.
5. rte_flow limitation.
6. Flexibility.
First option: add integrity flags to each of the items.
For example, add checksum_ok to the IPv4 item.
Pros:
1. No new rte_flow item.
2. Simple, in the sense that on each item the app can see
what checks are available.
Cons:
1. API breakage.
2. Increases the number of flows, since the app can't add a global
rule and must have a dedicated flow for each flow combination; for
example, matching on ICMP traffic or UDP/TCP traffic with IPv4 /
IPv6 results in 5 flows.
Second option: dedicated item
Pros:
1. No API breakage, and there will be none for some time thanks to
the extra space (by using bits).
2. Just one flow to support ICMP or UDP/TCP traffic with IPv4 /
IPv6.
3. Simplicity: the application can look in just one place to see
all possible checks.
4. Allows future support for more tests.
Cons:
1. A new item that holds a number of fields from different items.
For a start, the following bits are suggested:
1. packet_ok - all HW checks depending on the packet layers have
passed. This may mean that on some HW such a flow should be split
into a number of flows, or fail.
2. l2_ok - all checks for layer 2 have passed.
3. l3_ok - all checks for layer 3 have passed. If the packet
doesn't have an L3 layer, this check should fail.
4. l4_ok - all checks for layer 4 have passed. If the packet
doesn't have an L4 layer, this check should fail.
5. l2_crc_ok - the layer 2 CRC is O.K.
6. ipv4_csum_ok - the IPv4 checksum is O.K. It is possible that
the IPv4 checksum is O.K. while l3_ok is 0; it is not possible
that the checksum is 0 while l3_ok is 1.
7. l4_csum_ok - the layer 4 checksum is O.K.
8. l3_len_ok - the reported layer 3 length is smaller than the
frame length.
Example of usage:
1. Check packets from all possible layers for integrity.
flow create integrity spec packet_ok = 1 mask packet_ok = 1 .....
2. Check only packets with a layer 4 (UDP / TCP):
flow create integrity spec l3_ok = 1, l4_ok = 1 mask l3_ok = 1
l4_ok = 1
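In C, the first example above could be built roughly like this (a
sketch based on struct rte_flow_item_integrity; surrounding items
elided):

    #include <rte_flow.h>

    /* Match only packets whose HW integrity checks all passed. */
    struct rte_flow_item_integrity spec = { .packet_ok = 1 };
    struct rte_flow_item_integrity mask = { .packet_ok = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_INTEGRITY,
          .spec = &spec, .mask = &mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };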
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Right now, the rte_flow_shared_action_* APIs are used for some
shared actions, like RSS or count. A shared action must be created
before being used inside a flow. These shared actions are
sometimes not really shared, but just indirect actions decoupled
from a flow.
The new functions rte_flow_action_handle_* are added to replace
the current shared functions rte_flow_shared_action_*.
There are two types of flow actions:
1. the direct (normal) actions that can be created and stored
within a flow rule. Such an action is tied to its flow rule and
cannot be reused.
2. the indirect action, in the past named shared_action. It is
created from a direct action, like count or RSS, and then used
in the flow rules with an object handle. The PMD takes care of
resolving the indirect action into the direct action when it
is referenced.
The indirect action is accessed (update / query) without any flow
rule, just via the action object handle. For example, a counter
can be queried or reset outside of any flow using it; only the
handle of the counter action object is required.
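For instance, reading and resetting a counter through its handle
alone might look like this (a sketch; the handle is assumed to
have been created from a COUNT action, and the helper name is
hypothetical):

    #include <rte_flow.h>

    static int
    read_and_reset_counter(uint16_t port_id,
                           struct rte_flow_action_handle *handle,
                           uint64_t *hits)
    {
        struct rte_flow_query_count cnt = { .reset = 1 };
        struct rte_flow_error err;

        /* No flow rule is involved; the handle alone is enough. */
        if (rte_flow_action_handle_query(port_id, handle, &cnt, &err))
            return -1;
        *hits = cnt.hits;
        return 0;
    }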
The indirect action object could be shared by different flows or
used by a single flow, depending on the direct action type and
the real-life requirements.
The handle of an indirect action object is opaque and defined in
each driver and possibly different per direct action type.
The old name "shared" is improper in a sense and should be replaced.
Since the APIs are changed from "rte_flow_shared_action*" to the
new "rte_flow_action_handle*", the testpmd application code and
command line interfaces also need to be updated for the adaptation.
The testpmd application user guide is also updated: all the "shared
action" related parts are replaced with "indirect action" to keep
the explanation correct.
The parameter of the "update" interface is also changed. A generic
pointer replaces the rte_flow_action struct pointer, due to the
following facts:
1. Some actions may not support field updates. In the example of a
counter, the only supported "update" is the reset, so passing a
rte_flow_action struct pointer is meaningless and there is not
even a corresponding action struct. Moreover, if more than one
operation should be supported for some other action, such a
pointer parameter may not meet the need.
2. Some actions may need a conditional or partial update; the
current parameter does not provide the ability to indicate which
part(s) to update.
For different types of indirect action objects, the pointer could
either be the same rte_flow_action struct pointer - in order not
to break the current driver implementations - or some wrapper
structure with bits as masks to indicate which parts are to be
updated, depending on the real needs of the corresponding direct
action. For different direct actions, the update structures of
the indirect action objects will differ.
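The conntrack action introduced earlier is an example of such a
wrapper: its update structure carries the new values plus mask
bits selecting which parts to change. A sketch (the helper name
and values are illustrative):

    #include <rte_flow.h>

    /* Flip the known direction of an existing conntrack context
     * before creating the rule for the reply direction. */
    static int
    switch_ct_direction(uint16_t port_id, struct rte_flow_action_handle *ct)
    {
        struct rte_flow_modify_conntrack mod = {
            .new_ct = { .is_original_dir = 0 },
            .direction = 1, /* update only the direction bit */
        };
        struct rte_flow_error err;

        return rte_flow_action_handle_update(port_id, ct, &mod, &err);
    }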
All the underlying PMD callbacks are moved to these new APIs.
RTE_FLOW_ACTION_TYPE_SHARED is kept for now in order not to break
the ABI. All the implementations are changed to use
RTE_FLOW_ACTION_TYPE_INDIRECT.
Since the APIs are changed from "rte_flow_shared_action*" to the
new "rte_flow_action_handle*" and the "update" interface's 3rd
input parameter is changed to a generic pointer, the mlx5 PMD that
uses these APIs needs to be adapted to the new APIs as well.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
There is no reason for the DPDK libraries to all have the
'librte_' prefix on their directory names. This prefix makes the
directory names longer and also makes it awkward to add features
referring to individual libraries in the build - should the lib
names be specified with or without the prefix?
Therefore, we can just remove the library prefix and use each
library's unique name as the directory name, i.e. 'eal' rather
than 'librte_eal'.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>