net/memif: replace master/slave arguments

Replace master/slave terms in this driver. The memory interface driver
uses a client/server architecture, so change the variable names and
device arguments to match. The previous devargs are maintained for
compatibility, but if used they cause a notice in the log.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>

commit d250589d57 (parent 20b6fd653f)
@@ -13,13 +13,13 @@ The created device transmits packets in a raw format. It can be used with
 Ethernet mode, IP mode, or Punt/Inject. At this moment, only Ethernet mode is
 supported in DPDK memif implementation.
 
-Memif works in two roles: master and slave. Slave connects to master over an
+Memif works in two roles: server and client. Client connects to server over an
 existing socket. It is also a producer of shared memory file and initializes
 the shared memory. Each interface can be connected to one peer interface
-at same time. The peer interface is identified by id parameter. Master
-creates the socket and listens for any slave connection requests. The socket
+at same time. The peer interface is identified by id parameter. Server
+creates the socket and listens for any client connection requests. The socket
 may already exist on the system. Be sure to remove any such sockets, if you
-are creating a master interface, or you will see an "Address already in use"
+are creating a server interface, or you will see an "Address already in use"
 error. Function ``rte_pmd_memif_remove()``, which removes memif interface,
 will also remove a listener socket, if it is not being used by any other
 interface.
@@ -31,58 +31,58 @@ net_memif1, and so on. Memif uses unix domain socket to transmit control
 messages. Each memif has a unique id per socket. This id is used to identify
 peer interface. If you are connecting multiple
 interfaces using same socket, be sure to specify unique ids ``id=0``, ``id=1``,
-etc. Note that if you assign a socket to a master interface it becomes a
-listener socket. Listener socket can not be used by a slave interface on same
+etc. Note that if you assign a socket to a server interface it becomes a
+listener socket. Listener socket can not be used by a client interface on same
 client.
 
 .. csv-table:: **Memif configuration options**
    :header: "Option", "Description", "Default", "Valid value"
 
    "id=0", "Used to identify peer interface", "0", "uint32_t"
-   "role=master", "Set memif role", "slave", "master|slave"
+   "role=server", "Set memif role", "client", "server|client"
   "bsize=1024", "Size of single packet buffer", "2048", "uint16_t"
   "rsize=11", "Log2 of ring size. If rsize is 10, actual ring size is 1024", "10", "1-14"
   "socket=/tmp/memif.sock", "Socket filename", "/tmp/memif.sock", "string len 108"
   "socket-abstract=no", "Set usage of abstract socket address", "yes", "yes|no"
   "mac=01:23:45:ab:cd:ef", "Mac address", "01:ab:23:cd:45:ef", ""
   "secret=abc123", "Secret is an optional security option, which if specified, must be matched by peer", "", "string len 24"
-   "zero-copy=yes", "Enable/disable zero-copy slave mode. Only relevant to slave, requires '--single-file-segments' eal argument", "no", "yes|no"
+   "zero-copy=yes", "Enable/disable zero-copy client mode. Only relevant to client, requires '--single-file-segments' eal argument", "no", "yes|no"
 
 **Connection establishment**
 
 In order to create memif connection, two memif interfaces, each in separate
-process, are needed. One interface in ``master`` role and other in
-``slave`` role. It is not possible to connect two interfaces in a single
+process, are needed. One interface in ``server`` role and other in
+``client`` role. It is not possible to connect two interfaces in a single
 process. Each interface can be connected to one interface at same time,
 identified by matching id parameter.
 
 Memif driver uses unix domain socket to exchange required information between
 memif interfaces. Socket file path is specified at interface creation see
-*Memif configuration options* table above. If socket is used by ``master``
+*Memif configuration options* table above. If socket is used by ``server``
 interface, it's marked as listener socket (in scope of current process) and
 listens to connection requests from other processes. One socket can be used by
-multiple interfaces. One process can have ``slave`` and ``master`` interfaces
+multiple interfaces. One process can have ``client`` and ``server`` interfaces
 at the same time, provided each role is assigned unique socket.
 
 For detailed information on memif control messages, see: net/memif/memif.h.
 
-Slave interface attempts to make a connection on assigned socket. Process
+Client interface attempts to make a connection on assigned socket. Process
 listening on this socket will extract the connection request and create a new
 connected socket (control channel). Then it sends the 'hello' message
-(``MEMIF_MSG_TYPE_HELLO``), containing configuration boundaries. Slave interface
+(``MEMIF_MSG_TYPE_HELLO``), containing configuration boundaries. Client interface
 adjusts its configuration accordingly, and sends 'init' message
 (``MEMIF_MSG_TYPE_INIT``). This message among others contains interface id. Driver
-uses this id to find master interface, and assigns the control channel to this
+uses this id to find server interface, and assigns the control channel to this
 interface. If such interface is found, 'ack' message (``MEMIF_MSG_TYPE_ACK``) is
-sent. Slave interface sends 'add region' message (``MEMIF_MSG_TYPE_ADD_REGION``) for
-every region allocated. Master responds to each of these messages with 'ack'
-message. Same behavior applies to rings. Slave sends 'add ring' message
-(``MEMIF_MSG_TYPE_ADD_RING``) for every initialized ring. Master again responds to
-each message with 'ack' message. To finalize the connection, slave interface
+sent. Client interface sends 'add region' message (``MEMIF_MSG_TYPE_ADD_REGION``) for
+every region allocated. Server responds to each of these messages with 'ack'
+message. Same behavior applies to rings. Client sends 'add ring' message
+(``MEMIF_MSG_TYPE_ADD_RING``) for every initialized ring. Server again responds to
+each message with 'ack' message. To finalize the connection, client interface
 sends 'connect' message (``MEMIF_MSG_TYPE_CONNECT``). Upon receiving this message
-master maps regions to its address space, initializes rings and responds with
+server maps regions to its address space, initializes rings and responds with
 'connected' message (``MEMIF_MSG_TYPE_CONNECTED``). Disconnect
-(``MEMIF_MSG_TYPE_DISCONNECT``) can be sent by both master and slave interfaces at
+(``MEMIF_MSG_TYPE_DISCONNECT``) can be sent by both server and client interfaces at
 any time, due to driver error or if the interface is being deleted.
 
 Files
@@ -96,8 +96,8 @@ Shared memory
 
 **Shared memory format**
 
-Slave is producer and master is consumer. Memory regions, are mapped shared memory files,
-created by memif slave and provided to master at connection establishment.
+Client is producer and server is consumer. Memory regions, are mapped shared memory files,
+created by memif client and provided to server at connection establishment.
 Regions contain rings and buffers. Rings and buffers can also be separated into multiple
 regions. For no-zero-copy, rings and buffers are stored inside single memory
 region to reduce the number of opened files.
@@ -172,11 +172,11 @@ Files
 - net/memif/memif.h *- descriptor and ring definitions*
 - net/memif/rte_eth_memif.c *- eth_memif_rx() eth_memif_tx()*
 
-Zero-copy slave
-~~~~~~~~~~~~~~~
+Zero-copy client
+~~~~~~~~~~~~~~~~
 
-Zero-copy slave can be enabled with memif configuration option 'zero-copy=yes'. This option
-is only relevant to slave and requires eal argument '--single-file-segments'.
+Zero-copy client can be enabled with memif configuration option 'zero-copy=yes'. This option
+is only relevant to client and requires eal argument '--single-file-segments'.
 This limitation is in place, because it is too expensive to identify memseg
 for each packet buffer, resulting in worse performance than with zero-copy disabled.
 With single file segments we can calculate offset from the beginning of the file
@@ -184,9 +184,9 @@ for each packet buffer.
 
 **Shared memory format**
 
-Region 0 is created by memif driver and contains rings. Slave interface exposes DPDK memory (memseg).
+Region 0 is created by memif driver and contains rings. Client interface exposes DPDK memory (memseg).
 Instead of using memfd_create() to create new shared file, existing memsegs are used.
-Master interface functions the same as with zero-copy disabled.
+Server interface functions the same as with zero-copy disabled.
 
 region 0:
 
@@ -212,24 +212,24 @@ Example: testpmd
 ----------------------------
 In this example we run two instances of testpmd application and transmit packets over memif.
 
-First create ``master`` interface::
+First create ``server`` interface::
 
-    #./build/app/testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=master -- -i
+    #./build/app/testpmd -l 0-1 --proc-type=primary --file-prefix=pmd1 --vdev=net_memif,role=server -- -i
 
-Now create ``slave`` interface (master must be already running so the slave will connect)::
+Now create ``client`` interface (server must be already running so the client will connect)::
 
     #./build/app/testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif -- -i
 
-You can also enable ``zero-copy`` on ``slave`` interface::
+You can also enable ``zero-copy`` on ``client`` interface::
 
     #./build/app/testpmd -l 2-3 --proc-type=primary --file-prefix=pmd2 --vdev=net_memif,zero-copy=yes --single-file-segments -- -i
 
 Start forwarding packets::
 
-    Slave:
+    Client:
         testpmd> start
 
-    Master:
+    Server:
         testpmd> start tx_first
 
 Show status::
@@ -242,9 +242,9 @@ Example: testpmd and VPP
 ------------------------
 For information on how to get and run VPP please see `<https://wiki.fd.io/view/VPP>`_.
 
-Start VPP in interactive mode (should be by default). Create memif master interface in VPP::
+Start VPP in interactive mode (should be by default). Create memif server interface in VPP::
 
-    vpp# create interface memif id 0 master no-zero-copy
+    vpp# create interface memif id 0 server no-zero-copy
     vpp# set interface state memif0/0 up
     vpp# set interface ip address memif0/0 192.168.1.1/24
 
@@ -260,7 +260,7 @@ Now create memif interface by running testpmd with these command line options::
 
     #./testpmd --vdev=net_memif,socket=/run/vpp/memif.sock -- -i
 
-Testpmd should now create memif slave interface and try to connect to master.
+Testpmd should now create memif client interface and try to connect to server.
 In testpmd set forward option to icmpecho and start forwarding::

     testpmd> set fwd icmpecho
@@ -281,7 +281,7 @@ The situation is analogous to cross connecting 2 ports of the NIC by cable.
 
 To set the loopback, just use the same socket and id with different roles::
 
-    #./testpmd --vdev=net_memif0,role=master,id=0 --vdev=net_memif1,role=slave,id=0 -- -i
+    #./testpmd --vdev=net_memif0,role=server,id=0 --vdev=net_memif1,role=client,id=0 -- -i
 
 Then start the communication::
 
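The handshake sequence described in the guide above (hello, init, add region, add ring, connect, connected) can be sketched as a small validity check over an ordered message enum. This is an illustrative sketch only; the enum names and the helper function are hypothetical, not the driver's actual `memif_msg_type_t` definitions.

```c
#include <assert.h>

/* Illustrative subset of memif control messages, in the order they
 * normally appear during connection establishment (names hypothetical). */
enum msg_type {
	MSG_HELLO,      /* server -> client: configuration boundaries */
	MSG_INIT,       /* client -> server: interface id */
	MSG_ADD_REGION, /* client -> server: one per shared memory region */
	MSG_ADD_RING,   /* client -> server: one per initialized ring */
	MSG_CONNECT,    /* client -> server: finalize connection */
	MSG_CONNECTED   /* server -> client: regions mapped, rings ready */
};

/* Returns 1 if 'next' may legally follow 'cur' in the handshake.
 * ADD_REGION and ADD_RING may repeat, matching the per-region and
 * per-ring 'ack' exchanges described in the guide. */
static int valid_next(enum msg_type cur, enum msg_type next)
{
	if (cur == MSG_ADD_REGION && next == MSG_ADD_REGION)
		return 1;
	if (cur == MSG_ADD_RING && next == MSG_ADD_RING)
		return 1;
	return next == (enum msg_type)(cur + 1);
}
```

Disconnect is deliberately omitted: per the guide it can be sent by either side at any point, so it fits no single slot in the ordering.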
@@ -12,8 +12,8 @@
 #define MEMIF_NAME_SZ		32
 
 /*
- * S2M: direction slave -> master
- * M2S: direction master -> slave
+ * C2S: direction client -> server
+ * S2C: direction server -> client
  */
 
 /*
@@ -33,8 +33,8 @@ typedef enum memif_msg_type {
 } memif_msg_type_t;
 
 typedef enum {
-	MEMIF_RING_S2M, /**< buffer ring in direction slave -> master */
-	MEMIF_RING_M2S, /**< buffer ring in direction master -> slave */
+	MEMIF_RING_C2S, /**< buffer ring in direction client -> server */
+	MEMIF_RING_S2C, /**< buffer ring in direction server -> client */
 } memif_ring_type_t;
 
 typedef enum {
@@ -56,23 +56,23 @@ typedef uint8_t memif_log2_ring_size_t;
  */
 
 /**
- * M2S
- * Contains master interfaces configuration.
+ * S2C
+ * Contains server interfaces configuration.
  */
 typedef struct __rte_packed {
 	uint8_t name[MEMIF_NAME_SZ]; /**< Client app name. In this case DPDK version */
 	memif_version_t min_version; /**< lowest supported memif version */
 	memif_version_t max_version; /**< highest supported memif version */
 	memif_region_index_t max_region; /**< maximum num of regions */
-	memif_ring_index_t max_m2s_ring; /**< maximum num of M2S ring */
-	memif_ring_index_t max_s2m_ring; /**< maximum num of S2M rings */
+	memif_ring_index_t max_s2c_ring; /**< maximum num of S2C ring */
+	memif_ring_index_t max_c2s_ring; /**< maximum num of C2S rings */
 	memif_log2_ring_size_t max_log2_ring_size; /**< maximum ring size (as log2) */
 } memif_msg_hello_t;
 
 /**
- * S2M
+ * C2S
  * Contains information required to identify interface
- * to which the slave wants to connect.
+ * to which the client wants to connect.
  */
 typedef struct __rte_packed {
 	memif_version_t version; /**< memif version */
@@ -83,8 +83,8 @@ typedef struct __rte_packed {
 } memif_msg_init_t;
 
 /**
- * S2M
- * Request master to add new shared memory region to master interface.
+ * C2S
+ * Request server to add new shared memory region to server interface.
  * Shared files file descriptor is passed in cmsghdr.
  */
 typedef struct __rte_packed {
@@ -93,12 +93,12 @@ typedef struct __rte_packed {
 } memif_msg_add_region_t;
 
 /**
- * S2M
- * Request master to add new ring to master interface.
+ * C2S
+ * Request server to add new ring to server interface.
  */
 typedef struct __rte_packed {
 	uint16_t flags; /**< flags */
-#define MEMIF_MSG_ADD_RING_FLAG_S2M 1 /**< ring is in S2M direction */
+#define MEMIF_MSG_ADD_RING_FLAG_C2S 1 /**< ring is in C2S direction */
 	memif_ring_index_t index; /**< ring index */
 	memif_region_index_t region; /**< region index on which this ring is located */
 	memif_region_offset_t offset; /**< buffer start offset */
@@ -107,23 +107,23 @@ typedef struct __rte_packed {
 } memif_msg_add_ring_t;
 
 /**
- * S2M
+ * C2S
  * Finalize connection establishment.
  */
 typedef struct __rte_packed {
-	uint8_t if_name[MEMIF_NAME_SZ]; /**< slave interface name */
+	uint8_t if_name[MEMIF_NAME_SZ]; /**< client interface name */
 } memif_msg_connect_t;
 
 /**
- * M2S
+ * S2C
  * Finalize connection establishment.
  */
 typedef struct __rte_packed {
-	uint8_t if_name[MEMIF_NAME_SZ]; /**< master interface name */
+	uint8_t if_name[MEMIF_NAME_SZ]; /**< server interface name */
 } memif_msg_connected_t;
 
 /**
- * S2M & M2S
+ * C2S & S2C
  * Disconnect interfaces.
  */
 typedef struct __rte_packed {
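As a reading aid for the renamed ring types above: a C2S ring carries client-to-server traffic, so the client transmits on it and the server receives on it, and S2C is the reverse. The sketch below encodes that mapping; the enum and function names are hypothetical, not the driver's actual identifiers.

```c
#include <string.h>

enum ring_type { RING_C2S, RING_S2C }; /* mirrors memif_ring_type_t naming */
enum role { ROLE_SERVER, ROLE_CLIENT };

/* A C2S ring is the client's Tx ring and the server's Rx ring;
 * an S2C ring is the reverse. */
static const char *ring_use(enum role r, enum ring_type t)
{
	if ((r == ROLE_CLIENT) == (t == RING_C2S))
		return "tx";
	return "rx";
}
```

This is the same mapping the driver applies when it picks `dev->data->tx_queues` versus `dev->data->rx_queues` for a given ring type.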
@@ -143,8 +143,8 @@ memif_msg_enq_hello(struct memif_control_channel *cc)
 	e->msg.type = MEMIF_MSG_TYPE_HELLO;
 	h->min_version = MEMIF_VERSION;
 	h->max_version = MEMIF_VERSION;
-	h->max_s2m_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS;
-	h->max_m2s_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS;
+	h->max_c2s_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS;
+	h->max_s2c_ring = ETH_MEMIF_MAX_NUM_Q_PAIRS;
 	h->max_region = ETH_MEMIF_MAX_REGION_NUM - 1;
 	h->max_log2_ring_size = ETH_MEMIF_MAX_LOG2_RING_SIZE;
 
@@ -165,10 +165,10 @@ memif_msg_receive_hello(struct rte_eth_dev *dev, memif_msg_t *msg)
 	}
 
 	/* Set parameters for active connection */
-	pmd->run.num_s2m_rings = RTE_MIN(h->max_s2m_ring + 1,
-					 pmd->cfg.num_s2m_rings);
-	pmd->run.num_m2s_rings = RTE_MIN(h->max_m2s_ring + 1,
-					 pmd->cfg.num_m2s_rings);
+	pmd->run.num_c2s_rings = RTE_MIN(h->max_c2s_ring + 1,
+					 pmd->cfg.num_c2s_rings);
+	pmd->run.num_s2c_rings = RTE_MIN(h->max_s2c_ring + 1,
+					 pmd->cfg.num_s2c_rings);
 	pmd->run.log2_ring_size = RTE_MIN(h->max_log2_ring_size,
 					  pmd->cfg.log2_ring_size);
 	pmd->run.pkt_buffer_size = pmd->cfg.pkt_buffer_size;
@@ -203,7 +203,7 @@ memif_msg_receive_init(struct memif_control_channel *cc, memif_msg_t *msg)
 		dev = elt->dev;
 		pmd = dev->data->dev_private;
 		if (((pmd->flags & ETH_MEMIF_FLAG_DISABLED) == 0) &&
-		    (pmd->id == i->id) && (pmd->role == MEMIF_ROLE_MASTER)) {
+		    (pmd->id == i->id) && (pmd->role == MEMIF_ROLE_SERVER)) {
 			if (pmd->flags & (ETH_MEMIF_FLAG_CONNECTING |
 					   ETH_MEMIF_FLAG_CONNECTED)) {
 				memif_msg_enq_disconnect(cc,
@@ -300,21 +300,21 @@ memif_msg_receive_add_ring(struct rte_eth_dev *dev, memif_msg_t *msg, int fd)
 	}
 
 	/* check if we have enough queues */
-	if (ar->flags & MEMIF_MSG_ADD_RING_FLAG_S2M) {
-		if (ar->index >= pmd->cfg.num_s2m_rings) {
+	if (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) {
+		if (ar->index >= pmd->cfg.num_c2s_rings) {
 			memif_msg_enq_disconnect(pmd->cc, "Invalid ring index", 0);
 			return -1;
 		}
-		pmd->run.num_s2m_rings++;
+		pmd->run.num_c2s_rings++;
 	} else {
-		if (ar->index >= pmd->cfg.num_m2s_rings) {
+		if (ar->index >= pmd->cfg.num_s2c_rings) {
 			memif_msg_enq_disconnect(pmd->cc, "Invalid ring index", 0);
 			return -1;
 		}
-		pmd->run.num_m2s_rings++;
+		pmd->run.num_s2c_rings++;
 	}
 
-	mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_S2M) ?
+	mq = (ar->flags & MEMIF_MSG_ADD_RING_FLAG_C2S) ?
 	     dev->data->rx_queues[ar->index] : dev->data->tx_queues[ar->index];
 
 	mq->intr_handle.fd = fd;
@@ -449,7 +449,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
 		return -1;
 
 	ar = &e->msg.add_ring;
-	mq = (type == MEMIF_RING_S2M) ? dev->data->tx_queues[idx] :
+	mq = (type == MEMIF_RING_C2S) ? dev->data->tx_queues[idx] :
 	     dev->data->rx_queues[idx];
 
 	e->msg.type = MEMIF_MSG_TYPE_ADD_RING;
@@ -458,7 +458,7 @@ memif_msg_enq_add_ring(struct rte_eth_dev *dev, uint8_t idx,
 	ar->offset = mq->ring_offset;
 	ar->region = mq->region;
 	ar->log2_ring_size = mq->log2_ring_size;
-	ar->flags = (type == MEMIF_RING_S2M) ? MEMIF_MSG_ADD_RING_FLAG_S2M : 0;
+	ar->flags = (type == MEMIF_RING_C2S) ? MEMIF_MSG_ADD_RING_FLAG_C2S : 0;
 	ar->private_hdr_size = 0;
 
 	return 0;
@@ -575,8 +575,8 @@ memif_disconnect(struct rte_eth_dev *dev)
 		rte_spinlock_unlock(&pmd->cc_lock);
 
 	/* unconfig interrupts */
-	for (i = 0; i < pmd->cfg.num_s2m_rings; i++) {
-		if (pmd->role == MEMIF_ROLE_SLAVE) {
+	for (i = 0; i < pmd->cfg.num_c2s_rings; i++) {
+		if (pmd->role == MEMIF_ROLE_CLIENT) {
 			if (dev->data->tx_queues != NULL)
 				mq = dev->data->tx_queues[i];
 			else
@@ -592,8 +592,8 @@ memif_disconnect(struct rte_eth_dev *dev)
 			mq->intr_handle.fd = -1;
 		}
 	}
-	for (i = 0; i < pmd->cfg.num_m2s_rings; i++) {
-		if (pmd->role == MEMIF_ROLE_MASTER) {
+	for (i = 0; i < pmd->cfg.num_s2c_rings; i++) {
+		if (pmd->role == MEMIF_ROLE_SERVER) {
 			if (dev->data->tx_queues != NULL)
 				mq = dev->data->tx_queues[i];
 			else
@@ -616,7 +616,7 @@ memif_disconnect(struct rte_eth_dev *dev)
 	memset(&pmd->run, 0, sizeof(pmd->run));
 
 	MIF_LOG(DEBUG, "Disconnected, id: %d, role: %s.", pmd->id,
-		(pmd->role == MEMIF_ROLE_MASTER) ? "master" : "slave");
+		(pmd->role == MEMIF_ROLE_SERVER) ? "server" : "client");
 }
 
 static int
@@ -694,15 +694,15 @@ memif_msg_receive(struct memif_control_channel *cc)
 			if (ret < 0)
 				goto exit;
 		}
-		for (i = 0; i < pmd->run.num_s2m_rings; i++) {
+		for (i = 0; i < pmd->run.num_c2s_rings; i++) {
 			ret = memif_msg_enq_add_ring(cc->dev, i,
-						     MEMIF_RING_S2M);
+						     MEMIF_RING_C2S);
 			if (ret < 0)
 				goto exit;
 		}
-		for (i = 0; i < pmd->run.num_m2s_rings; i++) {
+		for (i = 0; i < pmd->run.num_s2c_rings; i++) {
 			ret = memif_msg_enq_add_ring(cc->dev, i,
-						     MEMIF_RING_M2S);
+						     MEMIF_RING_S2C);
 			if (ret < 0)
 				goto exit;
 		}
@@ -969,7 +969,7 @@ memif_socket_init(struct rte_eth_dev *dev, const char *socket_filename)
 	ret = rte_hash_lookup_data(hash, key, (void **)&socket);
 	if (ret < 0) {
 		socket = memif_socket_create(key,
-					     (pmd->role == MEMIF_ROLE_SLAVE) ? 0 : 1,
+					     (pmd->role == MEMIF_ROLE_CLIENT) ? 0 : 1,
 					     pmd->flags & ETH_MEMIF_FLAG_SOCKET_ABSTRACT);
 		if (socket == NULL)
 			return -1;
@@ -1046,7 +1046,7 @@ memif_socket_remove_device(struct rte_eth_dev *dev)
 }
 
 int
-memif_connect_master(struct rte_eth_dev *dev)
+memif_connect_server(struct rte_eth_dev *dev)
 {
 	struct pmd_internals *pmd = dev->data->dev_private;
 
@@ -1057,7 +1057,7 @@ memif_connect_master(struct rte_eth_dev *dev)
 }
 
 int
-memif_connect_slave(struct rte_eth_dev *dev)
+memif_connect_client(struct rte_eth_dev *dev)
 {
 	int sockfd;
 	int ret;
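The 'hello' handling above clamps the locally configured ring counts to what the peer advertised: the peer sends the highest usable ring index, so the usable count is that index plus one, and the active count is the minimum of the two. The standalone sketch below reproduces that RTE_MIN() negotiation; the function name is illustrative, not part of the driver.

```c
#include <stdint.h>

/* Mirrors the negotiation in memif_msg_receive_hello(): the peer's
 * max_*_ring field is an index, so peer_max_ring + 1 rings are usable,
 * and the run-time count is the smaller of that and our configuration. */
static uint16_t negotiate_rings(uint16_t peer_max_ring, uint16_t cfg_rings)
{
	uint16_t peer_cnt = (uint16_t)(peer_max_ring + 1);

	return peer_cnt < cfg_rings ? peer_cnt : cfg_rings;
}
```

The same clamp-to-minimum pattern is applied to `log2_ring_size`, so both sides always agree on ring geometry before any region or ring is announced.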
@@ -60,7 +60,8 @@ void memif_disconnect(struct rte_eth_dev *dev);
  *  - On success, zero.
  *  - On failure, a negative value.
  */
-int memif_connect_master(struct rte_eth_dev *dev);
+int memif_connect_server(struct rte_eth_dev *dev);
+
 
 /**
  * If device is properly configured, send connection request.
@@ -71,7 +72,7 @@ int memif_connect_master(struct rte_eth_dev *dev);
  *  - On success, zero.
  *  - On failure, a negative value.
  */
-int memif_connect_slave(struct rte_eth_dev *dev);
+int memif_connect_client(struct rte_eth_dev *dev);
 
 struct memif_socket_dev_list_elt {
 	TAILQ_ENTRY(memif_socket_dev_list_elt) next;
@ -134,7 +134,7 @@ memif_mp_request_regions(struct rte_eth_dev *dev)
|
||||
struct memif_region *r;
|
||||
struct pmd_process_private *proc_private = dev->process_private;
|
||||
struct pmd_internals *pmd = dev->data->dev_private;
|
||||
/* in case of zero-copy slave, only request region 0 */
|
||||
/* in case of zero-copy client, only request region 0 */
|
||||
uint16_t max_region_num = (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) ?
|
||||
1 : ETH_MEMIF_MAX_REGION_NUM;
|
||||
|
||||
@ -212,7 +212,7 @@ memif_get_ring(struct pmd_internals *pmd, struct pmd_process_private *proc_priva
|
||||
int ring_size = sizeof(memif_ring_t) + sizeof(memif_desc_t) *
|
||||
(1 << pmd->run.log2_ring_size);
|
||||
|
||||
p = (uint8_t *)p + (ring_num + type * pmd->run.num_s2m_rings) * ring_size;
|
||||
p = (uint8_t *)p + (ring_num + type * pmd->run.num_c2s_rings) * ring_size;
|
||||
|
||||
return (memif_ring_t *)p;
|
||||
}
|
||||
@ -247,7 +247,7 @@ memif_get_buffer(struct pmd_process_private *proc_private, memif_desc_t *d)
|
||||
return ((uint8_t *)proc_private->regions[d->region]->addr + d->offset);
|
||||
}
|
||||
|
||||
/* Free mbufs received by master */
|
||||
/* Free mbufs received by server */
|
||||
static void
|
||||
memif_free_stored_mbufs(struct pmd_process_private *proc_private, struct memif_queue *mq)
|
||||
{
|
||||
@ -258,7 +258,7 @@ memif_free_stored_mbufs(struct pmd_process_private *proc_private, struct memif_q
|
||||
/* FIXME: improve performance */
|
||||
/* The ring->tail acts as a guard variable between Tx and Rx
|
||||
* threads, so using load-acquire pairs with store-release
|
||||
* in function eth_memif_rx for S2M queues.
|
||||
* in function eth_memif_rx for C2S queues.
|
||||
*/
|
||||
cur_tail = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
|
||||
while (mq->last_tail != cur_tail) {
|
||||
@ -330,7 +330,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
ring_size = 1 << mq->log2_ring_size;
|
||||
mask = ring_size - 1;
|
||||
|
||||
if (type == MEMIF_RING_S2M) {
|
||||
if (type == MEMIF_RING_C2S) {
|
||||
cur_slot = mq->last_head;
|
||||
last_slot = __atomic_load_n(&ring->head, __ATOMIC_ACQUIRE);
|
||||
} else {
|
||||
@ -404,7 +404,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
}
|
||||
|
||||
no_free_bufs:
|
||||
if (type == MEMIF_RING_S2M) {
|
||||
if (type == MEMIF_RING_C2S) {
|
||||
__atomic_store_n(&ring->tail, cur_slot, __ATOMIC_RELEASE);
|
||||
mq->last_head = cur_slot;
|
||||
} else {
|
||||
@ -412,7 +412,7 @@ eth_memif_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
}
|
||||
|
||||
refill:
|
||||
if (type == MEMIF_RING_M2S) {
|
||||
if (type == MEMIF_RING_S2C) {
|
||||
/* ring->head is updated by the receiver and this function
|
||||
* is called in the context of receiver thread. The loads in
|
||||
* the receiver do not need to synchronize with its own stores.
|
||||
@ -515,7 +515,7 @@ eth_memif_rx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
|
||||
mq->last_tail = cur_slot;
|
||||
|
||||
/* Supply master with new buffers */
|
||||
/* Supply server with new buffers */
|
||||
refill:
|
||||
/* ring->head is updated by the receiver and this function
|
||||
* is called in the context of receiver thread. The loads in
|
||||
@ -591,8 +591,8 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
ring_size = 1 << mq->log2_ring_size;
|
||||
mask = ring_size - 1;
|
||||
|
||||
if (type == MEMIF_RING_S2M) {
|
||||
/* For S2M queues ring->head is updated by the sender and
|
||||
if (type == MEMIF_RING_C2S) {
|
||||
/* For C2S queues ring->head is updated by the sender and
|
||||
* this function is called in the context of sending thread.
|
||||
* The loads in the sender do not need to synchronize with
|
||||
* its own stores. Hence, the following load can be a
|
||||
@ -602,7 +602,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
n_free = ring_size - slot +
|
||||
__atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
|
||||
} else {
|
||||
/* For M2S queues ring->tail is updated by the sender and
|
||||
/* For S2C queues ring->tail is updated by the sender and
|
||||
* this function is called in the context of sending thread.
|
||||
* The loads in the sender do not need to synchronize with
|
||||
* its own stores. Hence, the following load can be a
|
||||
@ -619,7 +619,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
saved_slot = slot;
|
||||
d0 = &ring->desc[slot & mask];
|
||||
dst_off = 0;
|
||||
dst_len = (type == MEMIF_RING_S2M) ?
|
||||
dst_len = (type == MEMIF_RING_C2S) ?
|
||||
pmd->run.pkt_buffer_size : d0->length;
|
||||
|
||||
next_in_chain:
|
||||
@ -634,7 +634,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
d0->flags |= MEMIF_DESC_FLAG_NEXT;
|
||||
d0 = &ring->desc[slot & mask];
|
||||
dst_off = 0;
|
||||
dst_len = (type == MEMIF_RING_S2M) ?
|
||||
dst_len = (type == MEMIF_RING_C2S) ?
|
||||
pmd->run.pkt_buffer_size : d0->length;
|
||||
d0->flags = 0;
|
||||
} else {
|
||||
@ -669,7 +669,7 @@ eth_memif_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
}
|
||||
|
||||
no_free_slots:
|
||||
if (type == MEMIF_RING_S2M)
|
||||
if (type == MEMIF_RING_C2S)
|
||||
__atomic_store_n(&ring->head, slot, __ATOMIC_RELEASE);
|
||||
else
|
||||
__atomic_store_n(&ring->tail, slot, __ATOMIC_RELEASE);
|
||||
@ -699,7 +699,7 @@ memif_tx_one_zc(struct pmd_process_private *proc_private, struct memif_queue *mq
|
||||
next_in_chain:
|
||||
/* store pointer to mbuf to free it later */
|
||||
mq->buffers[slot & mask] = mbuf;
|
||||
/* Increment refcnt to make sure the buffer is not freed before master
|
||||
/* Increment refcnt to make sure the buffer is not freed before server
|
||||
* receives it. (current segment)
|
||||
*/
|
||||
rte_mbuf_refcnt_update(mbuf, 1);
|
||||
@ -751,11 +751,11 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
ring_size = 1 << mq->log2_ring_size;
|
||||
mask = ring_size - 1;
|
||||
|
||||
/* free mbufs received by master */
|
||||
/* free mbufs received by server */
|
||||
memif_free_stored_mbufs(proc_private, mq);
|
||||
|
||||
/* ring type always MEMIF_RING_S2M */
|
||||
/* For S2M queues ring->head is updated by the sender and
|
||||
/* ring type always MEMIF_RING_C2S */
|
||||
/* For C2S queues ring->head is updated by the sender and
|
||||
* this function is called in the context of sending thread.
|
||||
* The loads in the sender do not need to synchronize with
|
||||
* its own stores. Hence, the following load can be a
|
||||
@ -816,10 +816,10 @@ eth_memif_tx_zc(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
|
||||
}
|
||||
|
||||
no_free_slots:
|
||||
/* ring type always MEMIF_RING_S2M */
|
||||
/* ring type always MEMIF_RING_C2S */
|
||||
/* The ring->head acts as a guard variable between Tx and Rx
|
||||
* threads, so using store-release pairs with load-acquire
|
||||
* in function eth_memif_rx for S2M rings.
|
||||
* in function eth_memif_rx for C2S rings.
|
||||
*/
|
||||
__atomic_store_n(&ring->head, slot, __ATOMIC_RELEASE);
|
||||
|
||||
@ -933,7 +933,7 @@ memif_region_init_shm(struct rte_eth_dev *dev, uint8_t has_buffers)
|
||||
}
|
||||
|
||||
/* calculate buffer offset */
|
||||
r->pkt_buffer_offset = (pmd->run.num_s2m_rings + pmd->run.num_m2s_rings) *
|
||||
r->pkt_buffer_offset = (pmd->run.num_c2s_rings + pmd->run.num_s2c_rings) *
|
||||
(sizeof(memif_ring_t) + sizeof(memif_desc_t) *
|
||||
(1 << pmd->run.log2_ring_size));
|
||||
|
||||
@ -942,8 +942,8 @@ memif_region_init_shm(struct rte_eth_dev *dev, uint8_t has_buffers)
|
||||
if (has_buffers == 1)
|
||||
r->region_size += (uint32_t)(pmd->run.pkt_buffer_size *
|
||||
(1 << pmd->run.log2_ring_size) *
|
||||
(pmd->run.num_s2m_rings +
|
||||
pmd->run.num_m2s_rings));
|
||||
(pmd->run.num_c2s_rings +
|
||||
pmd->run.num_s2c_rings));
|
||||
|
||||
memset(shm_name, 0, sizeof(char) * ETH_MEMIF_SHM_NAME_SIZE);
|
||||
snprintf(shm_name, ETH_MEMIF_SHM_NAME_SIZE, "memif_region_%d",
|
||||
@@ -1028,8 +1028,8 @@ memif_init_rings(struct rte_eth_dev *dev)
 	int i, j;
 	uint16_t slot;
 
-	for (i = 0; i < pmd->run.num_s2m_rings; i++) {
-		ring = memif_get_ring(pmd, proc_private, MEMIF_RING_S2M, i);
+	for (i = 0; i < pmd->run.num_c2s_rings; i++) {
+		ring = memif_get_ring(pmd, proc_private, MEMIF_RING_C2S, i);
 		__atomic_store_n(&ring->head, 0, __ATOMIC_RELAXED);
 		__atomic_store_n(&ring->tail, 0, __ATOMIC_RELAXED);
 		ring->cookie = MEMIF_COOKIE;
@@ -1048,8 +1048,8 @@ memif_init_rings(struct rte_eth_dev *dev)
 		}
 	}
 
-	for (i = 0; i < pmd->run.num_m2s_rings; i++) {
-		ring = memif_get_ring(pmd, proc_private, MEMIF_RING_M2S, i);
+	for (i = 0; i < pmd->run.num_s2c_rings; i++) {
+		ring = memif_get_ring(pmd, proc_private, MEMIF_RING_S2C, i);
 		__atomic_store_n(&ring->head, 0, __ATOMIC_RELAXED);
 		__atomic_store_n(&ring->tail, 0, __ATOMIC_RELAXED);
 		ring->cookie = MEMIF_COOKIE;
@@ -1059,7 +1059,7 @@ memif_init_rings(struct rte_eth_dev *dev)
 			continue;
 
 		for (j = 0; j < (1 << pmd->run.log2_ring_size); j++) {
-			slot = (i + pmd->run.num_s2m_rings) *
+			slot = (i + pmd->run.num_c2s_rings) *
 			    (1 << pmd->run.log2_ring_size) + j;
 			ring->desc[j].region = 0;
 			ring->desc[j].offset =
@@ -1070,7 +1070,7 @@ memif_init_rings(struct rte_eth_dev *dev)
 	}
 }
 
-/* called only by slave */
+/* called only by client */
 static int
 memif_init_queues(struct rte_eth_dev *dev)
 {
@@ -1078,12 +1078,12 @@ memif_init_queues(struct rte_eth_dev *dev)
 	struct memif_queue *mq;
 	int i;
 
-	for (i = 0; i < pmd->run.num_s2m_rings; i++) {
+	for (i = 0; i < pmd->run.num_c2s_rings; i++) {
 		mq = dev->data->tx_queues[i];
 		mq->log2_ring_size = pmd->run.log2_ring_size;
 		/* queues located only in region 0 */
 		mq->region = 0;
-		mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2M, i);
+		mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_C2S, i);
 		mq->last_head = 0;
 		mq->last_tail = 0;
 		mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
@@ -1101,12 +1101,12 @@ memif_init_queues(struct rte_eth_dev *dev)
 		}
 	}
 
-	for (i = 0; i < pmd->run.num_m2s_rings; i++) {
+	for (i = 0; i < pmd->run.num_s2c_rings; i++) {
 		mq = dev->data->rx_queues[i];
 		mq->log2_ring_size = pmd->run.log2_ring_size;
 		/* queues located only in region 0 */
 		mq->region = 0;
-		mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_M2S, i);
+		mq->ring_offset = memif_get_ring_offset(dev, mq, MEMIF_RING_S2C, i);
 		mq->last_head = 0;
 		mq->last_tail = 0;
 		mq->intr_handle.fd = eventfd(0, EFD_NONBLOCK);
@@ -1178,8 +1178,8 @@ memif_connect(struct rte_eth_dev *dev)
 	}
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
-		for (i = 0; i < pmd->run.num_s2m_rings; i++) {
-			mq = (pmd->role == MEMIF_ROLE_SLAVE) ?
+		for (i = 0; i < pmd->run.num_c2s_rings; i++) {
+			mq = (pmd->role == MEMIF_ROLE_CLIENT) ?
 			    dev->data->tx_queues[i] : dev->data->rx_queues[i];
 			ring = memif_get_ring_from_queue(proc_private, mq);
 			if (ring == NULL || ring->cookie != MEMIF_COOKIE) {
@@ -1191,11 +1191,11 @@ memif_connect(struct rte_eth_dev *dev)
 			mq->last_head = 0;
 			mq->last_tail = 0;
 			/* enable polling mode */
-			if (pmd->role == MEMIF_ROLE_MASTER)
+			if (pmd->role == MEMIF_ROLE_SERVER)
 				ring->flags = MEMIF_RING_FLAG_MASK_INT;
 		}
-		for (i = 0; i < pmd->run.num_m2s_rings; i++) {
-			mq = (pmd->role == MEMIF_ROLE_SLAVE) ?
+		for (i = 0; i < pmd->run.num_s2c_rings; i++) {
+			mq = (pmd->role == MEMIF_ROLE_CLIENT) ?
 			    dev->data->rx_queues[i] : dev->data->tx_queues[i];
 			ring = memif_get_ring_from_queue(proc_private, mq);
 			if (ring == NULL || ring->cookie != MEMIF_COOKIE) {
@@ -1207,7 +1207,7 @@ memif_connect(struct rte_eth_dev *dev)
 			mq->last_head = 0;
 			mq->last_tail = 0;
 			/* enable polling mode */
-			if (pmd->role == MEMIF_ROLE_SLAVE)
+			if (pmd->role == MEMIF_ROLE_CLIENT)
 				ring->flags = MEMIF_RING_FLAG_MASK_INT;
 		}
 
||||
@ -1226,11 +1226,11 @@ memif_dev_start(struct rte_eth_dev *dev)
|
||||
int ret = 0;
|
||||
|
||||
switch (pmd->role) {
|
||||
case MEMIF_ROLE_SLAVE:
|
||||
ret = memif_connect_slave(dev);
|
||||
case MEMIF_ROLE_CLIENT:
|
||||
ret = memif_connect_client(dev);
|
||||
break;
|
||||
case MEMIF_ROLE_MASTER:
|
||||
ret = memif_connect_master(dev);
|
||||
case MEMIF_ROLE_SERVER:
|
||||
ret = memif_connect_server(dev);
|
||||
break;
|
||||
default:
|
||||
MIF_LOG(ERR, "Unknown role: %d.", pmd->role);
|
||||
@@ -1272,17 +1272,17 @@ memif_dev_configure(struct rte_eth_dev *dev)
 	struct pmd_internals *pmd = dev->data->dev_private;
 
 	/*
-	 * SLAVE - TXQ
-	 * MASTER - RXQ
+	 * CLIENT - TXQ
+	 * SERVER - RXQ
 	 */
-	pmd->cfg.num_s2m_rings = (pmd->role == MEMIF_ROLE_SLAVE) ?
+	pmd->cfg.num_c2s_rings = (pmd->role == MEMIF_ROLE_CLIENT) ?
 				  dev->data->nb_tx_queues : dev->data->nb_rx_queues;
 
 	/*
-	 * SLAVE - RXQ
-	 * MASTER - TXQ
+	 * CLIENT - RXQ
+	 * SERVER - TXQ
 	 */
-	pmd->cfg.num_m2s_rings = (pmd->role == MEMIF_ROLE_SLAVE) ?
+	pmd->cfg.num_s2c_rings = (pmd->role == MEMIF_ROLE_CLIENT) ?
 				  dev->data->nb_rx_queues : dev->data->nb_tx_queues;
 
 	return 0;
@@ -1305,7 +1305,7 @@ memif_tx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	mq->type =
-	    (pmd->role == MEMIF_ROLE_SLAVE) ? MEMIF_RING_S2M : MEMIF_RING_M2S;
+	    (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_C2S : MEMIF_RING_S2C;
 	mq->n_pkts = 0;
 	mq->n_bytes = 0;
 	mq->intr_handle.fd = -1;
@@ -1333,7 +1333,7 @@ memif_rx_queue_setup(struct rte_eth_dev *dev,
 		return -ENOMEM;
 	}
 
-	mq->type = (pmd->role == MEMIF_ROLE_SLAVE) ? MEMIF_RING_M2S : MEMIF_RING_S2M;
+	mq->type = (pmd->role == MEMIF_ROLE_CLIENT) ? MEMIF_RING_S2C : MEMIF_RING_C2S;
 	mq->n_pkts = 0;
 	mq->n_bytes = 0;
 	mq->intr_handle.fd = -1;
@@ -1388,8 +1388,8 @@ memif_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	stats->opackets = 0;
 	stats->obytes = 0;
 
-	tmp = (pmd->role == MEMIF_ROLE_SLAVE) ? pmd->run.num_s2m_rings :
-	    pmd->run.num_m2s_rings;
+	tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_c2s_rings :
+	    pmd->run.num_s2c_rings;
 	nq = (tmp < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? tmp :
 	    RTE_ETHDEV_QUEUE_STAT_CNTRS;
 
@@ -1402,8 +1402,8 @@ memif_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 		stats->ibytes += mq->n_bytes;
 	}
 
-	tmp = (pmd->role == MEMIF_ROLE_SLAVE) ? pmd->run.num_m2s_rings :
-	    pmd->run.num_s2m_rings;
+	tmp = (pmd->role == MEMIF_ROLE_CLIENT) ? pmd->run.num_s2c_rings :
+	    pmd->run.num_c2s_rings;
 	nq = (tmp < RTE_ETHDEV_QUEUE_STAT_CNTRS) ? tmp :
 	    RTE_ETHDEV_QUEUE_STAT_CNTRS;
 
@@ -1425,14 +1425,14 @@ memif_stats_reset(struct rte_eth_dev *dev)
 	int i;
 	struct memif_queue *mq;
 
-	for (i = 0; i < pmd->run.num_s2m_rings; i++) {
-		mq = (pmd->role == MEMIF_ROLE_SLAVE) ? dev->data->tx_queues[i] :
+	for (i = 0; i < pmd->run.num_c2s_rings; i++) {
+		mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->tx_queues[i] :
 		    dev->data->rx_queues[i];
 		mq->n_pkts = 0;
 		mq->n_bytes = 0;
 	}
-	for (i = 0; i < pmd->run.num_m2s_rings; i++) {
-		mq = (pmd->role == MEMIF_ROLE_SLAVE) ? dev->data->rx_queues[i] :
+	for (i = 0; i < pmd->run.num_s2c_rings; i++) {
+		mq = (pmd->role == MEMIF_ROLE_CLIENT) ? dev->data->rx_queues[i] :
 		    dev->data->tx_queues[i];
 		mq->n_pkts = 0;
 		mq->n_bytes = 0;
@@ -1513,8 +1513,8 @@ memif_create(struct rte_vdev_device *vdev, enum memif_role_t role,
 	pmd->flags = flags;
 	pmd->flags |= ETH_MEMIF_FLAG_DISABLED;
 	pmd->role = role;
-	/* Zero-copy flag irelevant to master. */
-	if (pmd->role == MEMIF_ROLE_MASTER)
+	/* Zero-copy flag irelevant to server. */
+	if (pmd->role == MEMIF_ROLE_SERVER)
 		pmd->flags &= ~ETH_MEMIF_FLAG_ZERO_COPY;
 
 	ret = memif_socket_init(eth_dev, socket_filename);
@@ -1527,8 +1527,8 @@ memif_create(struct rte_vdev_device *vdev, enum memif_role_t role,
 
 	pmd->cfg.log2_ring_size = log2_ring_size;
 	/* set in .dev_configure() */
-	pmd->cfg.num_s2m_rings = 0;
-	pmd->cfg.num_m2s_rings = 0;
+	pmd->cfg.num_c2s_rings = 0;
+	pmd->cfg.num_s2c_rings = 0;
 
 	pmd->cfg.pkt_buffer_size = pkt_buffer_size;
 	rte_spinlock_init(&pmd->cc_lock);
@@ -1562,10 +1562,16 @@ memif_set_role(const char *key __rte_unused, const char *value,
 {
 	enum memif_role_t *role = (enum memif_role_t *)extra_args;
 
-	if (strstr(value, "master") != NULL) {
-		*role = MEMIF_ROLE_MASTER;
+	if (strstr(value, "server") != NULL) {
+		*role = MEMIF_ROLE_SERVER;
+	} else if (strstr(value, "client") != NULL) {
+		*role = MEMIF_ROLE_CLIENT;
+	} else if (strstr(value, "master") != NULL) {
+		MIF_LOG(NOTICE, "Role argument \"master\" is deprecated, use \"server\"");
+		*role = MEMIF_ROLE_SERVER;
 	} else if (strstr(value, "slave") != NULL) {
-		*role = MEMIF_ROLE_SLAVE;
+		MIF_LOG(NOTICE, "Role argument \"slave\" is deprecated, use \"client\"");
+		*role = MEMIF_ROLE_CLIENT;
 	} else {
 		MIF_LOG(ERR, "Unknown role: %s.", value);
 		return -EINVAL;
@@ -1724,7 +1730,7 @@ rte_pmd_memif_probe(struct rte_vdev_device *vdev)
 	int ret = 0;
 	struct rte_kvargs *kvlist;
 	const char *name = rte_vdev_device_name(vdev);
-	enum memif_role_t role = MEMIF_ROLE_SLAVE;
+	enum memif_role_t role = MEMIF_ROLE_CLIENT;
 	memif_interface_id_t id = 0;
 	uint16_t pkt_buffer_size = ETH_MEMIF_DEFAULT_PKT_BUFFER_SIZE;
 	memif_log2_ring_size_t log2_ring_size = ETH_MEMIF_DEFAULT_RING_SIZE;
@@ -1863,7 +1869,7 @@ RTE_PMD_REGISTER_VDEV(net_memif, pmd_memif_drv);
 
 RTE_PMD_REGISTER_PARAM_STRING(net_memif,
 			      ETH_MEMIF_ID_ARG "=<int>"
-			      ETH_MEMIF_ROLE_ARG "=master|slave"
+			      ETH_MEMIF_ROLE_ARG "=server|client"
 			      ETH_MEMIF_PKT_BUFFER_SIZE_ARG "=<int>"
 			      ETH_MEMIF_RING_SIZE_ARG "=<int>"
 			      ETH_MEMIF_SOCKET_ARG "=<string>"
@@ -36,8 +36,8 @@ extern int memif_logtype;
 		"%s(): " fmt "\n", __func__, ##args)
 
 enum memif_role_t {
-	MEMIF_ROLE_MASTER,
-	MEMIF_ROLE_SLAVE,
+	MEMIF_ROLE_SERVER,
+	MEMIF_ROLE_CLIENT,
 };
 
 struct memif_region {
@@ -64,8 +64,8 @@ struct memif_queue {
 	uint16_t last_tail;	/**< last ring tail */
 
 	struct rte_mbuf **buffers;
-	/**< Stored mbufs. Used in zero-copy tx. Slave stores transmitted
-	 * mbufs to free them once master has received them.
+	/**< Stored mbufs. Used in zero-copy tx. Client stores transmitted
+	 * mbufs to free them once server has received them.
 	 */
 
 	/* rx/tx info */
@@ -104,15 +104,15 @@ struct pmd_internals {
 
 	struct {
 		memif_log2_ring_size_t log2_ring_size;	/**< log2 of ring size */
-		uint8_t num_s2m_rings;	/**< number of slave to master rings */
-		uint8_t num_m2s_rings;	/**< number of master to slave rings */
+		uint8_t num_c2s_rings;	/**< number of client to server rings */
+		uint8_t num_s2c_rings;	/**< number of server to client rings */
 		uint16_t pkt_buffer_size;	/**< buffer size */
 	} cfg;	/**< Configured parameters (max values) */
 
 	struct {
 		memif_log2_ring_size_t log2_ring_size;	/**< log2 of ring size */
-		uint8_t num_s2m_rings;	/**< number of slave to master rings */
-		uint8_t num_m2s_rings;	/**< number of master to slave rings */
+		uint8_t num_c2s_rings;	/**< number of client to server rings */
+		uint8_t num_s2c_rings;	/**< number of server to client rings */
 		uint16_t pkt_buffer_size;	/**< buffer size */
 	} run;
 	/**< Parameters used in active connection */
@@ -139,7 +139,7 @@ void memif_free_regions(struct rte_eth_dev *dev);
 
 /**
  * Finalize connection establishment process. Map shared memory file
- * (master role), initialize ring queue, set link status up.
+ * (server role), initialize ring queue, set link status up.
  *
  * @param dev
  *   memif device
@@ -151,7 +151,7 @@ int memif_connect(struct rte_eth_dev *dev);
 
 /**
  * Create shared memory file and initialize ring queue.
- * Only called by slave when establishing connection
+ * Only called by client when establishing connection
  *
  * @param dev
  *   memif device