event/dlb2: add infos get and configure
Add support for configuring the DLB2 hardware. In particular, this patch
configures the DLB2 hardware's scheduling domain, such that it is provisioned
with the requested number of ports and queues, provided sufficient resources
are available. Individual queues and ports are configured later in port setup
and eventdev start.

Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
Reviewed-by: Gage Eads <gage.eads@intel.com>
parent e88753dcc1
commit f3cad285bb
@@ -32,9 +32,125 @@ detailed understanding of the hardware, but these details are important when
writing high-performance code. This section describes the places where the
eventdev API and DLB2 misalign.

Scheduling Domain Configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 32 scheduling domains in the DLB2.
When one is configured, it allocates load-balanced and
directed queues, ports, credits, and other hardware resources. Some
resource allocations are user-controlled -- the number of queues, for example
-- and others, like credit pools (one directed and one load-balanced pool per
scheduling domain), are not.

The DLB2 is a closed-system eventdev, and as such the ``nb_events_limit``
device setup argument and the per-port ``new_event_threshold`` argument apply
as defined in the eventdev header file. The limit is applied to all enqueues,
regardless of whether they consume a directed or load-balanced credit.

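Both limits flow in through the standard eventdev setup calls. A minimal C
fragment (the device/port IDs and numeric values below are hypothetical, and
error checks are omitted):

.. code-block:: c

    struct rte_event_dev_config dev_conf = {0};
    struct rte_event_port_conf port_conf;

    /* Device-wide limit on in-flight new events. */
    dev_conf.nb_events_limit = 2048;
    /* ... nb_event_queues, nb_event_ports, etc. are also set here ... */
    rte_event_dev_configure(0 /* dev_id */, &dev_conf);

    /* Per-port back-pressure threshold for RTE_EVENT_OP_NEW enqueues. */
    rte_event_port_default_conf_get(0 /* dev_id */, 0 /* port_id */, &port_conf);
    port_conf.new_event_threshold = 1024;
    rte_event_port_setup(0 /* dev_id */, 0 /* port_id */, &port_conf);
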
Load-balanced and Directed Ports
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

DLB2 ports come in two flavors: load-balanced and directed. The eventdev API
does not have the same concept, but it has a similar one: ports and queues that
are singly-linked (i.e. linked to a single queue or port, respectively).

The ``rte_event_dev_info_get()`` function reports the number of available
event ports and queues (among other things). For the DLB2 PMD,
``max_event_ports`` and ``max_event_queues`` report the number of available
load-balanced ports and queues, and
``max_single_link_event_port_queue_pairs`` reports the number of available
directed ports and queues.

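A C fragment showing where these values surface (the device ID is a
placeholder):

.. code-block:: c

    struct rte_event_dev_info info;

    rte_event_dev_info_get(0 /* dev_id */, &info);

    /* For the DLB2 PMD: the first two counts are load-balanced
     * resources; the third is directed port/queue pairs.
     */
    printf("LDB ports: %d, LDB queues: %d, DIR pairs: %d\n",
           info.max_event_ports, info.max_event_queues,
           info.max_single_link_event_port_queue_pairs);
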
When a scheduling domain is created in ``rte_event_dev_configure()``, the user
specifies ``nb_event_ports`` and ``nb_single_link_event_port_queues``, which
control the total number of ports (load-balanced and directed) and the number
of directed ports. Hence, the number of requested load-balanced ports is
``nb_event_ports - nb_single_link_event_port_queues``. The ``nb_event_queues``
field specifies the total number of queues (load-balanced and directed). The
number of directed queues comes from ``nb_single_link_event_port_queues``,
since directed ports and queues come in pairs.

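For example, a hypothetical domain with 64 load-balanced ports, 32
load-balanced queues, and 2 directed port/queue pairs would be requested as:

.. code-block:: c

    struct rte_event_dev_config config = {0};

    config.nb_event_ports = 66;                  /* 64 LDB + 2 DIR ports */
    config.nb_single_link_event_port_queues = 2; /* 2 DIR port/queue pairs */
    config.nb_event_queues = 34;                 /* 32 LDB + 2 DIR queues */
    /* ... nb_events_limit and the remaining fields are also set here ... */
    rte_event_dev_configure(0 /* dev_id */, &config);
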
When a port is set up, the ``RTE_EVENT_PORT_CFG_SINGLE_LINK`` flag determines
whether it should be configured as a directed (the flag is set) or a
load-balanced (the flag is unset) port. Similarly, the
``RTE_EVENT_QUEUE_CFG_SINGLE_LINK`` queue configuration flag controls
whether it is a directed or load-balanced queue.

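A sketch of both setup calls; ``dev_id``, ``port_id``, and ``queue_id`` are
placeholders for values the application already holds:

.. code-block:: c

    struct rte_event_port_conf port_conf;
    struct rte_event_queue_conf queue_conf;

    /* Directed port: set the single-link flag before setup. */
    rte_event_port_default_conf_get(dev_id, port_id, &port_conf);
    port_conf.event_port_cfg |= RTE_EVENT_PORT_CFG_SINGLE_LINK;
    rte_event_port_setup(dev_id, port_id, &port_conf);

    /* Directed queue: the analogous flag on the queue side. */
    rte_event_queue_default_conf_get(dev_id, queue_id, &queue_conf);
    queue_conf.event_queue_cfg |= RTE_EVENT_QUEUE_CFG_SINGLE_LINK;
    rte_event_queue_setup(dev_id, queue_id, &queue_conf);
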
Load-balanced ports can only be linked to load-balanced queues, and directed
ports can only be linked to directed queues. Furthermore, directed ports can
only be linked to a single directed queue (and vice versa), and that link
cannot change after the eventdev is started.

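For instance, a directed port is given its one link as below (the IDs are
hypothetical; passing ``NULL`` priorities selects the normal priority):

.. code-block:: c

    uint8_t dir_queue_id = 33; /* hypothetical directed queue ID */
    uint8_t dir_port_id = 65;  /* hypothetical directed port ID */

    /* Exactly one link, established before rte_event_dev_start(). */
    rte_event_port_link(0 /* dev_id */, dir_port_id, &dir_queue_id, NULL, 1);
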
The eventdev API does not have a directed scheduling type. To support directed
traffic, the DLB2 PMD detects when an event is being sent to a directed queue
and overrides its scheduling type. Note that the originally selected scheduling
type (atomic, ordered, or parallel) is not preserved, and an event's
``sched_type`` will be set to ``RTE_SCHED_TYPE_ATOMIC`` when it is dequeued
from a directed port.

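For example (a fragment; the IDs are placeholders, and the device is assumed
to be configured and started), an event enqueued as ordered to a directed
queue comes back atomic:

.. code-block:: c

    struct rte_event ev = {0};

    ev.op = RTE_EVENT_OP_NEW;
    ev.queue_id = dir_queue_id;             /* a directed queue */
    ev.sched_type = RTE_SCHED_TYPE_ORDERED; /* overridden by the PMD */
    rte_event_enqueue_burst(dev_id, port_id, &ev, 1);

    /* The consumer sees ev.sched_type == RTE_SCHED_TYPE_ATOMIC here. */
    rte_event_dequeue_burst(dev_id, dir_port_id, &ev, 1, 0);
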
Flow ID
~~~~~~~

The flow ID field is preserved in the event when it is scheduled in the
DLB2.

Reconfiguration
~~~~~~~~~~~~~~~

The eventdev API allows one to reconfigure a device, its ports, and its queues
by first stopping the device, calling the configuration function(s), then
restarting the device. The DLB2 does not support configuring an individual
queue or port without first reconfiguring the entire device, however, so there
are certain reconfiguration sequences that are valid in the eventdev API but
not supported by the PMD.

Specifically, the PMD supports the following configuration sequence:

1. Configure and start the device
2. Stop the device
3. (Optional) Reconfigure the device
4. (Optional) If step 3 is run:

   a. Setup queue(s). The reconfigured queue(s) lose their previous port links.
   b. Setup port(s). The reconfigured port(s) lose their previous queue links.

5. (Optional, only if steps 4a and 4b are run) Link port(s) to queue(s)
6. Restart the device. If the device is reconfigured in step 3 but one or more
   of its ports or queues are not, the PMD will apply their previous
   configuration (including port->queue links) at this time.

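Put together, one valid pass through this sequence looks like the following
sketch (the IDs and configuration structures are placeholders, and error
handling is omitted):

.. code-block:: c

    rte_event_dev_stop(dev_id);                               /* step 2 */
    rte_event_dev_configure(dev_id, &new_config);             /* step 3 */
    rte_event_queue_setup(dev_id, queue_id, &queue_conf);     /* step 4a */
    rte_event_port_setup(dev_id, port_id, &port_conf);        /* step 4b */
    rte_event_port_link(dev_id, port_id, &queue_id, NULL, 1); /* step 5 */
    rte_event_dev_start(dev_id);                              /* step 6 */
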
The PMD does not support the following configuration sequence:

1. Configure and start the device
2. Stop the device
3. Setup queue or setup port
4. Start the device

This sequence is not supported because the event device must be reconfigured
before its ports or queues can be.

Atomic Inflights Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the last stage prior to scheduling an atomic event to a CQ, DLB2 holds the
inflight event in a temporary buffer that is divided among load-balanced
queues. If a queue's atomic buffer storage fills up, this can result in
head-of-line blocking. For example:

- An LDB queue allocated N atomic buffer entries
- All N entries are filled with events from flow X, which is pinned to CQ 0.

Until CQ 0 releases 1+ events, no other atomic flows for that LDB queue can be
scheduled. The likelihood of this case depends on the eventdev configuration,
traffic behavior, event processing latency, potential for a worker to be
interrupted or otherwise delayed, etc.

By default, the PMD allocates 16 buffer entries for each load-balanced queue,
which provides an even division across all 128 queues but potentially wastes
buffer space (e.g. if not all queues are used, or aren't used for atomic
scheduling).

The PMD provides a devarg to override the default per-queue allocation. To
increase a vdev's per-queue atomic-inflight allocation to (for example) 64:

.. code-block:: console

    --vdev=dlb2_event,atm_inflights=64

@@ -85,6 +85,25 @@ dlb2_get_queue_depth(struct dlb2_eventdev *dlb2,
	return 0;
}

static void
dlb2_free_qe_mem(struct dlb2_port *qm_port)
{
	if (qm_port == NULL)
		return;

	rte_free(qm_port->qe4);
	qm_port->qe4 = NULL;

	rte_free(qm_port->int_arm_qe);
	qm_port->int_arm_qe = NULL;

	rte_free(qm_port->consume_qe);
	qm_port->consume_qe = NULL;

	rte_memzone_free(dlb2_port[qm_port->id][PORT_TYPE(qm_port)].mz);
	dlb2_port[qm_port->id][PORT_TYPE(qm_port)].mz = NULL;
}

/* override defaults with value(s) provided on command line */
static void
dlb2_init_queue_depth_thresholds(struct dlb2_eventdev *dlb2,
@@ -349,11 +368,305 @@ set_qid_depth_thresh(const char *key __rte_unused,
	return 0;
}

static void
dlb2_eventdev_info_get(struct rte_eventdev *dev,
		       struct rte_event_dev_info *dev_info)
{
	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
	int ret;

	ret = dlb2_hw_query_resources(dlb2);
	if (ret) {
		const struct rte_eventdev_data *data = dev->data;

		DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
			     ret, data->dev_id);
		/* fn is void, so fall through and return values set up in
		 * probe
		 */
	}

	/* Add num resources currently owned by this domain.
	 * These would become available if the scheduling domain were reset due
	 * to the application recalling eventdev_configure to *reconfigure* the
	 * domain.
	 */
	evdev_dlb2_default_info.max_event_ports += dlb2->num_ldb_ports;
	evdev_dlb2_default_info.max_event_queues += dlb2->num_ldb_queues;
	evdev_dlb2_default_info.max_num_events += dlb2->max_ldb_credits;

	evdev_dlb2_default_info.max_event_queues =
		RTE_MIN(evdev_dlb2_default_info.max_event_queues,
			RTE_EVENT_MAX_QUEUES_PER_DEV);

	evdev_dlb2_default_info.max_num_events =
		RTE_MIN(evdev_dlb2_default_info.max_num_events,
			dlb2->max_num_events_override);

	*dev_info = evdev_dlb2_default_info;
}

static int
dlb2_hw_create_sched_domain(struct dlb2_hw_dev *handle,
			    const struct dlb2_hw_rsrcs *resources_asked)
{
	int ret = 0;
	struct dlb2_create_sched_domain_args *cfg;

	if (resources_asked == NULL) {
		DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter\n");
		ret = -EINVAL; /* negative errno, so the caller's "< 0" check fires */
		goto error_exit;
	}

	/* Map generic qm resources to dlb2 resources */
	cfg = &handle->cfg.resources;

	/* DIR ports and queues */

	cfg->num_dir_ports = resources_asked->num_dir_ports;

	cfg->num_dir_credits = resources_asked->num_dir_credits;

	/* LDB queues */

	cfg->num_ldb_queues = resources_asked->num_ldb_queues;

	/* LDB ports */

	cfg->cos_strict = 0; /* Best effort */
	cfg->num_cos_ldb_ports[0] = 0;
	cfg->num_cos_ldb_ports[1] = 0;
	cfg->num_cos_ldb_ports[2] = 0;
	cfg->num_cos_ldb_ports[3] = 0;

	switch (handle->cos_id) {
	case DLB2_COS_0:
		cfg->num_ldb_ports = 0; /* no don't care ports */
		cfg->num_cos_ldb_ports[0] = resources_asked->num_ldb_ports;
		break;
	case DLB2_COS_1:
		cfg->num_ldb_ports = 0; /* no don't care ports */
		cfg->num_cos_ldb_ports[1] = resources_asked->num_ldb_ports;
		break;
	case DLB2_COS_2:
		cfg->num_ldb_ports = 0; /* no don't care ports */
		cfg->num_cos_ldb_ports[2] = resources_asked->num_ldb_ports;
		break;
	case DLB2_COS_3:
		cfg->num_ldb_ports = 0; /* no don't care ports */
		cfg->num_cos_ldb_ports[3] = resources_asked->num_ldb_ports;
		break;
	case DLB2_COS_DEFAULT:
		/* all ldb ports are don't care ports from a cos perspective */
		cfg->num_ldb_ports = resources_asked->num_ldb_ports;
		break;
	}

	cfg->num_ldb_credits = resources_asked->num_ldb_credits;

	cfg->num_atomic_inflights =
		DLB2_NUM_ATOMIC_INFLIGHTS_PER_QUEUE *
		cfg->num_ldb_queues;

	cfg->num_hist_list_entries = resources_asked->num_ldb_ports *
		DLB2_NUM_HIST_LIST_ENTRIES_PER_LDB_PORT;

	DLB2_LOG_DBG("sched domain create - ldb_qs=%d, ldb_ports=%d, dir_ports=%d, atomic_inflights=%d, hist_list_entries=%d, ldb_credits=%d, dir_credits=%d\n",
		     cfg->num_ldb_queues,
		     resources_asked->num_ldb_ports,
		     cfg->num_dir_ports,
		     cfg->num_atomic_inflights,
		     cfg->num_hist_list_entries,
		     cfg->num_ldb_credits,
		     cfg->num_dir_credits);

	/* Configure the QM */

	ret = dlb2_iface_sched_domain_create(handle, cfg);
	if (ret < 0) {
		DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s\n",
			     ret,
			     dlb2_error_strings[cfg->response.status]);

		goto error_exit;
	}

	handle->domain_id = cfg->response.id;
	handle->cfg.configured = true;

error_exit:

	return ret;
}

static void
dlb2_hw_reset_sched_domain(const struct rte_eventdev *dev, bool reconfig)
{
	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
	enum dlb2_configuration_state config_state;
	int i, j;

	dlb2_iface_domain_reset(dlb2);

	/* Free all dynamically allocated port memory */
	for (i = 0; i < dlb2->num_ports; i++)
		dlb2_free_qe_mem(&dlb2->ev_ports[i].qm_port);

	/* If reconfiguring, mark the device's queues and ports as "previously
	 * configured." If the user doesn't reconfigure them, the PMD will
	 * reapply their previous configuration when the device is started.
	 */
	config_state = (reconfig) ? DLB2_PREV_CONFIGURED :
		DLB2_NOT_CONFIGURED;

	for (i = 0; i < dlb2->num_ports; i++) {
		dlb2->ev_ports[i].qm_port.config_state = config_state;
		/* Reset setup_done so ports can be reconfigured */
		dlb2->ev_ports[i].setup_done = false;
		for (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)
			dlb2->ev_ports[i].link[j].mapped = false;
	}

	for (i = 0; i < dlb2->num_queues; i++)
		dlb2->ev_queues[i].qm_queue.config_state = config_state;

	for (i = 0; i < DLB2_MAX_NUM_QUEUES; i++)
		dlb2->ev_queues[i].setup_done = false;

	dlb2->num_ports = 0;
	dlb2->num_ldb_ports = 0;
	dlb2->num_dir_ports = 0;
	dlb2->num_queues = 0;
	dlb2->num_ldb_queues = 0;
	dlb2->num_dir_queues = 0;
	dlb2->configured = false;
}

/* Note: 1 QM instance per QM device, QM instance/device == event device */
static int
dlb2_eventdev_configure(const struct rte_eventdev *dev)
{
	struct dlb2_eventdev *dlb2 = dlb2_pmd_priv(dev);
	struct dlb2_hw_dev *handle = &dlb2->qm_instance;
	struct dlb2_hw_rsrcs *rsrcs = &handle->info.hw_rsrc_max;
	const struct rte_eventdev_data *data = dev->data;
	const struct rte_event_dev_config *config = &data->dev_conf;
	int ret;

	/* If this eventdev is already configured, we must release the current
	 * scheduling domain before attempting to configure a new one.
	 */
	if (dlb2->configured) {
		dlb2_hw_reset_sched_domain(dev, true);

		ret = dlb2_hw_query_resources(dlb2);
		if (ret) {
			DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
				     ret, data->dev_id);
			return ret;
		}
	}

	if (config->nb_event_queues > rsrcs->num_queues) {
		DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).\n",
			     config->nb_event_queues,
			     rsrcs->num_queues);
		return -EINVAL;
	}
	if (config->nb_event_ports > (rsrcs->num_ldb_ports
				      + rsrcs->num_dir_ports)) {
		DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).\n",
			     config->nb_event_ports,
			     (rsrcs->num_ldb_ports + rsrcs->num_dir_ports));
		return -EINVAL;
	}
	if (config->nb_events_limit > rsrcs->nb_events_limit) {
		DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).\n",
			     config->nb_events_limit,
			     rsrcs->nb_events_limit);
		return -EINVAL;
	}

	if (config->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT)
		dlb2->global_dequeue_wait = false;
	else {
		uint32_t timeout32;

		dlb2->global_dequeue_wait = true;

		/* note size mismatch of timeout vals in eventdev lib. */
		timeout32 = config->dequeue_timeout_ns;

		dlb2->global_dequeue_wait_ticks =
			timeout32 * (rte_get_timer_hz() / 1E9);
	}

	/* Does this platform support umonitor/umwait? */
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_WAITPKG)) {
		if (RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 0 &&
		    RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE != 1) {
			DLB2_LOG_ERR("invalid value (%d) for RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE, must be 0 or 1.\n",
				     RTE_LIBRTE_PMD_DLB2_UMWAIT_CTL_STATE);
			return -EINVAL;
		}
		dlb2->umwait_allowed = true;
	}

	rsrcs->num_dir_ports = config->nb_single_link_event_port_queues;
	rsrcs->num_ldb_ports = config->nb_event_ports - rsrcs->num_dir_ports;
	/* 1 dir queue per dir port */
	rsrcs->num_ldb_queues = config->nb_event_queues - rsrcs->num_dir_ports;

	/* Scale down nb_events_limit by 4 for directed credits, since there
	 * are 4x as many load-balanced credits.
	 */
	rsrcs->num_ldb_credits = 0;
	rsrcs->num_dir_credits = 0;

	if (rsrcs->num_ldb_queues)
		rsrcs->num_ldb_credits = config->nb_events_limit;
	if (rsrcs->num_dir_ports)
		rsrcs->num_dir_credits = config->nb_events_limit / 4;
	if (dlb2->num_dir_credits_override != -1)
		rsrcs->num_dir_credits = dlb2->num_dir_credits_override;

	if (dlb2_hw_create_sched_domain(handle, rsrcs) < 0) {
		DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
		return -ENODEV;
	}

	dlb2->new_event_limit = config->nb_events_limit;
	__atomic_store_n(&dlb2->inflights, 0, __ATOMIC_SEQ_CST);

	/* Save number of ports/queues for this event dev */
	dlb2->num_ports = config->nb_event_ports;
	dlb2->num_queues = config->nb_event_queues;
	dlb2->num_dir_ports = rsrcs->num_dir_ports;
	dlb2->num_ldb_ports = dlb2->num_ports - dlb2->num_dir_ports;
	dlb2->num_ldb_queues = dlb2->num_queues - dlb2->num_dir_ports;
	dlb2->num_dir_queues = dlb2->num_dir_ports;
	dlb2->ldb_credit_pool = rsrcs->num_ldb_credits;
	dlb2->max_ldb_credits = rsrcs->num_ldb_credits;
	dlb2->dir_credit_pool = rsrcs->num_dir_credits;
	dlb2->max_dir_credits = rsrcs->num_dir_credits;

	dlb2->configured = true;

	return 0;
}

static void
dlb2_entry_points_init(struct rte_eventdev *dev)
{
	/* Expose PMD's eventdev interface */
	static struct rte_eventdev_ops dlb2_eventdev_entry_ops = {
		.dev_infos_get = dlb2_eventdev_info_get,
		.dev_configure = dlb2_eventdev_configure,
		.dump = dlb2_eventdev_dump,
		.xstats_get = dlb2_eventdev_xstats_get,
		.xstats_get_names = dlb2_eventdev_xstats_get_names,
@@ -26,3 +26,8 @@ int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,

int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
				    struct dlb2_get_num_resources_args *rsrcs);

int (*dlb2_iface_sched_domain_create)(struct dlb2_hw_dev *handle,
				      struct dlb2_create_sched_domain_args *args);

void (*dlb2_iface_domain_reset)(struct dlb2_eventdev *dlb2);

@@ -26,4 +26,8 @@ extern int (*dlb2_iface_get_cq_poll_mode)(struct dlb2_hw_dev *handle,

extern int (*dlb2_iface_get_num_resources)(struct dlb2_hw_dev *handle,
					   struct dlb2_get_num_resources_args *rsrcs);

extern int (*dlb2_iface_sched_domain_create)(struct dlb2_hw_dev *handle,
					     struct dlb2_create_sched_domain_args *args);

extern void (*dlb2_iface_domain_reset)(struct dlb2_eventdev *dlb2);

#endif /* _DLB2_IFACE_H_ */

File diff suppressed because it is too large
@@ -598,3 +598,18 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)

	return 0;
}

int
dlb2_pf_create_sched_domain(struct dlb2_hw *hw,
			    struct dlb2_create_sched_domain_args *args,
			    struct dlb2_cmd_response *resp)
{
	return dlb2_hw_create_sched_domain(hw, args, resp, NOT_VF_REQ,
					   PF_ID_ZERO);
}

int
dlb2_pf_reset_domain(struct dlb2_hw *hw, u32 id)
{
	return dlb2_reset_domain(hw, id, NOT_VF_REQ, PF_ID_ZERO);
}

@@ -108,15 +108,59 @@ dlb2_pf_get_cq_poll_mode(struct dlb2_hw_dev *handle,
	return 0;
}

static int
dlb2_pf_sched_domain_create(struct dlb2_hw_dev *handle,
			    struct dlb2_create_sched_domain_args *arg)
{
	struct dlb2_dev *dlb2_dev = (struct dlb2_dev *)handle->pf_dev;
	struct dlb2_cmd_response response = {0};
	int ret;

	DLB2_INFO(dlb2_dev->dlb2_device, "Entering %s()\n", __func__);

	if (dlb2_dev->domain_reset_failed) {
		response.status = DLB2_ST_DOMAIN_RESET_FAILED;
		ret = -EINVAL;
		goto done;
	}

	ret = dlb2_pf_create_sched_domain(&dlb2_dev->hw, arg, &response);
	if (ret)
		goto done;

done:

	arg->response = response;

	DLB2_INFO(dlb2_dev->dlb2_device, "Exiting %s() with ret=%d\n",
		  __func__, ret);

	return ret;
}

static void
dlb2_pf_domain_reset(struct dlb2_eventdev *dlb2)
{
	struct dlb2_dev *dlb2_dev;
	int ret;

	dlb2_dev = (struct dlb2_dev *)dlb2->qm_instance.pf_dev;
	ret = dlb2_pf_reset_domain(&dlb2_dev->hw, dlb2->qm_instance.domain_id);
	if (ret)
		DLB2_LOG_ERR("dlb2_pf_reset_domain err %d", ret);
}

static void
dlb2_pf_iface_fn_ptrs_init(void)
{
	dlb2_iface_low_level_io_init = dlb2_pf_low_level_io_init;
	dlb2_iface_open = dlb2_pf_open;
	dlb2_iface_domain_reset = dlb2_pf_domain_reset;
	dlb2_iface_get_device_version = dlb2_pf_get_device_version;
	dlb2_iface_hardware_init = dlb2_pf_hardware_init;
	dlb2_iface_get_num_resources = dlb2_pf_get_num_resources;
	dlb2_iface_get_cq_poll_mode = dlb2_pf_get_cq_poll_mode;
	dlb2_iface_sched_domain_create = dlb2_pf_sched_domain_create;
}

/* PCI DEV HOOKS */