Add asynchronous command support to the pass(4) driver, and the new
camdd(8) utility.

CCBs may be queued to the driver via the new CAMIOQUEUE ioctl, and
completed CCBs may be retrieved via the CAMIOGET ioctl.  User
processes can use poll(2) or kevent(2) to get notification when
I/O has completed.
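
For illustration, a minimal sketch of the intended userland sequence
(link with -lcam; the TEST UNIT READY command, the device name, and
the missing error handling are just for brevity):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/ioctl.h>
    #include <fcntl.h>
    #include <cam/cam.h>
    #include <cam/cam_ccb.h>
    #include <cam/scsi/scsi_all.h>
    #include <cam/scsi/scsi_message.h>
    #include <cam/scsi/scsi_pass.h>
    #include <camlib.h>

    int
    main(void)
    {
        struct cam_device *dev = cam_open_device("/dev/pass0", O_RDWR);
        union ccb *ccb = cam_getccb(dev);
        union ccb done_ccb;
        struct kevent kev;
        int kq = kqueue();

        /* A TEST UNIT READY, just to have something to queue. */
        scsi_test_unit_ready(&ccb->csio, /*retries*/ 1, /*cbfcnp*/ NULL,
            MSG_SIMPLE_Q_TAG, SSD_FULL_SIZE, /*timeout*/ 5000);

        /* Queue the CCB; the driver keeps its own copy of it. */
        ioctl(dev->fd, CAMIOQUEUE, ccb);

        /* Sleep until the done queue is non-empty. */
        EV_SET(&kev, dev->fd, EVFILT_READ, EV_ADD | EV_ENABLE, 0, 0, NULL);
        kevent(kq, &kev, 1, &kev, 1, NULL);

        /* Fetch the completed CCB and examine its status. */
        ioctl(dev->fd, CAMIOGET, &done_ccb);
        return ((done_ccb.ccb_h.status & CAM_STATUS_MASK) == CAM_REQ_CMP ?
            0 : 1);
    }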

While the existing CAMIOCOMMAND blocking ioctl interface only
supports user virtual data pointers in a CCB (generally only
one per CCB), the new CAMIOQUEUE ioctl supports user virtual and
physical address pointers, as well as user virtual and physical
scatter/gather lists.  This allows user applications to have more
flexibility in their data handling operations.
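
For example, a caller could describe two user buffers with a virtual
scatter/gather list roughly like this (a sketch; the helper name and
the two-element list are arbitrary):

    #include <stdint.h>
    #include <sys/types.h>
    #include <machine/bus.h>        /* bus_dma_segment_t, bus_addr_t */
    #include <cam/cam_ccb.h>

    /* Point a CCB at two user buffers as one CAM_DATA_SG transfer. */
    static void
    ccb_set_sg(union ccb *ccb, bus_dma_segment_t sg[2], void *buf1,
        size_t len1, void *buf2, size_t len2)
    {
        sg[0].ds_addr = (bus_addr_t)(uintptr_t)buf1;
        sg[0].ds_len = len1;
        sg[1].ds_addr = (bus_addr_t)(uintptr_t)buf2;
        sg[1].ds_len = len2;

        ccb->ccb_h.flags &= ~CAM_DATA_MASK;
        ccb->ccb_h.flags |= CAM_DATA_SG;
        ccb->csio.data_ptr = (uint8_t *)sg;
        ccb->csio.dxfer_len = len1 + len2;
        ccb->csio.sglist_cnt = 2;
    }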

Kernel memory for data transferred via the queued interface is
allocated from the zone allocator in MAXPHYS-sized chunks, and user
data is copied in and out.  This is likely faster than the
vmapbuf()/vunmapbuf() method used by the CAMIOCOMMAND ioctl in
configurations with many processors, since the mapping and unmapping
operations cause more TLB shootdowns there, but it may not be as fast
as running with unmapped I/O.
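
A rough sketch of that model (the zone and function names here are
illustrative, not the driver's actual identifiers):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <vm/uma.h>

    static uma_zone_t io_zone;

    /* Done once per device, on the first CAMIOQUEUE call. */
    static void
    create_io_zone(void)
    {
        io_zone = uma_zcreate("pass_io", MAXPHYS, NULL, NULL, NULL,
            NULL, /*align*/ 0, /*flags*/ 0);
    }

    /* Per write request: allocate a kernel buffer, copy user data in. */
    static int
    setup_io_buf(void *user_ptr, size_t len, void **kbufp)
    {
        *kbufp = uma_zalloc(io_zone, M_WAITOK);
        return (copyin(user_ptr, *kbufp, len));
    }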

The new memory handling model for user requests also allows
applications to send CCBs with request sizes that are larger than
MAXPHYS.  The pass(4) driver now limits queued requests to the I/O
size listed by the SIM driver in the maxio field in the Path
Inquiry (XPT_PATH_INQ) CCB.
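
An application can discover that limit before sizing its requests; a
sketch (error checking omitted, function name illustrative):

    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <cam/cam.h>
    #include <cam/cam_ccb.h>
    #include <cam/scsi/scsi_pass.h>
    #include <camlib.h>

    /* Ask the SIM for its maximum I/O size through the pass(4) device. */
    static uint32_t
    get_maxio(struct cam_device *dev)
    {
        union ccb ccb;

        memset(&ccb, 0, sizeof(ccb));
        ccb.ccb_h.func_code = XPT_PATH_INQ;
        ioctl(dev->fd, CAMIOCOMMAND, &ccb);

        /* A maxio of 0 means the SIM did not report a limit. */
        return (ccb.cpi.maxio);
    }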

There are some things that would be good to add:

1. Come up with a way to do unmapped I/O on multiple buffers.
   Currently the unmapped I/O interface operates on a struct bio,
   which includes only one address and length.  It would be nice
   to be able to send an unmapped scatter/gather list down to
   busdma.  This would allow eliminating the copy we currently do
   for data.

2. Add an ioctl to list currently outstanding CCBs in the various
   queues.

3. Add an ioctl to cancel a request, or use the XPT_ABORT CCB to do
   that.

4. Test physical address support.  Virtual pointers and virtual
   scatter/gather lists have been tested, but I have not yet tested
   physical addresses or physical scatter/gather lists.

5. Investigate multiple queue support.  At the moment there is one
   queue of commands per pass(4) device.  If multiple processes
   open the device, they will submit I/O into the same queue and
   get events for the same completions.  This is probably the right
   model for most applications, but it is something that could be
   changed later on.

Also, add a new utility, camdd(8), that uses the asynchronous pass(4)
driver interface.

This utility is intended to be a basic data transfer/copy utility,
a simple benchmark utility, and an example of how to use the
asynchronous pass(4) interface.

It can copy data to and from pass(4) devices using any target queue
depth, starting offset and blocksize for the input and output devices.
It currently only supports SCSI devices, but could be easily extended
to support ATA devices.

It can also copy data to and from regular files, block devices, tape
devices, pipes, stdin, and stdout.  It does not support queueing
multiple commands to any of those targets, since it uses the standard
read(2)/write(2)/writev(2)/readv(2) system calls.

The I/O is done by two threads, one for the reader and one for the
writer.  The reader thread sends completed read requests to the
writer thread in strictly sequential order, even if they complete
out of order.  That could be modified later on for random I/O patterns
or slightly out of order I/O.
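
A sketch of that hand-off (the types and names are illustrative, not
the actual camdd(8) code):

    #include <stdint.h>
    #include <sys/queue.h>

    struct cbuf {
        uint64_t offset;                /* device offset of this read */
        uint64_t len;
        TAILQ_ENTRY(cbuf) links;
    };
    TAILQ_HEAD(cbuf_list, cbuf);

    /*
     * Called when a read completes, possibly out of order.  Buffers
     * are held on 'pending' (sorted by offset) until the one at
     * *next_offset arrives, then released to the writer in order.
     */
    static void
    read_complete(struct cbuf_list *pending, struct cbuf *done,
        uint64_t *next_offset, void (*to_writer)(struct cbuf *))
    {
        struct cbuf *cur, *prev = NULL;

        TAILQ_FOREACH(cur, pending, links) {
            if (cur->offset > done->offset)
                break;
            prev = cur;
        }
        if (prev == NULL)
            TAILQ_INSERT_HEAD(pending, done, links);
        else
            TAILQ_INSERT_AFTER(pending, prev, done, links);

        while ((cur = TAILQ_FIRST(pending)) != NULL &&
            cur->offset == *next_offset) {
            TAILQ_REMOVE(pending, cur, links);
            *next_offset += cur->len;
            to_writer(cur);
        }
    }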

camdd(8) uses kqueue(2)/kevent(2) to get I/O completion events from
the pass(4) driver and also to send request notifications internally.

For pass(4) devices, camdd(8) uses a single buffer (CAM_DATA_VADDR)
per CAM CCB on the reading side, and a scatter/gather list
(CAM_DATA_SG) on the writing side.  In addition to testing both
interfaces, this makes any potential reblocking of I/O easier.  No
data is copied between the reader and the writer, but rather the
reader's buffers are split into multiple I/O requests or combined
into a single I/O request depending on the input and output blocksize.

For the file I/O path, camdd(8) also uses a single buffer (read(2),
write(2), pread(2) or pwrite(2)) on reads, and a scatter/gather list
(readv(2), writev(2), preadv(2), pwritev(2)) on writes.
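
For example, combining several reader buffers into one larger write
on the file path looks roughly like this (a sketch that assumes at
most 16 equal-sized input buffers per output block):

    #include <sys/types.h>
    #include <sys/uio.h>
    #include <unistd.h>

    /*
     * Gather nbufs reader buffers of ibs bytes each into one
     * writev(2) call, so the output blocksize can be a multiple of
     * the input blocksize without copying any data.
     */
    static ssize_t
    write_block(int fd, void **bufs, int nbufs, size_t ibs)
    {
        struct iovec iov[16];   /* assumes nbufs <= 16 in this sketch */
        int i;

        for (i = 0; i < nbufs; i++) {
            iov[i].iov_base = bufs[i];
            iov[i].iov_len = ibs;
        }
        return (writev(fd, iov, nbufs));
    }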

Things that would be nice to do for camdd(8) eventually:

1.  Add support for I/O pattern generation.  Patterns like all
    zeros, all ones, LBA-based patterns, random patterns, etc.  Right
    now you can always use /dev/zero, /dev/random, etc.

2.  Add support for a "sink" mode, so we do only reads with no
    writes.  Right now, you can use /dev/null.

3.  Add support for automatic queue depth probing, so that we can
    figure out the right queue depth on the input and output side
    for maximum throughput.  At the moment it defaults to 6.

4.  Add support for SATA device passthrough I/O.

5.  Add support for random LBAs and/or lengths on the input and
    output sides.

6.  Track average per-I/O latency and busy time.  The busy time
    and latency could also feed into the automatic queue depth
    determination.

sys/cam/scsi/scsi_pass.h:
	Define two new ioctls, CAMIOQUEUE and CAMIOGET, that queue
	and fetch asynchronous CAM CCBs respectively.

	Although these ioctls do not have a declared argument, they
	both take a union ccb pointer.  If we declare a size here,
	the ioctl code in sys/kern/sys_generic.c will malloc and free
	a buffer for either the CCB or the CCB pointer (depending on
	how it is declared).  Since we have to keep a copy of the
	CCB (which is fairly large) anyway, having the ioctl malloc
	and free a CCB for each call is wasteful.

sys/cam/scsi/scsi_pass.c:
	Add asynchronous CCB support.

	Add two new ioctls, CAMIOQUEUE and CAMIOGET.

	CAMIOQUEUE adds a CCB to the incoming queue.  The CCB is
	executed immediately (and moved to the active queue) if it
	is an immediate CCB, but otherwise it will be executed
	in passstart() when a CCB is available from the transport layer.

	When CCBs are completed (immediately for immediate CCBs, or in
	passdone() for queued CCBs), they are put on the done queue.

	If we get the final close on the device before all pending
	I/O is complete, all active I/O is moved to the abandoned
	queue and we increment the peripheral reference count so
	that the peripheral driver instance doesn't go away before
	all pending I/O is done.

	The new passcreatezone() function is called on the first
	call to the CAMIOQUEUE ioctl on a given device to allocate
	the UMA zones for I/O requests and S/G list buffers.  This
	may be good to move off to a taskqueue at some point.

	The new passmemsetup() function allocates memory and
	scatter/gather lists to hold the user's data, and copies
	in any data that needs to be written.  For virtual pointers
	(CAM_DATA_VADDR), the kernel buffer is malloced from the
	new pass(4) driver malloc bucket.  For virtual
	scatter/gather lists (CAM_DATA_SG), buffers are allocated
	from a new per-pass(4) UMA zone in MAXPHYS-sized chunks.
	Physical pointers are passed in unchanged.  We have support
	for up to 16 scatter/gather segments (for the user and
	kernel S/G lists) in the default struct pass_io_req, so
	requests with longer S/G lists require an extra kernel malloc.

	The new passcopysglist() function copies a user scatter/gather
	list to a kernel scatter/gather list.  The number of elements
	in each list may be different, but (obviously) the amount of data
	stored has to be identical.
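
	The core of such a copy looks roughly like the sketch below
	(written userland-style with memcpy(); the real function copies
	between user and kernel space, so this is not the driver code):

	    #include <stdint.h>
	    #include <string.h>
	    #include <sys/types.h>
	    #include <machine/bus.h>

	    static void
	    copy_sglist(bus_dma_segment_t *src, int src_cnt,
	        bus_dma_segment_t *dst, int dst_cnt)
	    {
	        size_t soff = 0, doff = 0, len;

	        while (src_cnt > 0 && dst_cnt > 0) {
	            /* Copy the largest run both current segments allow. */
	            len = src->ds_len - soff;
	            if (len > dst->ds_len - doff)
	                len = dst->ds_len - doff;
	            memcpy((char *)(uintptr_t)dst->ds_addr + doff,
	                (char *)(uintptr_t)src->ds_addr + soff, len);
	            soff += len;
	            doff += len;
	            if (soff == src->ds_len) {
	                soff = 0;
	                src++;
	                src_cnt--;
	            }
	            if (doff == dst->ds_len) {
	                doff = 0;
	                dst++;
	                dst_cnt--;
	            }
	        }
	    }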

	The new passmemdone() function copies data out for the
	CAM_DATA_VADDR and CAM_DATA_SG cases.

	The new passiocleanup() function restores data pointers in
	user CCBs and frees memory.

	Add new functions to support kqueue(2)/kevent(2):

	passreadfilt() tells kevent whether or not the done
	queue is empty.

	passkqfilter() adds a knote to our list.

	passreadfiltdetach() removes a knote from our list.

	Add a new function, passpoll(), for poll(2)/select(2)
	to use.

	Add devstat(9) support for the queued CCB path.

sys/cam/ata/ata_da.c:
	Add support for the BIO_VLIST bio type.

sys/cam/cam_ccb.h:
	Add a new enumeration for the xflags field in the CCB header.
	(This doesn't change the CCB header, just adds an enumeration to
	use.)

sys/cam/cam_xpt.c:
	Add a new function, xpt_setup_ccb_flags(), that allows specifying
	CCB flags.

sys/cam/cam_xpt.h:
	Add a prototype for xpt_setup_ccb_flags().

sys/cam/scsi/scsi_da.c:
	Add support for BIO_VLIST.

sys/dev/md/md.c:
	Add BIO_VLIST support to md(4).

sys/geom/geom_disk.c:
	Add BIO_VLIST support to the GEOM disk class.  Re-factor the I/O size
	limiting code in g_disk_start() a bit.

sys/kern/subr_bus_dma.c:
	Change _bus_dmamap_load_vlist() to take a starting offset and
	length.

	Add a new function, _bus_dmamap_load_pages(), that will load a list
	of physical pages starting at an offset.

	Update _bus_dmamap_load_bio() to allow loading BIO_VLIST bios.
	Allow unmapped I/O to start at an offset.

sys/kern/subr_uio.c:
	Add two new functions, physcopyin_vlist() and physcopyout_vlist().

sys/pc98/include/bus.h:
	Guard kernel-only parts of the pc98 machine/bus.h header with
	#ifdef _KERNEL.

	This allows userland programs to include <machine/bus.h> to get the
	definition of bus_addr_t and bus_size_t.

sys/sys/bio.h:
	Add a new bio flag, BIO_VLIST.

sys/sys/uio.h:
	Add prototypes for physcopyin_vlist() and physcopyout_vlist().

share/man/man4/pass.4:
	Document the CAMIOQUEUE and CAMIOGET ioctls.

usr.sbin/Makefile:
	Add camdd.

usr.sbin/camdd/Makefile:
	Add a makefile for camdd(8).

usr.sbin/camdd/camdd.8:
	Man page for camdd(8).

usr.sbin/camdd/camdd.c:
	The new camdd(8) utility.

Sponsored by:	Spectra Logic
MFC after:	1 week

share/man/man4/pass.4

@ -27,7 +27,7 @@
.\"
.\" $FreeBSD$
.\"
.Dd October 10, 1998
.Dd March 17, 2015
.Dt PASS 4
.Os
.Sh NAME
@ -53,9 +53,13 @@ The
.Nm
driver attaches to every
.Tn SCSI
and
.Tn ATA
device found in the system.
Since it attaches to every device, it provides a generic means of accessing
.Tn SCSI
and
.Tn ATA
devices, and allows the user to access devices which have no
"standard" peripheral driver associated with them.
.Sh KERNEL CONFIGURATION
@ -65,10 +69,12 @@ device in the kernel;
.Nm
devices are automatically allocated as
.Tn SCSI
and
.Tn ATA
devices are found.
.Sh IOCTLS
.Bl -tag -width 012345678901234
.It CAMIOCOMMAND
.Bl -tag -width 5n
.It CAMIOCOMMAND union ccb *
This ioctl takes most kinds of CAM CCBs and passes them through to the CAM
transport layer for action.
Note that some CCB types are not allowed
@ -79,7 +85,7 @@ Some examples of xpt-only CCBs are XPT_SCAN_BUS,
XPT_DEV_MATCH, XPT_RESET_BUS, XPT_SCAN_LUN, XPT_ENG_INQ, and XPT_ENG_EXEC.
These CCB types have various attributes that make it illogical or
impossible to service them through the passthrough interface.
.It CAMGETPASSTHRU
.It CAMGETPASSTHRU union ccb *
This ioctl takes an XPT_GDEVLIST CCB, and returns the passthrough device
corresponding to the device in question.
Although this ioctl is available through the
@ -90,6 +96,109 @@ ioctl.
It is probably more useful to issue this ioctl through the
.Xr xpt 4
device.
.It CAMIOQUEUE union ccb *
Queue a CCB to the
.Xr pass 4
driver to be executed asynchronously.
The caller may use
.Xr select 2 ,
.Xr poll 2
or
.Xr kevent 2
to receive notification when the CCB has completed.
.Pp
This ioctl takes most CAM CCBs, but some CCB types are not allowed through
the pass device, and must be sent through the
.Xr xpt 4
device instead.
Some examples of xpt-only CCBs are XPT_SCAN_BUS,
XPT_DEV_MATCH, XPT_RESET_BUS, XPT_SCAN_LUN, XPT_ENG_INQ, and XPT_ENG_EXEC.
These CCB types have various attributes that make it illogical or
impossible to service them through the passthrough interface.
.Pp
Although the
.Dv CAMIOQUEUE
ioctl is not defined to take an argument, it does require a
pointer to a union ccb.
It is not defined to take an argument to avoid an extra malloc and copy
inside the generic
.Xr ioctl 2
handler.
.Pp
The completed CCB will be returned via the
.Dv CAMIOGET
ioctl.
An error will only be returned from the
.Dv CAMIOQUEUE
ioctl if there is an error allocating memory for the request or copying
memory from userland.
All other errors will be reported as standard CAM CCB status errors.
Since the CCB is not copied back to the user process from the pass driver
in the
.Dv CAMIOQUEUE
ioctl, the user's passed-in CCB will not be modified.
This is the case even with immediate CCBs.
Instead, the completed CCB must be retrieved via the
.Dv CAMIOGET
ioctl and the status examined.
.Pp
Multiple CCBs may be queued via the
.Dv CAMIOQUEUE
ioctl at any given time, and they may complete in a different order than
the order that they were submitted.
The caller must take steps to identify CCBs that are queued and completed.
The
.Dv periph_priv
structure inside struct ccb_hdr is available for userland use with the
.Dv CAMIOQUEUE
and
.Dv CAMIOGET
ioctls, and will be preserved across calls.
Also, the periph_links linked list pointers inside struct ccb_hdr are
available for userland use with the
.Dv CAMIOQUEUE
and
.Dv CAMIOGET
ioctls and will be preserved across calls.
.It CAMIOGET union ccb *
Retrieve completed CAM CCBs queued via the
.Dv CAMIOQUEUE
ioctl.
An error will only be returned from the
.Dv CAMIOGET
ioctl if the
.Xr pass 4
driver fails to copy data to the user process or if there are no completed
CCBs available to retrieve.
If no CCBs are available to retrieve,
errno will be set to
.Dv ENOENT .
.Pp
All other errors will be reported as standard CAM CCB status errors.
.Pp
Although the
.Dv CAMIOGET
ioctl is not defined to take an argument, it does require a
pointer to a union ccb.
It is not defined to take an argument to avoid an extra malloc and copy
inside the generic
.Xr ioctl 2
handler.
.Pp
The pass driver will report via
.Xr select 2 ,
.Xr poll 2
or
.Xr kevent 2
when a CCB has completed.
One CCB may be retrieved per
.Dv CAMIOGET
call.
CCBs may be returned in an order different than the order they were
submitted.
So the caller should use the
.Dv periph_priv
area inside the CCB header to store pointers to identifying information.
.El
.Sh FILES
.Bl -tag -width /dev/passn -compact
@ -103,18 +212,21 @@ CAM subsystem.
.Sh DIAGNOSTICS
None.
.Sh SEE ALSO
.Xr kqueue 2 ,
.Xr poll 2 ,
.Xr select 2 ,
.Xr cam 3 ,
.Xr cam_cdbparse 3 ,
.Xr cam 4 ,
.Xr cd 4 ,
.Xr ctl 4 ,
.Xr da 4 ,
.Xr sa 4 ,
.Xr xpt 4 ,
.Xr camcontrol 8
.Xr camcontrol 8 ,
.Xr camdd 8
.Sh HISTORY
The CAM passthrough driver first appeared in
.Fx 3.0 .
.Sh AUTHORS
.An Kenneth Merry Aq Mt ken@FreeBSD.org
.Sh BUGS
It might be nice to have a way to asynchronously send CCBs through the
passthrough driver.
This would probably require some sort of read/write
interface or an asynchronous ioctl interface.

sys/cam/ata/ata_da.c

@ -1543,12 +1543,26 @@ adastart(struct cam_periph *periph, union ccb *start_ccb)
}
switch (bp->bio_cmd) {
case BIO_WRITE:
softc->flags |= ADA_FLAG_DIRTY;
/* FALLTHROUGH */
case BIO_READ:
{
uint64_t lba = bp->bio_pblkno;
uint16_t count = bp->bio_bcount / softc->params.secsize;
void *data_ptr;
int rw_op;
if (bp->bio_cmd == BIO_WRITE) {
softc->flags |= ADA_FLAG_DIRTY;
rw_op = CAM_DIR_OUT;
} else {
rw_op = CAM_DIR_IN;
}
data_ptr = bp->bio_data;
if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
rw_op |= CAM_DATA_BIO;
data_ptr = bp;
}
#ifdef ADA_TEST_FAILURE
int fail = 0;
@ -1593,12 +1607,9 @@ adastart(struct cam_periph *periph, union ccb *start_ccb)
cam_fill_ataio(ataio,
ada_retry_count,
adadone,
(bp->bio_cmd == BIO_READ ? CAM_DIR_IN :
CAM_DIR_OUT) | ((bp->bio_flags & BIO_UNMAPPED)
!= 0 ? CAM_DATA_BIO : 0),
rw_op,
tag_code,
((bp->bio_flags & BIO_UNMAPPED) != 0) ? (void *)bp :
bp->bio_data,
data_ptr,
bp->bio_bcount,
ada_default_timeout*1000);

sys/cam/cam_ccb.h

@ -109,6 +109,12 @@ typedef enum {
CAM_UNLOCKED = 0x80000000 /* Call callback without lock. */
} ccb_flags;
typedef enum {
CAM_USER_DATA_ADDR = 0x00000001,/* Userspace data pointers */
CAM_SG_FORMAT_IOVEC = 0x00000002,/* iovec instead of busdma S/G*/
CAM_UNMAPPED_BUF = 0x00000004 /* use unmapped I/O */
} ccb_xflags;
/* XPT Opcodes for xpt_action */
typedef enum {
/* Function code flags are bits greater than 0xff */

sys/cam/cam_xpt.c

@ -3333,7 +3333,8 @@ xpt_merge_ccb(union ccb *master_ccb, union ccb *slave_ccb)
}
void
xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
xpt_setup_ccb_flags(struct ccb_hdr *ccb_h, struct cam_path *path,
u_int32_t priority, u_int32_t flags)
{
CAM_DEBUG(path, CAM_DEBUG_TRACE, ("xpt_setup_ccb\n"));
@ -3351,10 +3352,16 @@ xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
ccb_h->target_lun = CAM_TARGET_WILDCARD;
}
ccb_h->pinfo.index = CAM_UNQUEUED_INDEX;
ccb_h->flags = 0;
ccb_h->flags = flags;
ccb_h->xflags = 0;
}
void
xpt_setup_ccb(struct ccb_hdr *ccb_h, struct cam_path *path, u_int32_t priority)
{
xpt_setup_ccb_flags(ccb_h, path, priority, /*flags*/ 0);
}
/* Path manipulation functions */
cam_status
xpt_create_path(struct cam_path **new_path_ptr, struct cam_periph *perph,

sys/cam/cam_xpt.h

@ -70,6 +70,10 @@ void xpt_action_default(union ccb *new_ccb);
union ccb *xpt_alloc_ccb(void);
union ccb *xpt_alloc_ccb_nowait(void);
void xpt_free_ccb(union ccb *free_ccb);
void xpt_setup_ccb_flags(struct ccb_hdr *ccb_h,
struct cam_path *path,
u_int32_t priority,
u_int32_t flags);
void xpt_setup_ccb(struct ccb_hdr *ccb_h,
struct cam_path *path,
u_int32_t priority);

sys/cam/scsi/scsi_da.c

@ -2328,29 +2328,40 @@ dastart(struct cam_periph *periph, union ccb *start_ccb)
switch (bp->bio_cmd) {
case BIO_WRITE:
softc->flags |= DA_FLAG_DIRTY;
/* FALLTHROUGH */
case BIO_READ:
{
void *data_ptr;
int rw_op;
if (bp->bio_cmd == BIO_WRITE) {
softc->flags |= DA_FLAG_DIRTY;
rw_op = SCSI_RW_WRITE;
} else {
rw_op = SCSI_RW_READ;
}
data_ptr = bp->bio_data;
if ((bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0) {
rw_op |= SCSI_RW_BIO;
data_ptr = bp;
}
scsi_read_write(&start_ccb->csio,
/*retries*/da_retry_count,
/*cbfcnp*/dadone,
/*tag_action*/tag_code,
/*read_op*/(bp->bio_cmd == BIO_READ ?
SCSI_RW_READ : SCSI_RW_WRITE) |
((bp->bio_flags & BIO_UNMAPPED) != 0 ?
SCSI_RW_BIO : 0),
rw_op,
/*byte2*/0,
softc->minimum_cmd_size,
/*lba*/bp->bio_pblkno,
/*block_count*/bp->bio_bcount /
softc->params.secsize,
/*data_ptr*/ (bp->bio_flags &
BIO_UNMAPPED) != 0 ? (void *)bp :
bp->bio_data,
data_ptr,
/*dxfer_len*/ bp->bio_bcount,
/*sense_len*/SSD_FULL_SIZE,
da_default_timeout * 1000);
break;
}
case BIO_FLUSH:
/*
* BIO_FLUSH doesn't currently communicate

sys/cam/scsi/scsi_pass.c (file diff suppressed because it is too large)

sys/cam/scsi/scsi_pass.h

@ -39,4 +39,12 @@
#define CAMIOCOMMAND _IOWR(CAM_VERSION, 2, union ccb)
#define CAMGETPASSTHRU _IOWR(CAM_VERSION, 3, union ccb)
/*
* These two ioctls take a union ccb *, but that is not explicitly declared
* to avoid having the ioctl handling code malloc and free their own copy
* of the CCB or the CCB pointer.
*/
#define CAMIOQUEUE _IO(CAM_VERSION, 4)
#define CAMIOGET _IO(CAM_VERSION, 5)
#endif

sys/dev/md/md.c

@ -99,6 +99,8 @@
#include <vm/swap_pager.h>
#include <vm/uma.h>
#include <machine/bus.h>
#define MD_MODVER 1
#define MD_SHUTDOWN 0x10000 /* Tell worker thread to terminate. */
@ -446,7 +448,7 @@ g_md_start(struct bio *bp)
#define MD_MALLOC_MOVE_CMP 5
static int
md_malloc_move(vm_page_t **mp, int *ma_offs, unsigned sectorsize,
md_malloc_move_ma(vm_page_t **mp, int *ma_offs, unsigned sectorsize,
void *ptr, u_char fill, int op)
{
struct sf_buf *sf;
@ -508,7 +510,7 @@ md_malloc_move(vm_page_t **mp, int *ma_offs, unsigned sectorsize,
}
break;
default:
KASSERT(0, ("md_malloc_move unknown op %d\n", op));
KASSERT(0, ("md_malloc_move_ma unknown op %d\n", op));
break;
}
if (error != 0)
@ -530,11 +532,69 @@ md_malloc_move(vm_page_t **mp, int *ma_offs, unsigned sectorsize,
return (error);
}
static int
md_malloc_move_vlist(bus_dma_segment_t **pvlist, int *pma_offs,
unsigned len, void *ptr, u_char fill, int op)
{
bus_dma_segment_t *vlist;
uint8_t *p, *end, first;
off_t *uc;
int ma_offs, seg_len;
vlist = *pvlist;
ma_offs = *pma_offs;
uc = ptr;
for (; len != 0; len -= seg_len) {
seg_len = imin(vlist->ds_len - ma_offs, len);
p = (uint8_t *)(uintptr_t)vlist->ds_addr + ma_offs;
switch (op) {
case MD_MALLOC_MOVE_ZERO:
bzero(p, seg_len);
break;
case MD_MALLOC_MOVE_FILL:
memset(p, fill, seg_len);
break;
case MD_MALLOC_MOVE_READ:
bcopy(ptr, p, seg_len);
cpu_flush_dcache(p, seg_len);
break;
case MD_MALLOC_MOVE_WRITE:
bcopy(p, ptr, seg_len);
break;
case MD_MALLOC_MOVE_CMP:
end = p + seg_len;
first = *uc = *p;
/* Confirm all following bytes match the first */
while (++p < end) {
if (*p != first)
return (EDOOFUS);
}
break;
default:
KASSERT(0, ("md_malloc_move_vlist unknown op %d\n", op));
break;
}
ma_offs += seg_len;
if (ma_offs == vlist->ds_len) {
ma_offs = 0;
vlist++;
}
ptr = (uint8_t *)ptr + seg_len;
}
*pvlist = vlist;
*pma_offs = ma_offs;
return (0);
}
static int
mdstart_malloc(struct md_s *sc, struct bio *bp)
{
u_char *dst;
vm_page_t *m;
bus_dma_segment_t *vlist;
int i, error, error1, ma_offs, notmapped;
off_t secno, nsec, uc;
uintptr_t sp, osp;
@ -549,10 +609,16 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
}
notmapped = (bp->bio_flags & BIO_UNMAPPED) != 0;
vlist = (bp->bio_flags & BIO_VLIST) != 0 ?
(bus_dma_segment_t *)bp->bio_data : NULL;
if (notmapped) {
m = bp->bio_ma;
ma_offs = bp->bio_ma_offset;
dst = NULL;
KASSERT(vlist == NULL, ("vlists cannot be unmapped"));
} else if (vlist != NULL) {
ma_offs = bp->bio_ma_offset;
dst = NULL;
} else {
dst = bp->bio_data;
}
@ -568,23 +634,36 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
} else if (bp->bio_cmd == BIO_READ) {
if (osp == 0) {
if (notmapped) {
error = md_malloc_move(&m, &ma_offs,
error = md_malloc_move_ma(&m, &ma_offs,
sc->sectorsize, NULL, 0,
MD_MALLOC_MOVE_ZERO);
} else if (vlist != NULL) {
error = md_malloc_move_vlist(&vlist,
&ma_offs, sc->sectorsize, NULL, 0,
MD_MALLOC_MOVE_ZERO);
} else
bzero(dst, sc->sectorsize);
} else if (osp <= 255) {
if (notmapped) {
error = md_malloc_move(&m, &ma_offs,
error = md_malloc_move_ma(&m, &ma_offs,
sc->sectorsize, NULL, osp,
MD_MALLOC_MOVE_FILL);
} else if (vlist != NULL) {
error = md_malloc_move_vlist(&vlist,
&ma_offs, sc->sectorsize, NULL, osp,
MD_MALLOC_MOVE_FILL);
} else
memset(dst, osp, sc->sectorsize);
} else {
if (notmapped) {
error = md_malloc_move(&m, &ma_offs,
error = md_malloc_move_ma(&m, &ma_offs,
sc->sectorsize, (void *)osp, 0,
MD_MALLOC_MOVE_READ);
} else if (vlist != NULL) {
error = md_malloc_move_vlist(&vlist,
&ma_offs, sc->sectorsize,
(void *)osp, 0,
MD_MALLOC_MOVE_READ);
} else {
bcopy((void *)osp, dst, sc->sectorsize);
cpu_flush_dcache(dst, sc->sectorsize);
@ -594,10 +673,15 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
} else if (bp->bio_cmd == BIO_WRITE) {
if (sc->flags & MD_COMPRESS) {
if (notmapped) {
error1 = md_malloc_move(&m, &ma_offs,
error1 = md_malloc_move_ma(&m, &ma_offs,
sc->sectorsize, &uc, 0,
MD_MALLOC_MOVE_CMP);
i = error1 == 0 ? sc->sectorsize : 0;
} else if (vlist != NULL) {
error1 = md_malloc_move_vlist(&vlist,
&ma_offs, sc->sectorsize, &uc, 0,
MD_MALLOC_MOVE_CMP);
i = error1 == 0 ? sc->sectorsize : 0;
} else {
uc = dst[0];
for (i = 1; i < sc->sectorsize; i++) {
@ -622,10 +706,15 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
break;
}
if (notmapped) {
error = md_malloc_move(&m,
error = md_malloc_move_ma(&m,
&ma_offs, sc->sectorsize,
(void *)sp, 0,
MD_MALLOC_MOVE_WRITE);
} else if (vlist != NULL) {
error = md_malloc_move_vlist(
&vlist, &ma_offs,
sc->sectorsize, (void *)sp,
0, MD_MALLOC_MOVE_WRITE);
} else {
bcopy(dst, (void *)sp,
sc->sectorsize);
@ -633,10 +722,15 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
error = s_write(sc->indir, secno, sp);
} else {
if (notmapped) {
error = md_malloc_move(&m,
error = md_malloc_move_ma(&m,
&ma_offs, sc->sectorsize,
(void *)osp, 0,
MD_MALLOC_MOVE_WRITE);
} else if (vlist != NULL) {
error = md_malloc_move_vlist(
&vlist, &ma_offs,
sc->sectorsize, (void *)osp,
0, MD_MALLOC_MOVE_WRITE);
} else {
bcopy(dst, (void *)osp,
sc->sectorsize);
@ -652,26 +746,78 @@ mdstart_malloc(struct md_s *sc, struct bio *bp)
if (error != 0)
break;
secno++;
if (!notmapped)
if (!notmapped && vlist == NULL)
dst += sc->sectorsize;
}
bp->bio_resid = 0;
return (error);
}
static void
mdcopyto_vlist(void *src, bus_dma_segment_t *vlist, off_t offset, off_t len)
{
off_t seg_len;
while (offset >= vlist->ds_len) {
offset -= vlist->ds_len;
vlist++;
}
while (len != 0) {
seg_len = omin(len, vlist->ds_len - offset);
bcopy(src, (void *)(uintptr_t)(vlist->ds_addr + offset),
seg_len);
offset = 0;
src = (uint8_t *)src + seg_len;
len -= seg_len;
vlist++;
}
}
static void
mdcopyfrom_vlist(bus_dma_segment_t *vlist, off_t offset, void *dst, off_t len)
{
off_t seg_len;
while (offset >= vlist->ds_len) {
offset -= vlist->ds_len;
vlist++;
}
while (len != 0) {
seg_len = omin(len, vlist->ds_len - offset);
bcopy((void *)(uintptr_t)(vlist->ds_addr + offset), dst,
seg_len);
offset = 0;
dst = (uint8_t *)dst + seg_len;
len -= seg_len;
vlist++;
}
}
static int
mdstart_preload(struct md_s *sc, struct bio *bp)
{
uint8_t *p;
p = sc->pl_ptr + bp->bio_offset;
switch (bp->bio_cmd) {
case BIO_READ:
bcopy(sc->pl_ptr + bp->bio_offset, bp->bio_data,
bp->bio_length);
if ((bp->bio_flags & BIO_VLIST) != 0) {
mdcopyto_vlist(p, (bus_dma_segment_t *)bp->bio_data,
bp->bio_ma_offset, bp->bio_length);
} else {
bcopy(p, bp->bio_data, bp->bio_length);
}
cpu_flush_dcache(bp->bio_data, bp->bio_length);
break;
case BIO_WRITE:
bcopy(bp->bio_data, sc->pl_ptr + bp->bio_offset,
bp->bio_length);
if ((bp->bio_flags & BIO_VLIST) != 0) {
mdcopyfrom_vlist((bus_dma_segment_t *)bp->bio_data,
bp->bio_ma_offset, p, bp->bio_length);
} else {
bcopy(bp->bio_data, p, bp->bio_length);
}
break;
}
bp->bio_resid = 0;
@ -684,16 +830,23 @@ mdstart_vnode(struct md_s *sc, struct bio *bp)
int error;
struct uio auio;
struct iovec aiov;
struct iovec *piov;
struct mount *mp;
struct vnode *vp;
struct buf *pb;
bus_dma_segment_t *vlist;
struct thread *td;
off_t end, zerosize;
off_t len, zerosize;
int ma_offs;
switch (bp->bio_cmd) {
case BIO_READ:
auio.uio_rw = UIO_READ;
break;
case BIO_WRITE:
case BIO_DELETE:
auio.uio_rw = UIO_WRITE;
break;
case BIO_FLUSH:
break;
default:
@ -702,6 +855,9 @@ mdstart_vnode(struct md_s *sc, struct bio *bp)
td = curthread;
vp = sc->vnode;
pb = NULL;
piov = NULL;
ma_offs = bp->bio_ma_offset;
/*
* VNODE I/O
@ -720,73 +876,66 @@ mdstart_vnode(struct md_s *sc, struct bio *bp)
return (error);
}
bzero(&auio, sizeof(auio));
auio.uio_offset = (vm_ooffset_t)bp->bio_offset;
auio.uio_resid = bp->bio_length;
auio.uio_segflg = UIO_SYSSPACE;
auio.uio_td = td;
/*
* Special case for BIO_DELETE. On the surface, this is very
* similar to BIO_WRITE, except that we write from our own
* fixed-length buffer, so we have to loop. The net result is
* that the two cases end up having very little in common.
*/
if (bp->bio_cmd == BIO_DELETE) {
/*
* Emulate BIO_DELETE by writing zeros.
*/
zerosize = ZERO_REGION_SIZE -
(ZERO_REGION_SIZE % sc->sectorsize);
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
auio.uio_offset = (vm_ooffset_t)bp->bio_offset;
auio.uio_segflg = UIO_SYSSPACE;
auio.uio_rw = UIO_WRITE;
auio.uio_td = td;
end = bp->bio_offset + bp->bio_length;
(void) vn_start_write(vp, &mp, V_WAIT);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
error = 0;
while (auio.uio_offset < end) {
aiov.iov_base = __DECONST(void *, zero_region);
aiov.iov_len = end - auio.uio_offset;
if (aiov.iov_len > zerosize)
aiov.iov_len = zerosize;
auio.uio_resid = aiov.iov_len;
error = VOP_WRITE(vp, &auio,
sc->flags & MD_ASYNC ? 0 : IO_SYNC, sc->cred);
if (error != 0)
break;
auio.uio_iovcnt = howmany(bp->bio_length, zerosize);
piov = malloc(sizeof(*piov) * auio.uio_iovcnt, M_MD, M_WAITOK);
auio.uio_iov = piov;
len = bp->bio_length;
while (len > 0) {
piov->iov_base = __DECONST(void *, zero_region);
piov->iov_len = len;
if (len > zerosize)
piov->iov_len = zerosize;
len -= piov->iov_len;
piov++;
}
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
bp->bio_resid = end - auio.uio_offset;
return (error);
}
if ((bp->bio_flags & BIO_UNMAPPED) == 0) {
pb = NULL;
aiov.iov_base = bp->bio_data;
} else {
KASSERT(bp->bio_length <= MAXPHYS, ("bio_length %jd",
(uintmax_t)bp->bio_length));
piov = auio.uio_iov;
} else if ((bp->bio_flags & BIO_VLIST) != 0) {
piov = malloc(sizeof(*piov) * bp->bio_ma_n, M_MD, M_WAITOK);
auio.uio_iov = piov;
vlist = (bus_dma_segment_t *)bp->bio_data;
len = bp->bio_length;
while (len > 0) {
piov->iov_base = (void *)(uintptr_t)(vlist->ds_addr +
ma_offs);
piov->iov_len = vlist->ds_len - ma_offs;
if (piov->iov_len > len)
piov->iov_len = len;
len -= piov->iov_len;
ma_offs = 0;
vlist++;
piov++;
}
auio.uio_iovcnt = piov - auio.uio_iov;
piov = auio.uio_iov;
} else if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
pb = getpbuf(&md_vnode_pbuf_freecnt);
pmap_qenter((vm_offset_t)pb->b_data, bp->bio_ma, bp->bio_ma_n);
aiov.iov_base = (void *)((vm_offset_t)pb->b_data +
bp->bio_ma_offset);
aiov.iov_base = (void *)((vm_offset_t)pb->b_data + ma_offs);
aiov.iov_len = bp->bio_length;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
} else {
aiov.iov_base = bp->bio_data;
aiov.iov_len = bp->bio_length;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
}
aiov.iov_len = bp->bio_length;
auio.uio_iov = &aiov;
auio.uio_iovcnt = 1;
auio.uio_offset = (vm_ooffset_t)bp->bio_offset;
auio.uio_segflg = UIO_SYSSPACE;
if (bp->bio_cmd == BIO_READ)
auio.uio_rw = UIO_READ;
else if (bp->bio_cmd == BIO_WRITE)
auio.uio_rw = UIO_WRITE;
else
panic("wrong BIO_OP in mdstart_vnode");
auio.uio_resid = bp->bio_length;
auio.uio_td = td;
/*
* When reading set IO_DIRECT to try to avoid double-caching
* the data. When writing IO_DIRECT is not optimal.
*/
if (bp->bio_cmd == BIO_READ) {
if (auio.uio_rw == UIO_READ) {
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
error = VOP_READ(vp, &auio, IO_DIRECT, sc->cred);
VOP_UNLOCK(vp, 0);
@ -798,10 +947,15 @@ mdstart_vnode(struct md_s *sc, struct bio *bp)
VOP_UNLOCK(vp, 0);
vn_finished_write(mp);
}
if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
if (pb) {
pmap_qremove((vm_offset_t)pb->b_data, bp->bio_ma_n);
relpbuf(pb, &md_vnode_pbuf_freecnt);
}
if (piov != NULL)
free(piov, M_MD);
bp->bio_resid = auio.uio_resid;
return (error);
}
@ -812,6 +966,7 @@ mdstart_swap(struct md_s *sc, struct bio *bp)
vm_page_t m;
u_char *p;
vm_pindex_t i, lastp;
bus_dma_segment_t *vlist;
int rv, ma_offs, offs, len, lastend;
switch (bp->bio_cmd) {
@ -824,7 +979,10 @@ mdstart_swap(struct md_s *sc, struct bio *bp)
}
p = bp->bio_data;
ma_offs = (bp->bio_flags & BIO_UNMAPPED) == 0 ? 0 : bp->bio_ma_offset;
ma_offs = (bp->bio_flags & (BIO_UNMAPPED|BIO_VLIST)) != 0 ?
bp->bio_ma_offset : 0;
vlist = (bp->bio_flags & BIO_VLIST) != 0 ?
(bus_dma_segment_t *)bp->bio_data : NULL;
/*
* offs is the offset at which to start operating on the
@ -864,6 +1022,10 @@ mdstart_swap(struct md_s *sc, struct bio *bp)
if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
pmap_copy_pages(&m, offs, bp->bio_ma,
ma_offs, len);
} else if ((bp->bio_flags & BIO_VLIST) != 0) {
physcopyout_vlist(VM_PAGE_TO_PHYS(m) + offs,
vlist, ma_offs, len);
cpu_flush_dcache(p, len);
} else {
physcopyout(VM_PAGE_TO_PHYS(m) + offs, p, len);
cpu_flush_dcache(p, len);
@ -880,6 +1042,9 @@ mdstart_swap(struct md_s *sc, struct bio *bp)
if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
pmap_copy_pages(bp->bio_ma, ma_offs, &m,
offs, len);
} else if ((bp->bio_flags & BIO_VLIST) != 0) {
physcopyin_vlist(vlist, ma_offs,
VM_PAGE_TO_PHYS(m) + offs, len);
} else {
physcopyin(p, VM_PAGE_TO_PHYS(m) + offs, len);
}

sys/geom/geom_disk.c

@ -58,6 +58,8 @@ __FBSDID("$FreeBSD$");
#include <dev/led/led.h>
#include <machine/bus.h>
struct g_disk_softc {
struct mtx done_mtx;
struct disk *dp;
@ -251,6 +253,138 @@ g_disk_ioctl(struct g_provider *pp, u_long cmd, void * data, int fflag, struct t
return (error);
}
static int
g_disk_maxsegs(struct disk *dp)
{
return ((dp->d_maxsize / PAGE_SIZE) + 1);
}
static void
g_disk_advance(struct disk *dp, struct bio *bp, off_t off)
{
bp->bio_offset += off;
bp->bio_length -= off;
if ((bp->bio_flags & BIO_VLIST) != 0) {
bus_dma_segment_t *seg, *end;
seg = (bus_dma_segment_t *)bp->bio_data;
end = (bus_dma_segment_t *)bp->bio_data + bp->bio_ma_n;
off += bp->bio_ma_offset;
while (off >= seg->ds_len) {
KASSERT((seg != end),
("vlist request runs off the end"));
off -= seg->ds_len;
seg++;
}
bp->bio_ma_offset = off;
bp->bio_ma_n = end - seg;
bp->bio_data = (void *)seg;
} else if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
bp->bio_ma += off / PAGE_SIZE;
bp->bio_ma_offset += off;
bp->bio_ma_offset %= PAGE_SIZE;
bp->bio_ma_n -= off / PAGE_SIZE;
} else {
bp->bio_data += off;
}
}
static void
g_disk_seg_limit(bus_dma_segment_t *seg, off_t *poffset,
off_t *plength, int *ppages)
{
uintptr_t seg_page_base;
uintptr_t seg_page_end;
off_t offset;
off_t length;
int seg_pages;
offset = *poffset;
length = *plength;
if (length > seg->ds_len - offset)
length = seg->ds_len - offset;
seg_page_base = trunc_page(seg->ds_addr + offset);
seg_page_end = round_page(seg->ds_addr + offset + length);
seg_pages = (seg_page_end - seg_page_base) >> PAGE_SHIFT;
if (seg_pages > *ppages) {
seg_pages = *ppages;
length = (seg_page_base + (seg_pages << PAGE_SHIFT)) -
(seg->ds_addr + offset);
}
*poffset = 0;
*plength -= length;
*ppages -= seg_pages;
}
static off_t
g_disk_vlist_limit(struct disk *dp, struct bio *bp, bus_dma_segment_t **pendseg)
{
bus_dma_segment_t *seg, *end;
off_t residual;
off_t offset;
int pages;
seg = (bus_dma_segment_t *)bp->bio_data;
end = (bus_dma_segment_t *)bp->bio_data + bp->bio_ma_n;
residual = bp->bio_length;
offset = bp->bio_ma_offset;
pages = g_disk_maxsegs(dp);
while (residual != 0 && pages != 0) {
KASSERT((seg != end),
("vlist limit runs off the end"));
g_disk_seg_limit(seg, &offset, &residual, &pages);
seg++;
}
if (pendseg != NULL)
*pendseg = seg;
return (residual);
}
static bool
g_disk_limit(struct disk *dp, struct bio *bp)
{
bool limited = false;
off_t d_maxsize;
d_maxsize = (bp->bio_cmd == BIO_DELETE) ?
dp->d_delmaxsize : dp->d_maxsize;
/*
* XXX: If we have a stripesize we should really use it here.
* Care should be taken in the delete case if this is done
* as deletes can be very sensitive to size given how they
* are processed.
*/
if (bp->bio_length > d_maxsize) {
bp->bio_length = d_maxsize;
limited = true;
}
if ((bp->bio_flags & BIO_VLIST) != 0) {
bus_dma_segment_t *firstseg, *endseg;
off_t residual;
firstseg = (bus_dma_segment_t*)bp->bio_data;
residual = g_disk_vlist_limit(dp, bp, &endseg);
if (residual != 0) {
bp->bio_ma_n = endseg - firstseg;
bp->bio_length -= residual;
limited = true;
}
} else if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
bp->bio_ma_n =
howmany(bp->bio_ma_offset + bp->bio_length, PAGE_SIZE);
}
return (limited);
}
static void
g_disk_start(struct bio *bp)
{
@ -275,6 +409,9 @@ g_disk_start(struct bio *bp)
/* fall-through */
case BIO_READ:
case BIO_WRITE:
KASSERT((dp->d_flags & DISKFLAG_UNMAPPED_BIO) != 0 ||
(bp->bio_flags & BIO_UNMAPPED) == 0,
("unmapped bio not supported by disk %s", dp->d_name));
off = 0;
bp3 = NULL;
bp2 = g_clone_bio(bp);
@ -282,39 +419,10 @@ g_disk_start(struct bio *bp)
error = ENOMEM;
break;
}
do {
off_t d_maxsize;
for (;;) {
if (g_disk_limit(dp, bp2)) {
off += bp2->bio_length;
d_maxsize = (bp->bio_cmd == BIO_DELETE) ?
dp->d_delmaxsize : dp->d_maxsize;
bp2->bio_offset += off;
bp2->bio_length -= off;
if ((bp->bio_flags & BIO_UNMAPPED) == 0) {
bp2->bio_data += off;
} else {
KASSERT((dp->d_flags & DISKFLAG_UNMAPPED_BIO)
!= 0,
("unmapped bio not supported by disk %s",
dp->d_name));
bp2->bio_ma += off / PAGE_SIZE;
bp2->bio_ma_offset += off;
bp2->bio_ma_offset %= PAGE_SIZE;
bp2->bio_ma_n -= off / PAGE_SIZE;
}
if (bp2->bio_length > d_maxsize) {
/*
* XXX: If we have a stripesize we should really
* use it here. Care should be taken in the delete
* case if this is done as deletes can be very
* sensitive to size given how they are processed.
*/
bp2->bio_length = d_maxsize;
if ((bp->bio_flags & BIO_UNMAPPED) != 0) {
bp2->bio_ma_n = howmany(
bp2->bio_ma_offset +
bp2->bio_length, PAGE_SIZE);
}
off += d_maxsize;
/*
* To avoid a race, we need to grab the next bio
* before we schedule this one. See "notes".
@ -331,9 +439,14 @@ g_disk_start(struct bio *bp)
devstat_start_transaction_bio(dp->d_devstat, bp2);
mtx_unlock(&sc->start_mtx);
dp->d_strategy(bp2);
if (bp3 == NULL)
break;
bp2 = bp3;
bp3 = NULL;
} while (bp2 != NULL);
g_disk_advance(dp, bp2, off);
}
break;
case BIO_GETATTR:
/* Give the driver a chance to override */

sys/geom/geom_io.c

@ -205,11 +205,12 @@ g_clone_bio(struct bio *bp)
/*
* BIO_ORDERED flag may be used by disk drivers to enforce
* ordering restrictions, so this flag needs to be cloned.
* BIO_UNMAPPED should be inherited, to properly indicate
* which way the buffer is passed.
* BIO_UNMAPPED and BIO_VLIST should be inherited, to properly
* indicate which way the buffer is passed.
* Other bio flags are not suitable for cloning.
*/
bp2->bio_flags = bp->bio_flags & (BIO_ORDERED | BIO_UNMAPPED);
bp2->bio_flags = bp->bio_flags &
(BIO_ORDERED | BIO_UNMAPPED | BIO_VLIST);
bp2->bio_length = bp->bio_length;
bp2->bio_offset = bp->bio_offset;
bp2->bio_data = bp->bio_data;
@ -240,7 +241,7 @@ g_duplicate_bio(struct bio *bp)
struct bio *bp2;
bp2 = uma_zalloc(biozone, M_WAITOK | M_ZERO);
bp2->bio_flags = bp->bio_flags & BIO_UNMAPPED;
bp2->bio_flags = bp->bio_flags & (BIO_UNMAPPED | BIO_VLIST);
bp2->bio_parent = bp;
bp2->bio_cmd = bp->bio_cmd;
bp2->bio_length = bp->bio_length;

sys/kern/subr_bus_dma.c

@ -54,19 +54,32 @@ __FBSDID("$FreeBSD$");
#include <machine/bus.h>
/*
* Load a list of virtual addresses.
* Load up data starting at offset within a region specified by a
* list of virtual address ranges until either length or the region
* are exhausted.
*/
static int
_bus_dmamap_load_vlist(bus_dma_tag_t dmat, bus_dmamap_t map,
bus_dma_segment_t *list, int sglist_cnt, struct pmap *pmap, int *nsegs,
int flags)
int flags, size_t offset, size_t length)
{
int error;
error = 0;
for (; sglist_cnt > 0; sglist_cnt--, list++) {
error = _bus_dmamap_load_buffer(dmat, map,
(void *)(uintptr_t)list->ds_addr, list->ds_len, pmap,
for (; sglist_cnt > 0 && length != 0; sglist_cnt--, list++) {
char *addr;
size_t ds_len;
KASSERT((offset < list->ds_len),
("Invalid mid-segment offset"));
addr = (char *)(uintptr_t)list->ds_addr + offset;
ds_len = list->ds_len - offset;
offset = 0;
if (ds_len > length)
ds_len = length;
length -= ds_len;
KASSERT((ds_len != 0), ("Segment length is zero"));
error = _bus_dmamap_load_buffer(dmat, map, addr, ds_len, pmap,
flags, NULL, nsegs);
if (error)
break;
@ -117,6 +130,28 @@ _bus_dmamap_load_mbuf_sg(bus_dma_tag_t dmat, bus_dmamap_t map,
return (error);
}
/*
* Load tlen data starting at offset within a region specified by a list of
* physical pages.
*/
static int
_bus_dmamap_load_pages(bus_dma_tag_t dmat, bus_dmamap_t map,
vm_page_t *pages, bus_size_t tlen, int offset, int *nsegs, int flags)
{
vm_paddr_t paddr;
bus_size_t len;
int error, i;
for (i = 0, error = 0; error == 0 && tlen > 0; i++, tlen -= len) {
len = min(PAGE_SIZE - offset, tlen);
paddr = VM_PAGE_TO_PHYS(pages[i]) + offset;
error = _bus_dmamap_load_phys(dmat, map, paddr, len,
flags, NULL, nsegs);
offset = 0;
}
return (error);
}
/*
* Load from block io.
*/
@ -124,16 +159,20 @@ static int
_bus_dmamap_load_bio(bus_dma_tag_t dmat, bus_dmamap_t map, struct bio *bio,
int *nsegs, int flags)
{
int error;
if ((bio->bio_flags & BIO_UNMAPPED) == 0) {
error = _bus_dmamap_load_buffer(dmat, map, bio->bio_data,
bio->bio_bcount, kernel_pmap, flags, NULL, nsegs);
} else {
error = _bus_dmamap_load_ma(dmat, map, bio->bio_ma,
bio->bio_bcount, bio->bio_ma_offset, flags, NULL, nsegs);
if ((bio->bio_flags & BIO_VLIST) != 0) {
bus_dma_segment_t *segs = (bus_dma_segment_t *)bio->bio_data;
return (_bus_dmamap_load_vlist(dmat, map, segs, bio->bio_ma_n,
kernel_pmap, nsegs, flags, bio->bio_ma_offset,
bio->bio_bcount));
}
return (error);
if ((bio->bio_flags & BIO_UNMAPPED) != 0)
return (_bus_dmamap_load_pages(dmat, map, bio->bio_ma,
bio->bio_bcount, bio->bio_ma_offset, nsegs, flags));
return (_bus_dmamap_load_buffer(dmat, map, bio->bio_data,
bio->bio_bcount, kernel_pmap, flags, NULL, nsegs));
}
int
@ -219,7 +258,7 @@ _bus_dmamap_load_ccb(bus_dma_tag_t dmat, bus_dmamap_t map, union ccb *ccb,
case CAM_DATA_SG:
error = _bus_dmamap_load_vlist(dmat, map,
(bus_dma_segment_t *)data_ptr, sglist_cnt, kernel_pmap,
nsegs, flags);
nsegs, flags, 0, dxfer_len);
break;
case CAM_DATA_SG_PADDR:
error = _bus_dmamap_load_plist(dmat, map,
@ -494,7 +533,7 @@ bus_dmamap_load_mem(bus_dma_tag_t dmat, bus_dmamap_t map,
break;
case MEMDESC_VLIST:
error = _bus_dmamap_load_vlist(dmat, map, mem->u.md_list,
mem->md_opaque, kernel_pmap, &nsegs, flags);
mem->md_opaque, kernel_pmap, &nsegs, flags, 0, SIZE_T_MAX);
break;
case MEMDESC_PLIST:
error = _bus_dmamap_load_plist(dmat, map, mem->u.md_list,

sys/kern/subr_uio.c

@ -62,6 +62,8 @@ __FBSDID("$FreeBSD$");
#include <vm/vm_pageout.h>
#include <vm/vm_map.h>
#include <machine/bus.h>
SYSCTL_INT(_kern, KERN_IOV_MAX, iov_max, CTLFLAG_RD, SYSCTL_NULL_INT_PTR, UIO_MAXIOV,
"Maximum number of elements in an I/O vector; sysconf(_SC_IOV_MAX)");
@ -135,6 +137,58 @@ physcopyout(vm_paddr_t src, void *dst, size_t len)
#undef PHYS_PAGE_COUNT
int
physcopyin_vlist(bus_dma_segment_t *src, off_t offset, vm_paddr_t dst,
size_t len)
{
size_t seg_len;
int error;
error = 0;
while (offset >= src->ds_len) {
offset -= src->ds_len;
src++;
}
while (len > 0 && error == 0) {
seg_len = MIN(src->ds_len - offset, len);
error = physcopyin((void *)(uintptr_t)(src->ds_addr + offset),
dst, seg_len);
offset = 0;
src++;
len -= seg_len;
dst += seg_len;
}
return (error);
}
int
physcopyout_vlist(vm_paddr_t src, bus_dma_segment_t *dst, off_t offset,
size_t len)
{
size_t seg_len;
int error;
error = 0;
while (offset >= dst->ds_len) {
offset -= dst->ds_len;
dst++;
}
while (len > 0 && error == 0) {
seg_len = MIN(dst->ds_len - offset, len);
error = physcopyout(src, (void *)(uintptr_t)(dst->ds_addr +
offset), seg_len);
offset = 0;
dst++;
len -= seg_len;
src += seg_len;
}
return (error);
}
int
uiomove(void *cp, int n, struct uio *uio)
{

sys/pc98/include/bus.h

@ -78,7 +78,9 @@
#ifndef _PC98_BUS_H_
#define _PC98_BUS_H_
#ifdef _KERNEL
#include <sys/systm.h>
#endif /* _KERNEL */
#include <machine/_bus.h>
#include <machine/cpufunc.h>
@ -92,6 +94,8 @@
#define BUS_SPACE_UNRESTRICTED (~0)
#ifdef _KERNEL
/*
* address relocation table
*/
@ -639,4 +643,6 @@ bus_space_barrier(bus_space_tag_t tag, bus_space_handle_t bsh,
#define bus_space_copy_region_stream_4(t, h1, o1, h2, o2, c) \
bus_space_copy_region_4((t), (h1), (o1), (h2), (o2), (c))
#endif /* _KERNEL */
#endif /* _PC98_BUS_H_ */

sys/sys/bio.h

@ -61,6 +61,7 @@
#define BIO_ORDERED 0x08
#define BIO_UNMAPPED 0x10
#define BIO_TRANSIENT_MAPPING 0x20
#define BIO_VLIST 0x40
#ifdef _KERNEL
struct disk;

sys/sys/uio.h

@ -85,6 +85,7 @@ struct uio {
struct vm_object;
struct vm_page;
struct bus_dma_segment;
struct uio *cloneuio(struct uio *uiop);
int copyinfrom(const void * __restrict src, void * __restrict dst,
@ -98,6 +99,10 @@ int copyout_map(struct thread *td, vm_offset_t *addr, size_t sz);
int copyout_unmap(struct thread *td, vm_offset_t addr, size_t sz);
int physcopyin(void *src, vm_paddr_t dst, size_t len);
int physcopyout(vm_paddr_t src, void *dst, size_t len);
int physcopyin_vlist(struct bus_dma_segment *src, off_t offset,
vm_paddr_t dst, size_t len);
int physcopyout_vlist(vm_paddr_t src, struct bus_dma_segment *dst,
off_t offset, size_t len);
int uiomove(void *cp, int n, struct uio *uio);
int uiomove_frombuf(void *buf, int buflen, struct uio *uio);
int uiomove_fromphys(struct vm_page *ma[], vm_offset_t offset, int n,

usr.sbin/Makefile

@ -7,6 +7,7 @@ SUBDIR= adduser \
arp \
binmiscctl \
bsdconfig \
camdd \
cdcontrol \
chkgrp \
chown \

usr.sbin/camdd/Makefile (new file, 11 lines)

@ -0,0 +1,11 @@
# $FreeBSD$
PROG= camdd
SRCS= camdd.c
SDIR= ${.CURDIR}/../../sys
DPADD= ${LIBCAM} ${LIBMT} ${LIBSBUF} ${LIBBSDXML} ${LIBUTIL} ${LIBTHR}
LDADD= -lcam -lmt -lsbuf -lbsdxml -lutil -lthr
NO_WTHREAD_SAFETY= 1
MAN= camdd.8
.include <bsd.prog.mk>

usr.sbin/camdd/camdd.8 (new file, 283 lines)

@ -0,0 +1,283 @@
.\"
.\" Copyright (c) 2015 Spectra Logic Corporation
.\" All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\" 1. Redistributions of source code must retain the above copyright
.\" notice, this list of conditions, and the following disclaimer,
.\" without modification.
.\" 2. Redistributions in binary form must reproduce at minimum a disclaimer
.\" substantially similar to the "NO WARRANTY" disclaimer below
.\" ("Disclaimer") and any redistribution must be conditioned upon
.\" including a substantially similar Disclaimer requirement for further
.\" binary redistribution.
.\"
.\" NO WARRANTY
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
.\" A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
.\" HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
.\" DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
.\" OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
.\" HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
.\" STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
.\" IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
.\" POSSIBILITY OF SUCH DAMAGES.
.\"
.\" Authors: Ken Merry (Spectra Logic Corporation)
.\"
.\" $FreeBSD$
.\"
.Dd November 11, 2015
.Dt CAMDD 8
.Os
.Sh NAME
.Nm camdd
.Nd CAM data transfer utility
.Sh SYNOPSIS
.Nm
.Aq Fl i|o Ar pass=pass_dev|file=filename,bs=blocksize,[...]
.Op Fl C Ar retry_count
.Op Fl E
.Op Fl m Ar max_io
.Op Fl t Ar timeout
.Op Fl v
.Op Fl h
.Sh DESCRIPTION
The
.Nm
utility is a sequential data transfer utility that offers standard
.Xr read 2
and
.Xr write 2
operation in addition to a mode that uses the asynchronous
.Xr pass 4
API.
The asynchronous
.Xr pass 4
API allows multiple requests to be queued to a device simultaneously.
.Pp
.Nm
collects performance information and will display it when the transfer
completes, when
.Nm
is terminated or when it receives a SIGINFO signal.
.Pp
The following options are available:
.Bl -tag -width 12n
.It Fl i | o Ar args
Specify the input and output device or file.
Both
.Fl i
and
.Fl o
must be specified.
There are a number of parameters that can be specified.
One of the first two (file or pass) MUST be specified to indicate which I/O
method to use on the device in question.
.Bl -tag -width 9n
.It pass=dev
Specify a
.Xr pass 4
device to operate on.
This requests that
.Nm
access the device in question via the asynchronous
.Xr pass 4
interface.
.Pp
The device name can be a
.Xr pass 4
name and unit number, for instance
.Dq pass0 ,
or a regular peripheral driver name and unit number, for instance
.Dq da5 .
It can also be the path of a
.Xr pass 4
or other disk device, like
.Dq /dev/da5 .
It may also be a bus:target:lun, for example:
.Dq 0:5:0 .
.Pp
Only
.Xr pass 4
devices for
.Tn SCSI
disk-like devices are supported.
.Tn ATA
devices are not currently supported, but support could be added later.
Specifically,
.Tn SCSI
Direct Access (type 0), WORM (type 4), CDROM (type 5), and RBC (Reduced
Block Command, type 14) devices are supported.
Tape drives, medium changers, enclosures etc. are not supported.
.It file=path
Specify a file or device to operate on.
This requests that the file or device in question be accessed using the
standard
.Xr read 2
and
.Xr write 2
system calls.
The file interface does not support queueing multiple commands at a time.
It does support probing disk sector size and capacity information, and tape
blocksize and maximum transfer size information.
The file interface supports standard files, disks, tape drives, special
devices, pipes and standard input and output.
If the file is specified as a
.Dq - ,
standard input or standard output are used.
For tape devices, the specified blocksize will be the size that
.Nm
attempts to use to write to or read from the tape.
When writing to a tape device, the blocksize is treated like a disk sector
size.
So, that means
.Nm
will not write anything smaller than the sector size.
At the end of a transfer, if there isn't sufficient data from the reader
to yield a full block,
.Nm
will add zeros on the end of the data from the reader to make up a full
block.
.It bs=N
Specify the blocksize to use for transfers.
.Nm
will attempt to read or write using the requested blocksize.
.Pp
Note that the blocksize given only applies to either the input or the
output path.
To use the same blocksize for the input and output transfers, you must
specify that blocksize with both the
.Fl i
and
.Fl o
arguments.
.Pp
The blocksize may be specified in bytes, or using any suffix (e.g. k, M, G)
supported by
.Xr expand_number 3 .
.It offset=N
Specify the starting offset for the input or output device or file.
The offset may be specified in bytes, or by using any suffix (e.g. k, M, G)
supported by
.Xr expand_number 3 .
.It depth=N
Specify a desired queue depth for the input or output path.
.Nm
will attempt to keep the requested number of requests of the specified
blocksize queued to the input or output device.
Queue depths greater than 1 are only supported for the asynchronous
.Xr pass 4
output method.
The queue depth is maintained on a best effort basis, and may not be
possible to maintain for especially fast devices.
For writes, maintaining the queue depth also depends on a sufficiently
fast reading device.
.It mcs=N
Specify the minimum command size to use for
.Xr pass 4
devices.
Some devices do not support 6 byte
.Tn SCSI
commands.
The
.Xr da 4
device handles this restriction automatically, but the
.Xr pass 4
device allows the user to specify the
.Tn SCSI
command used.
If a device does not accept 6 byte
.Tn SCSI
READ/WRITE commands (which is the default at lower LBAs), it will generally
accept 10 byte
.Tn SCSI
commands instead.
.It debug=N
Specify the debug level for this device.
There is currently only one debug level setting, so setting this to any
non-zero value will turn on debugging.
The debug facility may be expanded in the future.
.El
.It Fl C Ar count
Specify the retry count for commands sent via the asynchronous
.Xr pass 4
interface.
This does not apply to commands sent via the file interface.
.It Fl E
Enable kernel error recovery for the
.Xr pass 4
driver.
If error recovery is not enabled, unit attention conditions and other
transient failures may cause the transfer to fail.
.It Fl m Ar size
Specify the maximum amount of data to be transferred.
This may be specified in bytes, or by using any suffix (e.g. K, M, G)
supported by
.Xr expand_number 3 .
.It Fl t Ar timeout
Specify the command timeout in seconds to use for commands sent via the
.Xr pass 4
driver.
.It Fl v
Enable verbose reporting of errors.
This is recommended to aid in debugging any
.Tn SCSI
issues that come up.
.It Fl h
Display the
.Nm
usage message.
.El
.Pp
If
.Nm
receives a SIGINFO signal, it will print the current input and output byte
counts, elapsed runtime and average throughput.
If
.Nm
receives a SIGINT signal, it will print the current input and output byte
counts, elapsed runtime and average throughput and then exit.
.Sh EXAMPLES
.Dl camdd -i pass=da8,bs=512k,depth=4 -o pass=da3,bs=512k,depth=4
.Pp
Copy all data from da8 to da3 using a blocksize of 512k for both drives,
and attempt to maintain a queue depth of 4 on both the input and output
devices.
The transfer will stop when the end of either device is reached.
.Pp
.Dl camdd -i file=/dev/zero,bs=1M -o pass=da5,bs=1M,depth=4 -m 100M
.Pp
Read 1MB blocks of zeros from /dev/zero, and write them to da5 with a
desired queue depth of 4.
Stop the transfer after 100MB has been written.
.Pp
.Dl camdd -i pass=da8,bs=1M,depth=3 -o file=disk.img
.Pp
Copy disk da8 using a 1MB blocksize and desired queue depth of 3 to the
file disk.img.
.Pp
.Dl camdd -i file=/etc/rc -o file=-
.Pp
Read the file /etc/rc and write it to standard output.
.Pp
.Dl camdd -i pass=da10,bs=64k,depth=16 -o file=/dev/nsa0,bs=128k
.Pp
Copy 64K blocks from the disk da10 with a queue depth of 16, and write
to the tape drive sa0 with a 128k blocksize.
The copy will stop when either the end of the disk or tape is reached.
.Sh SEE ALSO
.Xr cam 3 ,
.Xr cam 4 ,
.Xr pass 4 ,
.Xr camcontrol 8
.Sh HISTORY
.Nm
first appeared in
.Fx 10.2
.Sh AUTHORS
.An Kenneth Merry Aq Mt ken@FreeBSD.org

usr.sbin/camdd/camdd.c (new file, 3428 lines; file diff suppressed because it is too large)