Improve the Xen para-virtualized device infrastructure of FreeBSD:

 o Add support for backend devices (e.g. blkback).
 o Implement extensions to the Xen para-virtualized block API to allow
   for larger I/Os and a greater number of outstanding I/Os.
 o Import a completely rewritten block back driver with support for fronting
   I/O to both raw devices and files.
 o General cleanup and documentation of the XenBus and XenStore support code.
 o Robustness and performance updates for the block front driver.
 o Fixes to the netfront driver.

Sponsored by: Spectra Logic Corporation

sys/xen/xenbus/init.txt:
	Deleted: This file explains the Linux method for XenBus device
	enumeration and thus does not apply to FreeBSD's NewBus approach.

sys/xen/xenbus/xenbus_probe_backend.c:
	Deleted: Linux version of backend XenBus service routines.  It
	was never ported to FreeBSD.  See xenbusb.c, xenbusb_if.m,
	xenbusb_front.c xenbusb_back.c for details of FreeBSD's XenBus
	support.

sys/xen/xenbus/xenbusvar.h:
sys/xen/xenbus/xenbus_xs.c:
sys/xen/xenbus/xenbus_comms.c:
sys/xen/xenbus/xenbus_comms.h:
sys/xen/xenstore/xenstorevar.h:
sys/xen/xenstore/xenstore.c:
	Split XenStore into its own tree.  XenBus is a software layer built
	on top of XenStore.  The old arrangement and the naming of some
	structures and functions blurred these lines making it difficult to
	discern what services are provided by which layer and at what times
	these services are available (e.g. during system startup and shutdown).

sys/xen/xenbus/xenbus_client.c:
sys/xen/xenbus/xenbus.c:
sys/xen/xenbus/xenbus_probe.c:
sys/xen/xenbus/xenbusb.c:
sys/xen/xenbus/xenbusb.h:
	Split up the XenBus code into methods available for use by client
	drivers (xenbus.c) and code used by the XenBus "bus code" to
	enumerate, attach, detach, and service drivers on this bus.

sys/xen/reboot.c:
sys/dev/xen/control/control.c:
	Add a XenBus front driver for handling shutdown, reboot, suspend, and
	resume events published in the XenStore.  Move all PV suspend/reboot
	support from reboot.c into this driver.

sys/xen/blkif.h:
	New file from Xen vendor with macros and structures used by
	a block back driver to service requests from a VM running a
	different ABI (e.g. amd64 back with i386 front).

sys/conf/files:
	Adjust kernel build spec for new XenBus/XenStore layout and added
	Xen functionality.

sys/dev/xen/balloon/balloon.c:
sys/dev/xen/netfront/netfront.c:
sys/dev/xen/blkfront/blkfront.c:
sys/xen/xenbus/...
sys/xen/xenstore/...
	o Rename XenStore APIs and structures from xenbus_* to xs_*.
	o Adjust to use of M_XENBUS and M_XENSTORE malloc types for allocation
	  of objects returned by these APIs.
	o Adjust for changes in the bus interface for Xen drivers.

sys/xen/xenbus/...
sys/xen/xenstore/...
	Add Doxygen comments for these interfaces and the code that
	implements them.

sys/dev/xen/blkback/blkback.c:
	o Rewrite the Block Back driver to attach properly via newbus,
	  operate correctly in both PV and HVM mode regardless of domain
	  (e.g. can run in a domain other than Dom0), and to deal with the latest
	  metadata available in XenStore for block devices.

	o Allow users to specify a file as a backend to blkback, in addition
	  to character devices.  Use the namei lookup of the backend path
	  to automatically configure, based on file type, the appropriate
	  backend method.

	The current implementation is limited to a single outstanding I/O
	at a time to file-backed storage.

sys/dev/xen/blkback/blkback.c:
sys/xen/interface/io/blkif.h:
sys/xen/blkif.h:
sys/dev/xen/blkfront/blkfront.c:
sys/dev/xen/blkfront/block.h:
	Extend the Xen blkif API: Negotiable request size and number of
	requests.

	This change extends the information recorded in the XenStore
	allowing block front/back devices to negotiate for optimal I/O
	parameters.  This has been achieved without sacrificing backward
	compatibility with drivers that are unaware of these protocol
	enhancements.  The extensions center around the connection protocol
	which now includes these additions:

	o The back-end device publishes its maximum supported values for
	  request I/O size, the number of page segments that can be
	  associated with a request, the maximum number of requests that
	  can be concurrently active, and the maximum number of pages that
	  can be in the shared request ring.  These values are published
	  before the back-end enters the XenbusStateInitWait state.

	o The front-end waits for the back-end to enter either the InitWait
	  or Initialized state.  At this point, the front end limits its
	  own capabilities to the lesser of the values it finds published
	  by the backend, its own maximums, or, should any back-end data
	  be missing in the store, the values supported by the original
	  protocol.  It then initializes its internal data structures,
	  including allocation of the shared ring, publishes its maximum
	  capabilities to the XenStore, and transitions to the Initialized
	  state.

	o The back-end waits for the front-end to enter the Initialized
	  state.  At this point, the back end limits its own capabilities
	  to the lesser of the values it finds published by the frontend,
	  its own maximums, or, should any front-end data be missing in
	  the store, the values supported by the original protocol.  It
	  then initializes its internal data structures, attaches to the
	  shared ring, and transitions to the Connected state.

	o The front-end waits for the back-end to enter the Connected
	  state, transitions itself to the Connected state, and can
	  commence I/O.

	Although an updated front-end driver must be aware of the back-end's
	InitWait state, the back-end has been coded such that it can
	tolerate a front-end that skips this step and transitions directly
	to the Initialized state without waiting for the back-end.

sys/xen/interface/io/blkif.h:
	o Increase BLKIF_MAX_SEGMENTS_PER_REQUEST to 255.  This is
	  the maximum number possible without changing the blkif
	  request header structure (nr_segs is a uint8_t).

	o Add two new constants:
	  BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK, and
	  BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK.  These respectively
	  indicate the number of segments that can fit in the first
	  ring-buffer entry of a request, and for each subsequent
	  (sg element only) ring-buffer entry associated with the
	  "header" ring-buffer entry of the request.

	o Add the blkif_request_segment_t typedef for segment
	  elements.

	o Add the BLKRING_GET_SG_REQUEST() macro which wraps the
	  RING_GET_REQUEST() macro and returns a properly cast
	  pointer to an array of blkif_request_segment_ts.

	o Add the BLKIF_SEGS_TO_BLOCKS() macro which calculates the
	  number of ring entries that will be consumed by a blkif
	  request with the given number of segments.

sys/xen/blkif.h:
	o Update for changes in interface/io/blkif.h macros.

	o Update the BLKIF_MAX_RING_REQUESTS() macro to take the
	  ring size as an argument to allow this calculation on
	  multi-page rings.

	o Add a companion macro to BLKIF_MAX_RING_REQUESTS(),
	  BLKIF_RING_PAGES().  This macro determines the number of
	  ring pages required in order to support a ring with the
	  supplied number of request blocks.

sys/dev/xen/blkback/blkback.c:
sys/dev/xen/blkfront/blkfront.c:
sys/dev/xen/blkfront/block.h:
	o Negotiate with the other-end with the following limits:
	      Request Size:   MAXPHYS
	      Max Segments:   (MAXPHYS/PAGE_SIZE) + 1
	      Max Requests:   256
	      Max Ring Pages: Sufficient to support Max Requests with
	                      Max Segments.

	o Dynamically allocate request pools and segments-per-request.

	o Update ring allocation/attachment code to support a
	  multi-page shared ring.

	o Update routines that access the shared ring to handle
	  multi-block requests.

sys/dev/xen/blkfront/blkfront.c:
	o Track blkfront allocations in a blkfront driver specific
	  malloc pool.

	o Strip out XenStore transaction retry logic in the
	  connection code.  Transactions only need to be used when
	  the update to multiple XenStore nodes must be atomic.
	  That is not the case here.

	o Fully disable blkif_resume() until it can be fixed
	  properly (it didn't work before this change).

	o Destroy bus-dma objects during device instance tear-down.

	o Properly handle backend devices with power-of-2 sector
	  sizes larger than 512 bytes.

sys/dev/xen/blkback/blkback.c:
	Advertise support for and implement the BLKIF_OP_WRITE_BARRIER
	and BLKIF_OP_FLUSH_DISKCACHE blkif opcodes using BIO_FLUSH and
	the BIO_ORDERED attribute of bios.

sys/dev/xen/blkfront/blkfront.c:
sys/dev/xen/blkfront/block.h:
	Fix various bugs in blkfront.

	o gnttab_alloc_grant_references() returns 0 for success and
	  non-zero for failure.  The check for < 0 is a leftover
	  Linuxism.

	o When we negotiate with blkback and have to reduce some of our
	  capabilities, print out the original and reduced capability before
	  changing the local capability, so the user gets the correct
	  information.

	o Fix blkif_restart_queue_callback() formatting.  Make sure we hold
	  the mutex in that function before calling xb_startio().

	o Fix a couple of KASSERT()s.

	o Fix a check in the xb_remove_* macro to be a little more specific.

sys/xen/gnttab.h:
sys/xen/gnttab.c:
	Define GNTTAB_LIST_END publicly as GRANT_REF_INVALID.

sys/dev/xen/netfront/netfront.c:
	Use GRANT_REF_INVALID instead of driver private definitions of the
	same constant.

sys/xen/gnttab.h:
sys/xen/gnttab.c:
	Add the gnttab_end_foreign_access_references() API.

	This API allows a client to batch the release of an array of grant
	references, instead of coding a private for loop.  The implementation
	takes advantage of this batching to reduce lock overhead to one
	acquisition and release per-batch instead of per-freed grant reference.

	While here, reduce the duration the gnttab_list_lock is held during
	gnttab_free_grant_references() operations.  The search to find the
	tail of the incoming free list does not rely on global state and so
	can be performed without holding the lock.

sys/dev/xen/xenpci/evtchn.c:
sys/dev/xen/evtchn/evtchn.c:
sys/xen/xen_intr.h:
	o Implement the bind_interdomain_evtchn_to_irqhandler API for HVM mode.
	  This allows an HVM domain to serve back end devices to other domains.
	  This API is already implemented for PV mode.

	o Synchronize the API between HVM and PV.

sys/dev/xen/xenpci/xenpci.c:
	o Scan the full region of CPUID space in which the Xen VMM interface
	  may be implemented.  On systems using SuSE as a Dom0 where the
	  Viridian API is also exported, the VMM interface is above the region
	  we used to search.

	o Pass through bus_alloc_resource() calls so that XenBus drivers
	  attaching on an HVM system can allocate unused physical address
	  space from the nexus.  The block back driver makes use of this
	  facility.

sys/i386/xen/xen_machdep.c:
	Use the correct type for accessing the statically mapped xenstore
	metadata.

sys/xen/interface/hvm/params.h:
sys/xen/xenstore/xenstore.c:
	Move hvm_get_parameter() to the correct global header file instead
	of leaving it as a private method of the XenStore code.

sys/xen/interface/io/protocols.h:
	Sync with vendor.

sys/xen/interface/io/ring.h:
	Add macro for calculating the number of ring pages needed for an N
	deep ring.

	To avoid duplication within the macros, create and use the new
	__RING_HEADER_SIZE() macro.  This macro calculates the size of the
	ring book keeping struct (producer/consumer indexes, etc.) that
	resides at the head of the ring.

	Add the __RING_PAGES() macro which calculates the number of shared
	ring pages required to support a ring with the given number of
	requests.

	These APIs are used to support the multi-page ring version of the
	Xen block API.

sys/xen/interface/io/xenbus.h:
	Add comments.

sys/xen/xenbus/...
	o Refactor the FreeBSD XenBus support code to allow for both front and
	  backend device attachments.

	o Make use of new config_intrhook capabilities to allow front and back
	  devices to be probed/attached in parallel.

	o Fix bugs in probe/attach state machine that could cause the system to
	  hang when confronted with a failure either in the local domain or in
	  a remote domain to which one of our driver instances is attaching.

	o Publish all required state to the XenStore on device detach and
	  failure.  The majority of the missing functionality was for serving
	  as a back end since the typical "hot-plug" scripts in Dom0 don't
	  handle the case of cleaning up for a "service domain" that is not
	  itself.

	o Add dynamic sysctl nodes exposing the generic ivars of
	  XenBus devices.

	o Add doxygen style comments to the majority of the code.

	o Cleanup types, formatting, etc.

sys/xen/xenbus/xenbusb.c:
	Common code used by both front and back XenBus busses.

sys/xen/xenbus/xenbusb_if.m:
	Method definitions for a XenBus bus.

sys/xen/xenbus/xenbusb_front.c:
sys/xen/xenbus/xenbusb_back.c:
	XenBus bus specialization for front and back devices.

MFC after:	1 month
Author:	Justin T. Gibbs
Date:	2010-10-19 20:53:30 +00:00
Parent:	220666153d
Commit:	ff662b5c98
Notes:	svn2git 2020-12-20 02:59:44 +00:00
	svn path=/head/; revision=214077
40 changed files with 8206 additions and 4109 deletions

@ -3008,19 +3008,20 @@ xen/gnttab.c optional xen | xenhvm
xen/features.c optional xen | xenhvm
xen/evtchn/evtchn.c optional xen
xen/evtchn/evtchn_dev.c optional xen | xenhvm
xen/reboot.c optional xen
xen/xenbus/xenbus_client.c optional xen | xenhvm
xen/xenbus/xenbus_comms.c optional xen | xenhvm
xen/xenbus/xenbus_dev.c optional xen | xenhvm
xen/xenbus/xenbus_if.m optional xen | xenhvm
xen/xenbus/xenbus_probe.c optional xen | xenhvm
#xen/xenbus/xenbus_probe_backend.c optional xen
xen/xenbus/xenbus_xs.c optional xen | xenhvm
xen/xenbus/xenbus.c optional xen | xenhvm
xen/xenbus/xenbusb_if.m optional xen | xenhvm
xen/xenbus/xenbusb.c optional xen | xenhvm
xen/xenbus/xenbusb_front.c optional xen | xenhvm
xen/xenbus/xenbusb_back.c optional xen | xenhvm
xen/xenstore/xenstore.c optional xen | xenhvm
xen/xenstore/xenstore_dev.c optional xen | xenhvm
dev/xen/balloon/balloon.c optional xen | xenhvm
dev/xen/blkfront/blkfront.c optional xen | xenhvm
dev/xen/blkback/blkback.c optional xen | xenhvm
dev/xen/console/console.c optional xen
dev/xen/console/xencons_ring.c optional xen
dev/xen/blkfront/blkfront.c optional xen | xenhvm
dev/xen/control/control.c optional xen | xenhvm
dev/xen/netfront/netfront.c optional xen | xenhvm
dev/xen/xenpci/xenpci.c optional xenpci
dev/xen/xenpci/evtchn.c optional xenpci
dev/xen/xenpci/machine_reboot.c optional xenpci

@ -44,7 +44,7 @@ __FBSDID("$FreeBSD$");
#include <machine/xen/xenfunc.h>
#include <machine/xen/xenvar.h>
#include <xen/hypervisor.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenstore/xenstorevar.h>
#include <vm/vm.h>
#include <vm/vm_page.h>
@ -406,20 +406,20 @@ set_new_target(unsigned long target)
wakeup(balloon_process);
}
static struct xenbus_watch target_watch =
static struct xs_watch target_watch =
{
.node = "memory/target"
};
/* React to a change in the target key */
static void
watch_target(struct xenbus_watch *watch,
watch_target(struct xs_watch *watch,
const char **vec, unsigned int len)
{
unsigned long long new_target;
int err;
err = xenbus_scanf(XBT_NIL, "memory", "target", NULL,
err = xs_scanf(XST_NIL, "memory", "target", NULL,
"%llu", &new_target);
if (err) {
/* This is ok (for domain0 at least) - so just return */
@ -438,7 +438,7 @@ balloon_init_watcher(void *arg)
{
int err;
err = register_xenbus_watch(&target_watch);
err = xs_register_watch(&target_watch);
if (err)
printf("Failed to set balloon watcher\n");

[File diff suppressed because it is too large]

@ -49,8 +49,10 @@ __FBSDID("$FreeBSD$");
#include <machine/vmparam.h>
#include <sys/bus_dma.h>
#include <machine/_inttypes.h>
#include <machine/xen/xen-os.h>
#include <machine/xen/xenfunc.h>
#include <xen/hypervisor.h>
#include <xen/xen_intr.h>
#include <xen/evtchn.h>
@ -68,17 +70,21 @@ __FBSDID("$FreeBSD$");
/* prototypes */
static void xb_free_command(struct xb_command *cm);
static void xb_startio(struct xb_softc *sc);
static void connect(struct xb_softc *);
static void blkfront_connect(struct xb_softc *);
static void blkfront_closing(device_t);
static int blkfront_detach(device_t);
static int talk_to_backend(struct xb_softc *);
static int setup_blkring(struct xb_softc *);
static void blkif_int(void *);
static void blkfront_initialize(struct xb_softc *);
#if 0
static void blkif_recover(struct xb_softc *);
static void blkif_completion(struct xb_command *);
#endif
static int blkif_completion(struct xb_command *);
static void blkif_free(struct xb_softc *, int);
static void blkif_queue_cb(void *, bus_dma_segment_t *, int, int);
MALLOC_DEFINE(M_XENBLOCKFRONT, "xbd", "Xen Block Front driver data");
#define GRANT_INVALID_REF 0
/* Control whether runtime update of vbds is enabled. */
@ -113,11 +119,6 @@ static char * blkif_status_name[] = {
#define DPRINTK(fmt, args...)
#endif
#define MAXIMUM_OUTSTANDING_BLOCK_REQS \
(BLKIF_MAX_SEGMENTS_PER_REQUEST * BLK_RING_SIZE)
#define BLKIF_MAXIO (32 * 1024)
static int blkif_open(struct disk *dp);
static int blkif_close(struct disk *dp);
static int blkif_ioctl(struct disk *dp, u_long cmd, void *addr, int flag, struct thread *td);
@ -202,8 +203,8 @@ blkfront_vdevice_to_unit(int vdevice, int *unit, const char **name)
}
int
xlvbd_add(struct xb_softc *sc, blkif_sector_t capacity,
int vdevice, uint16_t vdisk_info, uint16_t sector_size)
xlvbd_add(struct xb_softc *sc, blkif_sector_t sectors,
int vdevice, uint16_t vdisk_info, unsigned long sector_size)
{
int unit, error = 0;
const char *name;
@ -215,7 +216,6 @@ xlvbd_add(struct xb_softc *sc, blkif_sector_t capacity,
if (strcmp(name, "xbd"))
device_printf(sc->xb_dev, "attaching as %s%d\n", name, unit);
memset(&sc->xb_disk, 0, sizeof(sc->xb_disk));
sc->xb_disk = disk_alloc();
sc->xb_disk->d_unit = sc->xb_unit;
sc->xb_disk->d_open = blkif_open;
@ -227,20 +227,14 @@ xlvbd_add(struct xb_softc *sc, blkif_sector_t capacity,
sc->xb_disk->d_drv1 = sc;
sc->xb_disk->d_sectorsize = sector_size;
sc->xb_disk->d_mediasize = capacity << XBD_SECTOR_SHFT;
sc->xb_disk->d_maxsize = BLKIF_MAXIO;
sc->xb_disk->d_mediasize = sectors * sector_size;
sc->xb_disk->d_maxsize = sc->max_request_size;
sc->xb_disk->d_flags = 0;
disk_create(sc->xb_disk, DISK_VERSION_00);
return error;
}
void
xlvbd_del(struct xb_softc *sc)
{
disk_destroy(sc->xb_disk);
}
/************************ end VBD support *****************/
/*
@ -357,15 +351,16 @@ xb_dump(void *arg, void *virtual, vm_offset_t physical, off_t offset,
return (EBUSY);
}
if (gnttab_alloc_grant_references(
BLKIF_MAX_SEGMENTS_PER_REQUEST, &cm->gref_head) < 0) {
if (gnttab_alloc_grant_references(sc->max_request_segments,
&cm->gref_head) != 0) {
xb_free_command(cm);
mtx_unlock(&sc->xb_io_lock);
device_printf(sc->xb_dev, "no more grant allocs?\n");
return (EBUSY);
}
chunk = length > BLKIF_MAXIO ? BLKIF_MAXIO : length;
chunk = length > sc->max_request_size
? sc->max_request_size : length;
cm->data = virtual;
cm->datalen = chunk;
cm->operation = BLKIF_OP_WRITE;
@ -423,16 +418,18 @@ static int
blkfront_attach(device_t dev)
{
struct xb_softc *sc;
struct xb_command *cm;
const char *name;
int error, vdevice, i, unit;
int error;
int vdevice;
int i;
int unit;
/* FIXME: Use dynamic device id if this is not set. */
error = xenbus_scanf(XBT_NIL, xenbus_get_node(dev),
error = xs_scanf(XST_NIL, xenbus_get_node(dev),
"virtual-device", NULL, "%i", &vdevice);
if (error) {
xenbus_dev_fatal(dev, error, "reading virtual-device");
printf("couldn't find virtual device");
device_printf(dev, "Couldn't determine virtual device.\n");
return (error);
}
@ -447,51 +444,18 @@ blkfront_attach(device_t dev)
xb_initq_ready(sc);
xb_initq_complete(sc);
xb_initq_bio(sc);
/* Allocate parent DMA tag */
if (bus_dma_tag_create( NULL, /* parent */
512, 4096, /* algnmnt, boundary */
BUS_SPACE_MAXADDR, /* lowaddr */
BUS_SPACE_MAXADDR, /* highaddr */
NULL, NULL, /* filter, filterarg */
BLKIF_MAXIO, /* maxsize */
BLKIF_MAX_SEGMENTS_PER_REQUEST, /* nsegments */
PAGE_SIZE, /* maxsegsize */
BUS_DMA_ALLOCNOW, /* flags */
busdma_lock_mutex, /* lockfunc */
&sc->xb_io_lock, /* lockarg */
&sc->xb_io_dmat)) {
device_printf(dev, "Cannot allocate parent DMA tag\n");
return (ENOMEM);
}
#ifdef notyet
if (bus_dma_tag_set(sc->xb_io_dmat, BUS_DMA_SET_MINSEGSZ,
XBD_SECTOR_SIZE)) {
device_printf(dev, "Cannot set sector size\n");
return (EINVAL);
}
#endif
for (i = 0; i < XBF_MAX_RING_PAGES; i++)
sc->ring_ref[i] = GRANT_INVALID_REF;
sc->xb_dev = dev;
sc->vdevice = vdevice;
sc->connected = BLKIF_STATE_DISCONNECTED;
/* work queue needed ? */
for (i = 0; i < BLK_RING_SIZE; i++) {
cm = &sc->shadow[i];
cm->req.id = i;
cm->cm_sc = sc;
if (bus_dmamap_create(sc->xb_io_dmat, 0, &cm->map) != 0)
break;
xb_free_command(cm);
}
/* Front end dir is a number, which is used as the id. */
sc->handle = strtoul(strrchr(xenbus_get_node(dev),'/')+1, NULL, 0);
error = talk_to_backend(sc);
if (error)
return (error);
/* Wait for backend device to publish its protocol capabilities. */
xenbus_set_state(dev, XenbusStateInitialising);
return (0);
}
@ -512,121 +476,265 @@ blkfront_suspend(device_t dev)
static int
blkfront_resume(device_t dev)
{
#if 0
struct xb_softc *sc = device_get_softc(dev);
int err;
DPRINTK("blkfront_resume: %s\n", xenbus_get_node(dev));
/* XXX This can't work!!! */
blkif_free(sc, 1);
err = talk_to_backend(sc);
if (sc->connected == BLKIF_STATE_SUSPENDED && !err)
blkfront_initialize(sc);
if (sc->connected == BLKIF_STATE_SUSPENDED)
blkif_recover(sc);
return (err);
#endif
return (0);
}
/* Common code used when first setting up, and when resuming. */
static int
talk_to_backend(struct xb_softc *sc)
static void
blkfront_initialize(struct xb_softc *sc)
{
device_t dev;
struct xenbus_transaction xbt;
const char *message = NULL;
int err;
const char *otherend_path;
const char *node_path;
int error;
int i;
/* Create shared ring, alloc event channel. */
dev = sc->xb_dev;
err = setup_blkring(sc);
if (err)
goto out;
if (xenbus_get_state(sc->xb_dev) != XenbusStateInitialising)
return;
again:
err = xenbus_transaction_start(&xbt);
if (err) {
xenbus_dev_fatal(dev, err, "starting transaction");
goto destroy_blkring;
/*
* Protocol defaults valid even if negotiation for a
* setting fails.
*/
sc->ring_pages = 1;
sc->max_requests = BLKIF_MAX_RING_REQUESTS(PAGE_SIZE);
sc->max_request_segments = BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK;
sc->max_request_size = sc->max_request_segments * PAGE_SIZE;
sc->max_request_blocks = BLKIF_SEGS_TO_BLOCKS(sc->max_request_segments);
/*
* Protocol negotiation.
*
* \note xs_gather() returns on the first encountered error, so
* we must use independent calls in order to guarantee
* we don't miss information in a sparsely populated back-end
* tree.
*/
otherend_path = xenbus_get_otherend_path(sc->xb_dev);
node_path = xenbus_get_node(sc->xb_dev);
(void)xs_scanf(XST_NIL, otherend_path,
"max-ring-pages", NULL, "%" PRIu32,
&sc->ring_pages);
(void)xs_scanf(XST_NIL, otherend_path,
"max-requests", NULL, "%" PRIu32,
&sc->max_requests);
(void)xs_scanf(XST_NIL, otherend_path,
"max-request-segments", NULL, "%" PRIu32,
&sc->max_request_segments);
(void)xs_scanf(XST_NIL, otherend_path,
"max-request-size", NULL, "%" PRIu32,
&sc->max_request_size);
if (sc->ring_pages > XBF_MAX_RING_PAGES) {
device_printf(sc->xb_dev, "Back-end specified ring-pages of "
"%u limited to front-end limit of %zu.\n",
sc->ring_pages, XBF_MAX_RING_PAGES);
sc->ring_pages = XBF_MAX_RING_PAGES;
}
err = xenbus_printf(xbt, xenbus_get_node(dev),
"ring-ref","%u", sc->ring_ref);
if (err) {
message = "writing ring-ref";
goto abort_transaction;
}
err = xenbus_printf(xbt, xenbus_get_node(dev),
"event-channel", "%u", irq_to_evtchn_port(sc->irq));
if (err) {
message = "writing event-channel";
goto abort_transaction;
}
err = xenbus_printf(xbt, xenbus_get_node(dev),
"protocol", "%s", XEN_IO_PROTO_ABI_NATIVE);
if (err) {
message = "writing protocol";
goto abort_transaction;
if (sc->max_requests > XBF_MAX_REQUESTS) {
device_printf(sc->xb_dev, "Back-end specified max_requests of "
"%u limited to front-end limit of %u.\n",
sc->max_requests, XBF_MAX_REQUESTS);
sc->max_requests = XBF_MAX_REQUESTS;
}
err = xenbus_transaction_end(xbt, 0);
if (err) {
if (err == EAGAIN)
goto again;
xenbus_dev_fatal(dev, err, "completing transaction");
goto destroy_blkring;
if (sc->max_request_segments > XBF_MAX_SEGMENTS_PER_REQUEST) {
device_printf(sc->xb_dev, "Back-end specified "
"max_request_segments of %u limited to "
"front-end limit of %u.\n",
sc->max_request_segments,
XBF_MAX_SEGMENTS_PER_REQUEST);
sc->max_request_segments = XBF_MAX_SEGMENTS_PER_REQUEST;
}
xenbus_set_state(dev, XenbusStateInitialised);
return 0;
abort_transaction:
xenbus_transaction_end(xbt, 1);
if (message)
xenbus_dev_fatal(dev, err, "%s", message);
destroy_blkring:
blkif_free(sc, 0);
out:
return err;
if (sc->max_request_size > XBF_MAX_REQUEST_SIZE) {
device_printf(sc->xb_dev, "Back-end specified "
"max_request_size of %u limited to front-end "
"limit of %u.\n", sc->max_request_size,
XBF_MAX_REQUEST_SIZE);
sc->max_request_size = XBF_MAX_REQUEST_SIZE;
}
sc->max_request_blocks = BLKIF_SEGS_TO_BLOCKS(sc->max_request_segments);
/* Allocate datastructures based on negotiated values. */
error = bus_dma_tag_create(NULL, /* parent */
512, PAGE_SIZE, /* algnmnt, boundary */
BUS_SPACE_MAXADDR, /* lowaddr */
BUS_SPACE_MAXADDR, /* highaddr */
NULL, NULL, /* filter, filterarg */
sc->max_request_size,
sc->max_request_segments,
PAGE_SIZE, /* maxsegsize */
BUS_DMA_ALLOCNOW, /* flags */
busdma_lock_mutex, /* lockfunc */
&sc->xb_io_lock, /* lockarg */
&sc->xb_io_dmat);
if (error != 0) {
xenbus_dev_fatal(sc->xb_dev, error,
"Cannot allocate parent DMA tag\n");
return;
}
/* Per-transaction data allocation. */
sc->shadow = malloc(sizeof(*sc->shadow) * sc->max_requests,
M_XENBLOCKFRONT, M_NOWAIT|M_ZERO);
if (sc->shadow == NULL) {
xenbus_dev_fatal(sc->xb_dev, error,
"Cannot allocate request structures\n");
}
for (i = 0; i < sc->max_requests; i++) {
struct xb_command *cm;
cm = &sc->shadow[i];
cm->sg_refs = malloc(sizeof(grant_ref_t)
* sc->max_request_segments,
M_XENBLOCKFRONT, M_NOWAIT);
if (cm->sg_refs == NULL)
break;
cm->id = i;
cm->cm_sc = sc;
if (bus_dmamap_create(sc->xb_io_dmat, 0, &cm->map) != 0)
break;
xb_free_command(cm);
}
if (setup_blkring(sc) != 0)
return;
error = xs_printf(XST_NIL, node_path,
"ring-pages","%u", sc->ring_pages);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/ring-pages",
node_path);
return;
}
error = xs_printf(XST_NIL, node_path,
"max-requests","%u", sc->max_requests);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/max-requests",
node_path);
return;
}
error = xs_printf(XST_NIL, node_path,
"max-request-segments","%u", sc->max_request_segments);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/max-request-segments",
node_path);
return;
}
error = xs_printf(XST_NIL, node_path,
"max-request-size","%u", sc->max_request_size);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/max-request-size",
node_path);
return;
}
error = xs_printf(XST_NIL, node_path, "event-channel",
"%u", irq_to_evtchn_port(sc->irq));
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/event-channel",
node_path);
return;
}
error = xs_printf(XST_NIL, node_path,
"protocol", "%s", XEN_IO_PROTO_ABI_NATIVE);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"writing %s/protocol",
node_path);
return;
}
xenbus_set_state(sc->xb_dev, XenbusStateInitialised);
}
static int
setup_blkring(struct xb_softc *sc)
{
blkif_sring_t *sring;
uintptr_t sring_page_addr;
int error;
int i;
sc->ring_ref = GRANT_INVALID_REF;
sring = (blkif_sring_t *)malloc(PAGE_SIZE, M_DEVBUF, M_NOWAIT|M_ZERO);
sring = malloc(sc->ring_pages * PAGE_SIZE, M_XENBLOCKFRONT,
M_NOWAIT|M_ZERO);
if (sring == NULL) {
xenbus_dev_fatal(sc->xb_dev, ENOMEM, "allocating shared ring");
return ENOMEM;
return (ENOMEM);
}
SHARED_RING_INIT(sring);
FRONT_RING_INIT(&sc->ring, sring, PAGE_SIZE);
FRONT_RING_INIT(&sc->ring, sring, sc->ring_pages * PAGE_SIZE);
error = xenbus_grant_ring(sc->xb_dev,
(vtomach(sc->ring.sring) >> PAGE_SHIFT), &sc->ring_ref);
if (error) {
free(sring, M_DEVBUF);
sc->ring.sring = NULL;
goto fail;
for (i = 0, sring_page_addr = (uintptr_t)sring;
i < sc->ring_pages;
i++, sring_page_addr += PAGE_SIZE) {
error = xenbus_grant_ring(sc->xb_dev,
(vtomach(sring_page_addr) >> PAGE_SHIFT), &sc->ring_ref[i]);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"granting ring_ref(%d)", i);
return (error);
}
}
error = bind_listening_port_to_irqhandler(xenbus_get_otherend_id(sc->xb_dev),
error = xs_printf(XST_NIL, xenbus_get_node(sc->xb_dev),
"ring-ref","%u", sc->ring_ref[0]);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error, "writing %s/ring-ref",
xenbus_get_node(sc->xb_dev));
return (error);
}
for (i = 1; i < sc->ring_pages; i++) {
char ring_ref_name[] = "ring_refXX";
snprintf(ring_ref_name, sizeof(ring_ref_name), "ring-ref%u", i);
error = xs_printf(XST_NIL, xenbus_get_node(sc->xb_dev),
ring_ref_name, "%u", sc->ring_ref[i]);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error, "writing %s/%s",
xenbus_get_node(sc->xb_dev),
ring_ref_name);
return (error);
}
}
error = bind_listening_port_to_irqhandler(
xenbus_get_otherend_id(sc->xb_dev),
"xbd", (driver_intr_t *)blkif_int, sc,
INTR_TYPE_BIO | INTR_MPSAFE, &sc->irq);
if (error) {
xenbus_dev_fatal(sc->xb_dev, error,
"bind_evtchn_to_irqhandler failed");
goto fail;
return (error);
}
return (0);
fail:
blkif_free(sc, 0);
return (error);
}
/**
* Callback received when the backend's state changes.
*/
@ -640,15 +748,19 @@ blkfront_backend_changed(device_t dev, XenbusState backend_state)
switch (backend_state) {
case XenbusStateUnknown:
case XenbusStateInitialising:
case XenbusStateInitWait:
case XenbusStateInitialised:
case XenbusStateClosed:
case XenbusStateReconfigured:
case XenbusStateReconfiguring:
case XenbusStateClosed:
break;
case XenbusStateInitWait:
blkfront_initialize(sc);
break;
case XenbusStateInitialised:
case XenbusStateConnected:
connect(sc);
blkfront_initialize(sc);
blkfront_connect(sc);
break;
case XenbusStateClosing:
@ -657,20 +769,7 @@ blkfront_backend_changed(device_t dev, XenbusState backend_state)
"Device in use; refusing to close");
else
blkfront_closing(dev);
#ifdef notyet
bd = bdget(sc->dev);
if (bd == NULL)
xenbus_dev_fatal(dev, -ENODEV, "bdget failed");
down(&bd->bd_sem);
if (sc->users > 0)
xenbus_dev_error(dev, -EBUSY,
"Device in use; refusing to close");
else
blkfront_closing(dev);
up(&bd->bd_sem);
bdput(bd);
#endif
break;
}
return (0);
@ -681,7 +780,7 @@ blkfront_backend_changed(device_t dev, XenbusState backend_state)
** the details about the physical device - #sectors, size, etc).
*/
static void
connect(struct xb_softc *sc)
blkfront_connect(struct xb_softc *sc)
{
device_t dev = sc->xb_dev;
unsigned long sectors, sector_size;
@ -694,20 +793,20 @@ connect(struct xb_softc *sc)
DPRINTK("blkfront.c:connect:%s.\n", xenbus_get_otherend_path(dev));
err = xenbus_gather(XBT_NIL, xenbus_get_otherend_path(dev),
"sectors", "%lu", &sectors,
"info", "%u", &binfo,
"sector-size", "%lu", &sector_size,
NULL);
err = xs_gather(XST_NIL, xenbus_get_otherend_path(dev),
"sectors", "%lu", &sectors,
"info", "%u", &binfo,
"sector-size", "%lu", &sector_size,
NULL);
if (err) {
xenbus_dev_fatal(dev, err,
"reading backend fields at %s",
xenbus_get_otherend_path(dev));
return;
}
err = xenbus_gather(XBT_NIL, xenbus_get_otherend_path(dev),
"feature-barrier", "%lu", &feature_barrier,
NULL);
err = xs_gather(XST_NIL, xenbus_get_otherend_path(dev),
"feature-barrier", "%lu", &feature_barrier,
NULL);
if (!err || feature_barrier)
sc->xb_flags |= XB_BARRIER;
@ -741,15 +840,16 @@ blkfront_closing(device_t dev)
{
struct xb_softc *sc = device_get_softc(dev);
xenbus_set_state(dev, XenbusStateClosing);
DPRINTK("blkfront_closing: %s removed\n", xenbus_get_node(dev));
if (sc->mi) {
DPRINTK("Calling xlvbd_del\n");
xlvbd_del(sc);
sc->mi = NULL;
if (sc->xb_disk != NULL) {
disk_destroy(sc->xb_disk);
sc->xb_disk = NULL;
}
xenbus_set_state(dev, XenbusStateClosed);
xenbus_set_state(dev, XenbusStateClosed);
}
@@ -778,11 +878,16 @@ flush_requests(struct xb_softc *sc)
notify_remote_via_irq(sc->irq);
}
static void blkif_restart_queue_callback(void *arg)
static void
blkif_restart_queue_callback(void *arg)
{
struct xb_softc *sc = arg;
mtx_lock(&sc->xb_io_lock);
xb_startio(sc);
mtx_unlock(&sc->xb_io_lock);
}
static int
@@ -874,20 +979,17 @@ xb_bio_command(struct xb_softc *sc)
return (NULL);
}
if (gnttab_alloc_grant_references(BLKIF_MAX_SEGMENTS_PER_REQUEST,
&cm->gref_head) < 0) {
if (gnttab_alloc_grant_references(sc->max_request_segments,
&cm->gref_head) != 0) {
gnttab_request_free_callback(&sc->callback,
blkif_restart_queue_callback, sc,
BLKIF_MAX_SEGMENTS_PER_REQUEST);
sc->max_request_segments);
xb_requeue_bio(sc, bp);
xb_enqueue_free(cm);
sc->xb_flags |= XB_FROZEN;
return (NULL);
}
/* XXX Can we grab refs before doing the load so that the ref can
* be filled out here?
*/
cm->bp = bp;
cm->data = bp->bio_data;
cm->datalen = bp->bio_bcount;
@@ -921,13 +1023,19 @@ blkif_queue_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
struct xb_softc *sc;
struct xb_command *cm;
blkif_request_t *ring_req;
struct blkif_request_segment *sg;
struct blkif_request_segment *last_block_sg;
grant_ref_t *sg_ref;
vm_paddr_t buffer_ma;
uint64_t fsect, lsect;
int ref, i, op;
int ref;
int op;
int block_segs;
cm = arg;
sc = cm->cm_sc;
//printf("%s: Start\n", __func__);
if (error) {
printf("error %d in blkif_queue_cb\n", error);
cm->bp->bio_error = EIO;
@@ -938,43 +1046,62 @@ blkif_queue_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
/* Fill out a communications ring structure. */
ring_req = RING_GET_REQUEST(&sc->ring, sc->ring.req_prod_pvt);
if (ring_req == NULL) {
/* XXX Is this possible? */
printf("ring_req NULL, requeuing\n");
xb_enqueue_ready(cm);
return;
}
ring_req->id = cm->req.id;
sc->ring.req_prod_pvt++;
ring_req->id = cm->id;
ring_req->operation = cm->operation;
ring_req->sector_number = cm->sector_number;
ring_req->handle = (blkif_vdev_t)(uintptr_t)sc->xb_disk;
ring_req->nr_segments = nsegs;
cm->nseg = nsegs;
for (i = 0; i < nsegs; i++) {
buffer_ma = segs[i].ds_addr;
fsect = (buffer_ma & PAGE_MASK) >> XBD_SECTOR_SHFT;
lsect = fsect + (segs[i].ds_len >> XBD_SECTOR_SHFT) - 1;
block_segs = MIN(nsegs, BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK);
sg = ring_req->seg;
last_block_sg = sg + block_segs;
sg_ref = cm->sg_refs;
KASSERT(lsect <= 7,
("XEN disk driver data cannot cross a page boundary"));
while (1) {
/* install a grant reference. */
ref = gnttab_claim_grant_reference(&cm->gref_head);
KASSERT( ref >= 0, ("grant_reference failed") );
while (sg < last_block_sg) {
buffer_ma = segs->ds_addr;
fsect = (buffer_ma & PAGE_MASK) >> XBD_SECTOR_SHFT;
lsect = fsect + (segs->ds_len >> XBD_SECTOR_SHFT) - 1;
gnttab_grant_foreign_access_ref(
ref,
xenbus_get_otherend_id(sc->xb_dev),
buffer_ma >> PAGE_SHIFT,
ring_req->operation & 1 ); /* ??? */
KASSERT(lsect <= 7, ("XEN disk driver data cannot "
"cross a page boundary"));
ring_req->seg[i] =
(struct blkif_request_segment) {
/* install a grant reference. */
ref = gnttab_claim_grant_reference(&cm->gref_head);
/*
* GNTTAB_LIST_END == 0xffffffff, but it is private
* to gnttab.c.
*/
KASSERT(ref != ~0, ("grant_reference failed"));
gnttab_grant_foreign_access_ref(
ref,
xenbus_get_otherend_id(sc->xb_dev),
buffer_ma >> PAGE_SHIFT,
ring_req->operation == BLKIF_OP_WRITE);
*sg_ref = ref;
*sg = (struct blkif_request_segment) {
.gref = ref,
.first_sect = fsect,
.last_sect = lsect };
}
sg++;
sg_ref++;
segs++;
nsegs--;
}
block_segs = MIN(nsegs, BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK);
if (block_segs == 0)
break;
sg = BLKRING_GET_SG_REQUEST(&sc->ring, sc->ring.req_prod_pvt);
sc->ring.req_prod_pvt++;
last_block_sg = sg + block_segs;
}
if (cm->operation == BLKIF_OP_READ)
op = BUS_DMASYNC_PREREAD;
@@ -984,15 +1111,10 @@ blkif_queue_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
op = 0;
bus_dmamap_sync(sc->xb_io_dmat, cm->map, op);
sc->ring.req_prod_pvt++;
/* Keep a private copy so we can reissue requests when recovering. */
cm->req = *ring_req;
gnttab_free_grant_references(cm->gref_head);
xb_enqueue_busy(cm);
gnttab_free_grant_references(cm->gref_head);
/*
* This flag means that we're probably executing in the busdma swi
* instead of in the startio context, so an explicit flush is needed.
@@ -1000,6 +1122,7 @@ blkif_queue_cb(void *arg, bus_dma_segment_t *segs, int nsegs, int error)
if (cm->cm_flags & XB_CMD_FROZEN)
flush_requests(sc);
//printf("%s: Done\n", __func__);
return;
}
@@ -1018,7 +1141,7 @@ xb_startio(struct xb_softc *sc)
mtx_assert(&sc->xb_io_lock, MA_OWNED);
while (!RING_FULL(&sc->ring)) {
while (RING_FREE_REQUESTS(&sc->ring) >= sc->max_request_blocks) {
if (sc->xb_flags & XB_FROZEN)
break;
@@ -1061,12 +1184,12 @@ blkif_int(void *xsc)
rp = sc->ring.sring->rsp_prod;
rmb(); /* Ensure we see queued responses up to 'rp'. */
for (i = sc->ring.rsp_cons; i != rp; i++) {
for (i = sc->ring.rsp_cons; i != rp;) {
bret = RING_GET_RESPONSE(&sc->ring, i);
cm = &sc->shadow[bret->id];
xb_remove_busy(cm);
blkif_completion(cm);
i += blkif_completion(cm);
if (cm->operation == BLKIF_OP_READ)
op = BUS_DMASYNC_POSTREAD;
@@ -1116,35 +1239,61 @@ blkif_int(void *xsc)
static void
blkif_free(struct xb_softc *sc, int suspend)
{
uint8_t *sring_page_ptr;
int i;
/* Prevent new requests being issued until we fix things up. */
/* Prevent new requests being issued until we fix things up. */
mtx_lock(&sc->xb_io_lock);
sc->connected = suspend ?
BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
mtx_unlock(&sc->xb_io_lock);
/* Free resources associated with old device channel. */
if (sc->ring_ref != GRANT_INVALID_REF) {
gnttab_end_foreign_access(sc->ring_ref,
sc->ring.sring);
sc->ring_ref = GRANT_INVALID_REF;
if (sc->ring.sring != NULL) {
sring_page_ptr = (uint8_t *)sc->ring.sring;
for (i = 0; i < sc->ring_pages; i++) {
if (sc->ring_ref[i] != GRANT_INVALID_REF) {
gnttab_end_foreign_access_ref(sc->ring_ref[i]);
sc->ring_ref[i] = GRANT_INVALID_REF;
}
sring_page_ptr += PAGE_SIZE;
}
free(sc->ring.sring, M_XENBLOCKFRONT);
sc->ring.sring = NULL;
}
if (sc->irq)
unbind_from_irqhandler(sc->irq);
sc->irq = 0;
if (sc->shadow) {
for (i = 0; i < sc->max_requests; i++) {
struct xb_command *cm;
cm = &sc->shadow[i];
if (cm->sg_refs != NULL) {
free(cm->sg_refs, M_XENBLOCKFRONT);
cm->sg_refs = NULL;
}
bus_dmamap_destroy(sc->xb_io_dmat, cm->map);
}
free(sc->shadow, M_XENBLOCKFRONT);
sc->shadow = NULL;
}
if (sc->irq) {
unbind_from_irqhandler(sc->irq);
sc->irq = 0;
}
}
static void
static int
blkif_completion(struct xb_command *s)
{
int i;
for (i = 0; i < s->req.nr_segments; i++)
gnttab_end_foreign_access(s->req.seg[i].gref, 0UL);
//printf("%s: Req %p(%d)\n", __func__, s, s->nseg);
gnttab_end_foreign_access_references(s->nseg, s->sg_refs);
return (BLKIF_SEGS_TO_BLOCKS(s->nseg));
}
#if 0
static void
blkif_recover(struct xb_softc *sc)
{
@@ -1157,6 +1306,7 @@ blkif_recover(struct xb_softc *sc)
* has been removed until further notice.
*/
}
#endif
/* ** Driver registration ** */
static device_method_t blkfront_methods[] = {
@@ -1169,7 +1319,7 @@ static device_method_t blkfront_methods[] = {
DEVMETHOD(device_resume, blkfront_resume),
/* Xenbus interface */
DEVMETHOD(xenbus_backend_changed, blkfront_backend_changed),
DEVMETHOD(xenbus_otherend_changed, blkfront_backend_changed),
{ 0, 0 }
};
@@ -1181,4 +1331,4 @@ static driver_t blkfront_driver = {
};
devclass_t blkfront_devclass;
DRIVER_MODULE(xbd, xenbus, blkfront_driver, blkfront_devclass, 0, 0);
DRIVER_MODULE(xbd, xenbusb_front, blkfront_driver, blkfront_devclass, 0, 0);


@@ -32,7 +32,43 @@
#ifndef __XEN_DRIVERS_BLOCK_H__
#define __XEN_DRIVERS_BLOCK_H__
#include <xen/interface/io/blkif.h>
#include <xen/blkif.h>
/**
* The maximum number of outstanding requests blocks (request headers plus
* additional segment blocks) we will allow in a negotiated block-front/back
* communication channel.
*/
#define XBF_MAX_REQUESTS 256
/**
* The maximum mapped region size per request we will allow in a negotiated
* block-front/back communication channel.
*
* \note We reserve a segment from the maximum supported by the transport to
* guarantee we can handle an unaligned transfer without the need to
* use a bounce buffer.
*/
#define XBF_MAX_REQUEST_SIZE \
MIN(MAXPHYS, (BLKIF_MAX_SEGMENTS_PER_REQUEST - 1) * PAGE_SIZE)
/**
* The maximum number of segments (within a request header and accompanying
* segment blocks) per request we will allow in a negotiated block-front/back
* communication channel.
*/
#define XBF_MAX_SEGMENTS_PER_REQUEST \
(MIN(BLKIF_MAX_SEGMENTS_PER_REQUEST, \
(XBF_MAX_REQUEST_SIZE / PAGE_SIZE) + 1))
/**
* The maximum number of shared memory ring pages we will allow in a
* negotiated block-front/back communication channel. Allow enough
* ring space for all requests to be XBF_MAX_REQUEST_SIZE'd.
*/
#define XBF_MAX_RING_PAGES \
BLKIF_RING_PAGES(BLKIF_SEGS_TO_BLOCKS(XBF_MAX_SEGMENTS_PER_REQUEST) \
* XBF_MAX_REQUESTS)
struct xlbd_type_info
{
@@ -62,19 +98,19 @@ struct xb_command {
#define XB_ON_XBQ_COMPLETE (1<<5)
#define XB_ON_XBQ_MASK ((1<<2)|(1<<3)|(1<<4)|(1<<5))
bus_dmamap_t map;
blkif_request_t req;
uint64_t id;
grant_ref_t *sg_refs;
struct bio *bp;
grant_ref_t gref_head;
void *data;
size_t datalen;
u_int nseg;
int operation;
blkif_sector_t sector_number;
int status;
void (* cm_complete)(struct xb_command *);
};
#define BLK_RING_SIZE __RING_SIZE((blkif_sring_t *)0, PAGE_SIZE)
#define XBQ_FREE 0
#define XBQ_BIO 1
#define XBQ_READY 2
@@ -108,10 +144,14 @@ struct xb_softc {
int vdevice;
blkif_vdev_t handle;
int connected;
int ring_ref;
u_int ring_pages;
uint32_t max_requests;
uint32_t max_request_segments;
uint32_t max_request_blocks;
uint32_t max_request_size;
grant_ref_t ring_ref[XBF_MAX_RING_PAGES];
blkif_front_ring_t ring;
unsigned int irq;
struct xlbd_major_info *mi;
struct gnttab_free_callback callback;
TAILQ_HEAD(,xb_command) cm_free;
TAILQ_HEAD(,xb_command) cm_ready;
@@ -126,11 +166,12 @@ struct xb_softc {
*/
int users;
struct mtx xb_io_lock;
struct xb_command shadow[BLK_RING_SIZE];
struct xb_command *shadow;
};
int xlvbd_add(struct xb_softc *, blkif_sector_t capacity, int device,
uint16_t vdisk_info, uint16_t sector_size);
int xlvbd_add(struct xb_softc *, blkif_sector_t sectors, int device,
uint16_t vdisk_info, unsigned long sector_size);
void xlvbd_del(struct xb_softc *);
#define XBQ_ADD(sc, qname) \
@@ -188,7 +229,8 @@ void xlvbd_del(struct xb_softc *);
struct xb_command *cm; \
\
if ((cm = TAILQ_FIRST(&sc->cm_ ## name)) != NULL) { \
if ((cm->cm_flags & XB_ON_ ## index) == 0) { \
if ((cm->cm_flags & XB_ON_XBQ_MASK) != \
XB_ON_ ## index) { \
printf("command %p not in queue, " \
"flags = %#x, bit = %#x\n", cm, \
cm->cm_flags, XB_ON_ ## index); \
@@ -203,7 +245,7 @@ void xlvbd_del(struct xb_softc *);
static __inline void \
xb_remove_ ## name (struct xb_command *cm) \
{ \
if ((cm->cm_flags & XB_ON_ ## index) == 0) { \
if ((cm->cm_flags & XB_ON_XBQ_MASK) != XB_ON_ ## index){\
printf("command %p not in queue, flags = %#x, " \
"bit = %#x\n", cm, cm->cm_flags, \
XB_ON_ ## index); \


@@ -0,0 +1,493 @@
/*-
* Copyright (c) 2010 Justin T. Gibbs, Spectra Logic Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*/
/*-
* PV suspend/resume support:
*
* Copyright (c) 2004 Christian Limpach.
* Copyright (c) 2004-2006,2008 Kip Macy
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by Christian Limpach.
* 4. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
/*-
* HVM suspend/resume support:
*
* Copyright (c) 2008 Citrix Systems, Inc.
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
/**
* \file control.c
*
* \brief Device driver to respond to control domain events that impact
* this VM.
*/
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/bio.h>
#include <sys/bus.h>
#include <sys/conf.h>
#include <sys/disk.h>
#include <sys/fcntl.h>
#include <sys/filedesc.h>
#include <sys/kdb.h>
#include <sys/module.h>
#include <sys/namei.h>
#include <sys/proc.h>
#include <sys/reboot.h>
#include <sys/rman.h>
#include <sys/taskqueue.h>
#include <sys/types.h>
#include <sys/vnode.h>
#ifndef XENHVM
#include <sys/sched.h>
#include <sys/smp.h>
#endif
#include <geom/geom.h>
#include <machine/_inttypes.h>
#include <machine/xen/xen-os.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/vm_kern.h>
#include <xen/blkif.h>
#include <xen/evtchn.h>
#include <xen/gnttab.h>
#include <xen/xen_intr.h>
#include <xen/interface/event_channel.h>
#include <xen/interface/grant_table.h>
#include <xen/xenbus/xenbusvar.h>
#define NUM_ELEMENTS(x) (sizeof(x) / sizeof(*(x)))
/*--------------------------- Forward Declarations --------------------------*/
/** Function signature for shutdown event handlers. */
typedef void (xctrl_shutdown_handler_t)(void);
static xctrl_shutdown_handler_t xctrl_poweroff;
static xctrl_shutdown_handler_t xctrl_reboot;
static xctrl_shutdown_handler_t xctrl_suspend;
static xctrl_shutdown_handler_t xctrl_crash;
static xctrl_shutdown_handler_t xctrl_halt;
/*-------------------------- Private Data Structures -------------------------*/
/** Element type for lookup table of event name to handler. */
struct xctrl_shutdown_reason {
const char *name;
xctrl_shutdown_handler_t *handler;
};
/** Lookup table for shutdown event name to handler. */
static struct xctrl_shutdown_reason xctrl_shutdown_reasons[] = {
{ "poweroff", xctrl_poweroff },
{ "reboot", xctrl_reboot },
{ "suspend", xctrl_suspend },
{ "crash", xctrl_crash },
{ "halt", xctrl_halt },
};
struct xctrl_softc {
/** Must be first */
struct xs_watch xctrl_watch;
};
/*------------------------------ Event Handlers ------------------------------*/
static void
xctrl_poweroff()
{
shutdown_nice(RB_POWEROFF|RB_HALT);
}
static void
xctrl_reboot()
{
shutdown_nice(0);
}
#ifndef XENHVM
extern void xencons_suspend(void);
extern void xencons_resume(void);
/* Full PV mode suspension. */
static void
xctrl_suspend()
{
int i, j, k, fpp;
unsigned long max_pfn, start_info_mfn;
#ifdef SMP
cpumask_t map;
/*
* Bind us to CPU 0 and stop any other VCPUs.
*/
thread_lock(curthread);
sched_bind(curthread, 0);
thread_unlock(curthread);
KASSERT(PCPU_GET(cpuid) == 0, ("xen_suspend: not running on cpu 0"));
map = PCPU_GET(other_cpus) & ~stopped_cpus;
if (map)
stop_cpus(map);
#endif
if (DEVICE_SUSPEND(root_bus) != 0) {
printf("xen_suspend: device_suspend failed\n");
#ifdef SMP
if (map)
restart_cpus(map);
#endif
return;
}
local_irq_disable();
xencons_suspend();
gnttab_suspend();
max_pfn = HYPERVISOR_shared_info->arch.max_pfn;
void *shared_info = HYPERVISOR_shared_info;
HYPERVISOR_shared_info = NULL;
pmap_kremove((vm_offset_t) shared_info);
PT_UPDATES_FLUSH();
xen_start_info->store_mfn = MFNTOPFN(xen_start_info->store_mfn);
xen_start_info->console.domU.mfn = MFNTOPFN(xen_start_info->console.domU.mfn);
/*
* We'll stop somewhere inside this hypercall. When it returns,
* we'll start resuming after the restore.
*/
start_info_mfn = VTOMFN(xen_start_info);
pmap_suspend();
HYPERVISOR_suspend(start_info_mfn);
pmap_resume();
pmap_kenter_ma((vm_offset_t) shared_info, xen_start_info->shared_info);
HYPERVISOR_shared_info = shared_info;
HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
VTOMFN(xen_pfn_to_mfn_frame_list_list);
fpp = PAGE_SIZE/sizeof(unsigned long);
for (i = 0, j = 0, k = -1; i < max_pfn; i += fpp, j++) {
if ((j % fpp) == 0) {
k++;
xen_pfn_to_mfn_frame_list_list[k] =
VTOMFN(xen_pfn_to_mfn_frame_list[k]);
j = 0;
}
xen_pfn_to_mfn_frame_list[k][j] =
VTOMFN(&xen_phys_machine[i]);
}
HYPERVISOR_shared_info->arch.max_pfn = max_pfn;
gnttab_resume();
irq_resume();
local_irq_enable();
xencons_resume();
#ifdef CONFIG_SMP
for_each_cpu(i)
vcpu_prepare(i);
#endif
/*
* Only resume xenbus /after/ we've prepared our VCPUs; otherwise
* the VCPU hotplug callback can race with our vcpu_prepare
*/
DEVICE_RESUME(root_bus);
#ifdef SMP
thread_lock(curthread);
sched_unbind(curthread);
thread_unlock(curthread);
if (map)
restart_cpus(map);
#endif
}
static void
xen_pv_shutdown_final(void *arg, int howto)
{
/*
* Inform the hypervisor that shutdown is complete.
* This is not necessary in HVM domains since Xen
* emulates ACPI in that mode and FreeBSD's ACPI
* support will request this transition.
*/
if (howto & (RB_HALT | RB_POWEROFF))
HYPERVISOR_shutdown(SHUTDOWN_poweroff);
else
HYPERVISOR_shutdown(SHUTDOWN_reboot);
}
#else
extern void xenpci_resume(void);
/* HVM mode suspension. */
static void
xctrl_suspend()
{
int suspend_cancelled;
if (DEVICE_SUSPEND(root_bus)) {
printf("xen_suspend: device_suspend failed\n");
return;
}
/*
* Make sure we don't change CPUs or switch to some other
* thread for the duration.
*/
critical_enter();
/*
* Prevent any races with evtchn_interrupt() handler.
*/
irq_suspend();
disable_intr();
suspend_cancelled = HYPERVISOR_suspend(0);
if (!suspend_cancelled)
xenpci_resume();
/*
* Re-enable interrupts and put the scheduler back to normal.
*/
enable_intr();
critical_exit();
/*
* FreeBSD really needs to add DEVICE_SUSPEND_CANCEL or
* similar.
*/
if (!suspend_cancelled)
DEVICE_RESUME(root_bus);
}
#endif
static void
xctrl_crash()
{
panic("Xen directed crash");
}
static void
xctrl_halt()
{
shutdown_nice(RB_HALT);
}
/*------------------------------ Event Reception -----------------------------*/
static void
xctrl_on_watch_event(struct xs_watch *watch, const char **vec, unsigned int len)
{
struct xctrl_shutdown_reason *reason;
struct xctrl_shutdown_reason *last_reason;
char *result;
int error;
int result_len;
error = xs_read(XST_NIL, "control", "shutdown",
&result_len, (void **)&result);
if (error != 0)
return;
reason = xctrl_shutdown_reasons;
last_reason = reason + NUM_ELEMENTS(xctrl_shutdown_reasons);
while (reason < last_reason) {
if (!strcmp(result, reason->name)) {
reason->handler();
break;
}
reason++;
}
free(result, M_XENSTORE);
}
/*------------------ Private Device Attachment Functions --------------------*/
/**
* \brief Identify instances of this device type in the system.
*
* \param driver The driver performing this identify action.
* \param parent The NewBus parent device for any devices this method adds.
*/
static void
xctrl_identify(driver_t *driver __unused, device_t parent)
{
/*
* A single device instance for our driver is always present
* in a system operating under Xen.
*/
BUS_ADD_CHILD(parent, 0, driver->name, 0);
}
/**
* \brief Probe for the existence of the Xen Control device.
*
* \param dev NewBus device_t for this Xen control instance.
*
* \return Always returns 0 indicating success.
*/
static int
xctrl_probe(device_t dev)
{
device_set_desc(dev, "Xen Control Device");
return (0);
}
/**
* \brief Attach the Xen control device.
*
* \param dev NewBus device_t for this Xen control instance.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xctrl_attach(device_t dev)
{
struct xctrl_softc *xctrl;
xctrl = device_get_softc(dev);
/* Activate watch */
xctrl->xctrl_watch.node = "control/shutdown";
xctrl->xctrl_watch.callback = xctrl_on_watch_event;
xs_register_watch(&xctrl->xctrl_watch);
#ifndef XENHVM
EVENTHANDLER_REGISTER(shutdown_final, xen_pv_shutdown_final, NULL,
SHUTDOWN_PRI_LAST);
#endif
return (0);
}
/**
* \brief Detach the Xen control device.
*
* \param dev NewBus device_t for this Xen control device instance.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xctrl_detach(device_t dev)
{
struct xctrl_softc *xctrl;
xctrl = device_get_softc(dev);
/* Release watch */
xs_unregister_watch(&xctrl->xctrl_watch);
return (0);
}
/*-------------------- Private Device Attachment Data -----------------------*/
static device_method_t xctrl_methods[] = {
/* Device interface */
DEVMETHOD(device_identify, xctrl_identify),
DEVMETHOD(device_probe, xctrl_probe),
DEVMETHOD(device_attach, xctrl_attach),
DEVMETHOD(device_detach, xctrl_detach),
{ 0, 0 }
};
DEFINE_CLASS_0(xctrl, xctrl_driver, xctrl_methods, sizeof(struct xctrl_softc));
devclass_t xctrl_devclass;
DRIVER_MODULE(xctrl, xenstore, xctrl_driver, xctrl_devclass, 0, 0);


@@ -91,8 +91,6 @@ __FBSDID("$FreeBSD$");
#define XN_CSUM_FEATURES (CSUM_TCP | CSUM_UDP | CSUM_TSO)
#define GRANT_INVALID_REF 0
#define NET_TX_RING_SIZE __RING_SIZE((netif_tx_sring_t *)0, PAGE_SIZE)
#define NET_RX_RING_SIZE __RING_SIZE((netif_rx_sring_t *)0, PAGE_SIZE)
@@ -373,7 +371,8 @@ xennet_get_rx_ref(struct netfront_info *np, RING_IDX ri)
{
int i = xennet_rxidx(ri);
grant_ref_t ref = np->grant_rx_ref[i];
np->grant_rx_ref[i] = GRANT_INVALID_REF;
KASSERT(ref != GRANT_REF_INVALID, ("Invalid grant reference!\n"));
np->grant_rx_ref[i] = GRANT_REF_INVALID;
return ref;
}
@@ -404,7 +403,7 @@ xen_net_read_mac(device_t dev, uint8_t mac[])
int error, i;
char *s, *e, *macstr;
error = xenbus_read(XBT_NIL, xenbus_get_node(dev), "mac", NULL,
error = xs_read(XST_NIL, xenbus_get_node(dev), "mac", NULL,
(void **) &macstr);
if (error)
return (error);
@@ -413,12 +412,12 @@ xen_net_read_mac(device_t dev, uint8_t mac[])
for (i = 0; i < ETHER_ADDR_LEN; i++) {
mac[i] = strtoul(s, &e, 16);
if (s == e || (e[0] != ':' && e[0] != 0)) {
free(macstr, M_DEVBUF);
free(macstr, M_XENBUS);
return (ENOENT);
}
s = &e[1];
}
free(macstr, M_DEVBUF);
free(macstr, M_XENBUS);
return (0);
}
@@ -483,7 +482,7 @@ static int
talk_to_backend(device_t dev, struct netfront_info *info)
{
const char *message;
struct xenbus_transaction xbt;
struct xs_transaction xst;
const char *node = xenbus_get_node(dev);
int err;
@@ -499,54 +498,54 @@ talk_to_backend(device_t dev, struct netfront_info *info)
goto out;
again:
err = xenbus_transaction_start(&xbt);
err = xs_transaction_start(&xst);
if (err) {
xenbus_dev_fatal(dev, err, "starting transaction");
goto destroy_ring;
}
err = xenbus_printf(xbt, node, "tx-ring-ref","%u",
err = xs_printf(xst, node, "tx-ring-ref","%u",
info->tx_ring_ref);
if (err) {
message = "writing tx ring-ref";
goto abort_transaction;
}
err = xenbus_printf(xbt, node, "rx-ring-ref","%u",
err = xs_printf(xst, node, "rx-ring-ref","%u",
info->rx_ring_ref);
if (err) {
message = "writing rx ring-ref";
goto abort_transaction;
}
err = xenbus_printf(xbt, node,
err = xs_printf(xst, node,
"event-channel", "%u", irq_to_evtchn_port(info->irq));
if (err) {
message = "writing event-channel";
goto abort_transaction;
}
err = xenbus_printf(xbt, node, "request-rx-copy", "%u",
err = xs_printf(xst, node, "request-rx-copy", "%u",
info->copying_receiver);
if (err) {
message = "writing request-rx-copy";
goto abort_transaction;
}
err = xenbus_printf(xbt, node, "feature-rx-notify", "%d", 1);
err = xs_printf(xst, node, "feature-rx-notify", "%d", 1);
if (err) {
message = "writing feature-rx-notify";
goto abort_transaction;
}
err = xenbus_printf(xbt, node, "feature-sg", "%d", 1);
err = xs_printf(xst, node, "feature-sg", "%d", 1);
if (err) {
message = "writing feature-sg";
goto abort_transaction;
}
#if __FreeBSD_version >= 700000
err = xenbus_printf(xbt, node, "feature-gso-tcpv4", "%d", 1);
err = xs_printf(xst, node, "feature-gso-tcpv4", "%d", 1);
if (err) {
message = "writing feature-gso-tcpv4";
goto abort_transaction;
}
#endif
err = xenbus_transaction_end(xbt, 0);
err = xs_transaction_end(xst, 0);
if (err) {
if (err == EAGAIN)
goto again;
@@ -557,7 +556,7 @@ talk_to_backend(device_t dev, struct netfront_info *info)
return 0;
abort_transaction:
xenbus_transaction_end(xbt, 1);
xs_transaction_end(xst, 1);
xenbus_dev_fatal(dev, err, "%s", message);
destroy_ring:
netif_free(info);
@@ -576,8 +575,8 @@ setup_device(device_t dev, struct netfront_info *info)
ifp = info->xn_ifp;
info->tx_ring_ref = GRANT_INVALID_REF;
info->rx_ring_ref = GRANT_INVALID_REF;
info->tx_ring_ref = GRANT_REF_INVALID;
info->rx_ring_ref = GRANT_REF_INVALID;
info->rx.sring = NULL;
info->tx.sring = NULL;
info->irq = 0;
@@ -750,7 +749,7 @@ netif_release_tx_bufs(struct netfront_info *np)
GNTMAP_readonly);
gnttab_release_grant_reference(&np->gref_tx_head,
np->grant_tx_ref[i]);
np->grant_tx_ref[i] = GRANT_INVALID_REF;
np->grant_tx_ref[i] = GRANT_REF_INVALID;
add_id_to_freelist(np->tx_mbufs, i);
np->xn_cdata.xn_tx_chain_cnt--;
if (np->xn_cdata.xn_tx_chain_cnt < 0) {
@@ -854,7 +853,8 @@ network_alloc_rx_buffers(struct netfront_info *sc)
sc->rx_mbufs[id] = m_new;
ref = gnttab_claim_grant_reference(&sc->gref_rx_head);
KASSERT((short)ref >= 0, ("negative ref"));
KASSERT(ref != GNTTAB_LIST_END,
("reserved grant references exhausted"));
sc->grant_rx_ref[id] = ref;
vaddr = mtod(m_new, vm_offset_t);
@@ -1135,7 +1135,7 @@ xn_txeof(struct netfront_info *np)
np->grant_tx_ref[id]);
gnttab_release_grant_reference(
&np->gref_tx_head, np->grant_tx_ref[id]);
np->grant_tx_ref[id] = GRANT_INVALID_REF;
np->grant_tx_ref[id] = GRANT_REF_INVALID;
np->tx_mbufs[id] = NULL;
add_id_to_freelist(np->tx_mbufs, id);
@@ -1318,12 +1318,13 @@ xennet_get_responses(struct netfront_info *np,
* the backend driver. In future this should flag the bad
* situation to the system controller to reboot the backend.
*/
if (ref == GRANT_INVALID_REF) {
if (ref == GRANT_REF_INVALID) {
#if 0
if (net_ratelimit())
WPRINTK("Bad rx response id %d.\n", rx->id);
#endif
printf("%s: Bad rx response id %d.\n", __func__,rx->id);
err = EINVAL;
goto next;
}
@@ -1384,7 +1385,7 @@ xennet_get_responses(struct netfront_info *np,
err = ENOENT;
printf("%s: cons %u frags %u rp %u, not enough frags\n",
__func__, *cons, frags, rp);
break;
break;
}
/*
* Note that m can be NULL, if rx->status < 0 or if
@@ -1526,6 +1527,11 @@ xn_assemble_tx_request(struct netfront_info *sc, struct mbuf *m_head)
* tell the TCP stack to generate a shorter chain of packets.
*/
if (nfrags > MAX_TX_REQ_FRAGS) {
#ifdef DEBUG
printf("%s: nfrags %d > MAX_TX_REQ_FRAGS %d, netback "
"won't be able to handle it, dropping\n",
__func__, nfrags, MAX_TX_REQ_FRAGS);
#endif
m_freem(m_head);
return (EMSGSIZE);
}
@@ -1881,11 +1887,11 @@ network_connect(struct netfront_info *np)
netif_rx_request_t *req;
u_int feature_rx_copy, feature_rx_flip;
error = xenbus_scanf(XBT_NIL, xenbus_get_otherend_path(np->xbdev),
error = xs_scanf(XST_NIL, xenbus_get_otherend_path(np->xbdev),
"feature-rx-copy", NULL, "%u", &feature_rx_copy);
if (error)
feature_rx_copy = 0;
error = xenbus_scanf(XBT_NIL, xenbus_get_otherend_path(np->xbdev),
error = xs_scanf(XST_NIL, xenbus_get_otherend_path(np->xbdev),
"feature-rx-flip", NULL, "%u", &feature_rx_flip);
if (error)
feature_rx_flip = 1;
@@ -1999,14 +2005,14 @@ create_netdev(device_t dev)
/* Initialise {tx,rx}_skbs to be a free chain containing every entry. */
for (i = 0; i <= NET_TX_RING_SIZE; i++) {
np->tx_mbufs[i] = (void *) ((u_long) i+1);
np->grant_tx_ref[i] = GRANT_INVALID_REF;
np->grant_tx_ref[i] = GRANT_REF_INVALID;
}
np->tx_mbufs[NET_TX_RING_SIZE] = (void *)0;
for (i = 0; i <= NET_RX_RING_SIZE; i++) {
np->rx_mbufs[i] = NULL;
np->grant_rx_ref[i] = GRANT_INVALID_REF;
np->grant_rx_ref[i] = GRANT_REF_INVALID;
}
/* A grant for every tx ring slot */
if (gnttab_alloc_grant_references(NET_TX_RING_SIZE,
@@ -2128,8 +2134,8 @@ netif_disconnect_backend(struct netfront_info *info)
end_access(info->tx_ring_ref, info->tx.sring);
end_access(info->rx_ring_ref, info->rx.sring);
info->tx_ring_ref = GRANT_INVALID_REF;
info->rx_ring_ref = GRANT_INVALID_REF;
info->tx_ring_ref = GRANT_REF_INVALID;
info->rx_ring_ref = GRANT_REF_INVALID;
info->tx.sring = NULL;
info->rx.sring = NULL;
@@ -2143,7 +2149,7 @@ netif_disconnect_backend(struct netfront_info *info)
static void
end_access(int ref, void *page)
{
if (ref != GRANT_INVALID_REF)
if (ref != GRANT_REF_INVALID)
gnttab_end_foreign_access(ref, page);
}
@@ -2171,7 +2177,7 @@ static device_method_t netfront_methods[] = {
DEVMETHOD(device_resume, netfront_resume),
/* Xenbus interface */
DEVMETHOD(xenbus_backend_changed, netfront_backend_changed),
DEVMETHOD(xenbus_otherend_changed, netfront_backend_changed),
{ 0, 0 }
};
@@ -2183,4 +2189,4 @@ static driver_t netfront_driver = {
};
devclass_t netfront_devclass;
DRIVER_MODULE(xe, xenbus, netfront_driver, netfront_devclass, 0, 0);
DRIVER_MODULE(xe, xenbusb_front, netfront_driver, netfront_devclass, 0, 0);


@@ -181,6 +181,49 @@ bind_listening_port_to_irqhandler(unsigned int remote_domain,
return (0);
}
int
bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
unsigned int remote_port, const char *devname, driver_intr_t handler,
void *arg, unsigned long irqflags, unsigned int *irqp)
{
struct evtchn_bind_interdomain bind_interdomain;
unsigned int irq;
int error;
irq = alloc_xen_irq();
if (irq < 0)
return irq;
mtx_lock(&irq_evtchn[irq].lock);
bind_interdomain.remote_dom = remote_domain;
bind_interdomain.remote_port = remote_port;
error = HYPERVISOR_event_channel_op(EVTCHNOP_bind_interdomain,
&bind_interdomain);
if (error) {
mtx_unlock(&irq_evtchn[irq].lock);
free_xen_irq(irq);
return (-error);
}
irq_evtchn[irq].handler = handler;
irq_evtchn[irq].arg = arg;
irq_evtchn[irq].evtchn = bind_interdomain.local_port;
irq_evtchn[irq].close = 1;
irq_evtchn[irq].mpsafe = (irqflags & INTR_MPSAFE) != 0;
evtchn_to_irq[bind_interdomain.local_port] = irq;
unmask_evtchn(bind_interdomain.local_port);
mtx_unlock(&irq_evtchn[irq].lock);
if (irqp)
*irqp = irq;
return (0);
}
int
bind_caller_port_to_irqhandler(unsigned int caller_port,
const char *devname, driver_intr_t handler, void *arg,


@@ -66,6 +66,7 @@ __FBSDID("$FreeBSD$");
char *hypercall_stubs;
shared_info_t *HYPERVISOR_shared_info;
static vm_paddr_t shared_info_pa;
static device_t nexus;
/*
* This is used to find our platform device instance.
@@ -80,7 +81,7 @@ xenpci_cpuid_base(void)
{
uint32_t base, regs[4];
for (base = 0x40000000; base < 0x40001000; base += 0x100) {
for (base = 0x40000000; base < 0x40010000; base += 0x100) {
do_cpuid(base, regs);
if (!memcmp("XenVMMXenVMM", &regs[1], 12)
&& (regs[0] - base) >= 2)
@@ -204,14 +205,21 @@ xenpci_allocate_resources(device_t dev)
scp->res_irq = bus_alloc_resource_any(dev, SYS_RES_IRQ,
&scp->rid_irq, RF_SHAREABLE|RF_ACTIVE);
if (scp->res_irq == NULL)
if (scp->res_irq == NULL) {
printf("xenpci Could not allocate irq.\n");
goto errexit;
}
scp->rid_memory = PCIR_BAR(1);
scp->res_memory = bus_alloc_resource_any(dev, SYS_RES_MEMORY,
&scp->rid_memory, RF_ACTIVE);
if (scp->res_memory == NULL)
if (scp->res_memory == NULL) {
printf("xenpci Could not allocate memory bar.\n");
goto errexit;
}
scp->phys_next = rman_get_start(scp->res_memory);
return (0);
errexit:
@@ -254,6 +262,36 @@ xenpci_alloc_space(size_t sz, vm_paddr_t *pa)
}
}
static struct resource *
xenpci_alloc_resource(device_t dev, device_t child, int type, int *rid,
u_long start, u_long end, u_long count, u_int flags)
{
return (BUS_ALLOC_RESOURCE(nexus, child, type, rid, start,
end, count, flags));
}
static int
xenpci_release_resource(device_t dev, device_t child, int type, int rid,
struct resource *r)
{
return (BUS_RELEASE_RESOURCE(nexus, child, type, rid, r));
}
static int
xenpci_activate_resource(device_t dev, device_t child, int type, int rid,
struct resource *r)
{
return (BUS_ACTIVATE_RESOURCE(nexus, child, type, rid, r));
}
static int
xenpci_deactivate_resource(device_t dev, device_t child, int type,
int rid, struct resource *r)
{
return (BUS_DEACTIVATE_RESOURCE(nexus, child, type, rid, r));
}
/*
* Called very early in the resume sequence - reinitialise the various
* bits of Xen machinery including the hypercall page and the shared
@@ -303,20 +341,36 @@ xenpci_probe(device_t dev)
static int
xenpci_attach(device_t dev)
{
int error;
int error;
struct xenpci_softc *scp = device_get_softc(dev);
struct xen_add_to_physmap xatp;
vm_offset_t shared_va;
devclass_t dc;
/*
* Find and record nexus0. Since we are not really on the
* PCI bus, all resource operations are directed to nexus
* instead of through our parent.
*/
if ((dc = devclass_find("nexus")) == 0
|| (nexus = devclass_get_device(dc, 0)) == 0) {
device_printf(dev, "unable to find nexus.");
return (ENOENT);
}
error = xenpci_allocate_resources(dev);
if (error)
if (error) {
device_printf(dev, "xenpci_allocate_resources failed(%d).\n",
error);
goto errexit;
scp->phys_next = rman_get_start(scp->res_memory);
}
error = xenpci_init_hypercall_stubs(dev, scp);
if (error)
if (error) {
device_printf(dev, "xenpci_init_hypercall_stubs failed(%d).\n",
error);
goto errexit;
}
setup_xen_features();
@@ -346,7 +400,7 @@ xenpci_attach(device_t dev)
* Undo anything we may have done.
*/
xenpci_deallocate_resources(dev);
return (error);
return (error);
}
/*
@@ -364,8 +418,9 @@ xenpci_detach(device_t dev)
*/
if (scp->intr_cookie != NULL) {
if (BUS_TEARDOWN_INTR(parent, dev,
scp->res_irq, scp->intr_cookie) != 0)
printf("intr teardown failed.. continuing\n");
scp->res_irq, scp->intr_cookie) != 0)
device_printf(dev,
"intr teardown failed.. continuing\n");
scp->intr_cookie = NULL;
}
@@ -386,6 +441,10 @@ static device_method_t xenpci_methods[] = {
/* Bus interface */
DEVMETHOD(bus_add_child, bus_generic_add_child),
DEVMETHOD(bus_alloc_resource, xenpci_alloc_resource),
DEVMETHOD(bus_release_resource, xenpci_release_resource),
DEVMETHOD(bus_activate_resource, xenpci_activate_resource),
DEVMETHOD(bus_deactivate_resource, xenpci_deactivate_resource),
{ 0, 0 }
};


@@ -722,7 +722,9 @@ char *bootmem_start, *bootmem_current, *bootmem_end;
pteinfo_t *pteinfo_list;
void initvalues(start_info_t *startinfo);
struct ringbuf_head *xen_store; /* XXX move me */
struct xenstore_domain_interface;
extern struct xenstore_domain_interface *xen_store;
char *console_page;
void *
@@ -1082,7 +1084,7 @@ initvalues(start_info_t *startinfo)
HYPERVISOR_shared_info = (shared_info_t *)cur_space;
cur_space += PAGE_SIZE;
xen_store = (struct ringbuf_head *)cur_space;
xen_store = (struct xenstore_domain_interface *)cur_space;
cur_space += PAGE_SIZE;
console_page = (char *)cur_space;

sys/xen/blkif.h Normal file

@@ -0,0 +1,145 @@
/*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to
* deal in the Software without restriction, including without limitation the
* rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
* sell copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*
* $FreeBSD$
*/
#ifndef __XEN_BLKIF_H__
#define __XEN_BLKIF_H__
#include <xen/interface/io/ring.h>
#include <xen/interface/io/blkif.h>
#include <xen/interface/io/protocols.h>
/* Not a real protocol. Used to generate ring structs which contain
* the elements common to all protocols only. This way we get a
* compiler-checkable way to use common struct elements, so we can
* avoid using switch(protocol) in a number of places. */
struct blkif_common_request {
char dummy;
};
struct blkif_common_response {
char dummy;
};
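The "not a real protocol" trick above is worth unpacking: every ring variant generated by DEFINE_RING_TYPES lays out its bookkeeping fields (producer/consumer indexes) identically, so a pointer to any variant can be treated as a pointer to the common variant for index arithmetic. A minimal standalone sketch, with illustrative struct and function names that are not part of this header:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-ins for the structs DEFINE_RING_TYPES generates for
 * each protocol; only the leading bookkeeping fields matter here. */
struct common_ring { unsigned int req_prod, rsp_prod; char ring[1]; };
struct x86_32_ring { unsigned int req_prod, rsp_prod; int  ring[1]; };
struct x86_64_ring { unsigned int req_prod, rsp_prod; long ring[1]; };

/* Protocol-independent code can manipulate the indexes of any variant
 * through a pointer to the common type, because the bookkeeping fields
 * sit at the same offsets in every variant -- no switch(protocol). */
static unsigned int
pending_requests(const struct common_ring *c)
{
	return (c->req_prod - c->rsp_prod);
}
```

This is exactly why `union blkif_back_rings` below can expose `common` alongside the per-ABI rings.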
/* i386 protocol version */
#pragma pack(push, 4)
struct blkif_x86_32_request {
uint8_t operation; /* BLKIF_OP_??? */
uint8_t nr_segments; /* number of segments */
blkif_vdev_t handle; /* only for read/write requests */
uint64_t id; /* private guest value, echoed in resp */
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK];
};
struct blkif_x86_32_response {
uint64_t id; /* copied from request */
uint8_t operation; /* copied from request */
int16_t status; /* BLKIF_RSP_??? */
};
typedef struct blkif_x86_32_request blkif_x86_32_request_t;
typedef struct blkif_x86_32_response blkif_x86_32_response_t;
#pragma pack(pop)
/* x86_64 protocol version */
struct blkif_x86_64_request {
uint8_t operation; /* BLKIF_OP_??? */
uint8_t nr_segments; /* number of segments */
blkif_vdev_t handle; /* only for read/write requests */
uint64_t __attribute__((__aligned__(8))) id;
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK];
};
struct blkif_x86_64_response {
uint64_t __attribute__((__aligned__(8))) id;
uint8_t operation; /* copied from request */
int16_t status; /* BLKIF_RSP_??? */
};
typedef struct blkif_x86_64_request blkif_x86_64_request_t;
typedef struct blkif_x86_64_response blkif_x86_64_response_t;
DEFINE_RING_TYPES(blkif_common, struct blkif_common_request, struct blkif_common_response);
DEFINE_RING_TYPES(blkif_x86_32, struct blkif_x86_32_request, struct blkif_x86_32_response);
DEFINE_RING_TYPES(blkif_x86_64, struct blkif_x86_64_request, struct blkif_x86_64_response);
/*
* Maximum number of requests that can be active for a given instance
* regardless of the protocol in use, based on the ring size. This constant
* facilitates resource pre-allocation in backend drivers since the size is
* known well in advance of attaching to a front end.
*/
#define BLKIF_MAX_RING_REQUESTS(_sz) \
MAX(__RING_SIZE((blkif_x86_64_sring_t *)NULL, _sz), \
MAX(__RING_SIZE((blkif_x86_32_sring_t *)NULL, _sz), \
__RING_SIZE((blkif_sring_t *)NULL, _sz)))
/*
* The number of ring pages required to support a given number of requests
* for a given instance regardless of the protocol in use.
*/
#define BLKIF_RING_PAGES(_entries) \
MAX(__RING_PAGES((blkif_x86_64_sring_t *)NULL, _entries), \
MAX(__RING_PAGES((blkif_x86_32_sring_t *)NULL, _entries), \
__RING_PAGES((blkif_sring_t *)NULL, _entries)))
union blkif_back_rings {
blkif_back_ring_t native;
blkif_common_back_ring_t common;
blkif_x86_32_back_ring_t x86_32;
blkif_x86_64_back_ring_t x86_64;
};
typedef union blkif_back_rings blkif_back_rings_t;
enum blkif_protocol {
BLKIF_PROTOCOL_NATIVE = 1,
BLKIF_PROTOCOL_X86_32 = 2,
BLKIF_PROTOCOL_X86_64 = 3,
};
static void inline blkif_get_x86_32_req(blkif_request_t *dst, blkif_x86_32_request_t *src)
{
int i, n = BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK;
dst->operation = src->operation;
dst->nr_segments = src->nr_segments;
dst->handle = src->handle;
dst->id = src->id;
dst->sector_number = src->sector_number;
barrier();
if (n > dst->nr_segments)
n = dst->nr_segments;
for (i = 0; i < n; i++)
dst->seg[i] = src->seg[i];
}
static void inline blkif_get_x86_64_req(blkif_request_t *dst, blkif_x86_64_request_t *src)
{
int i, n = BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK;
dst->operation = src->operation;
dst->nr_segments = src->nr_segments;
dst->handle = src->handle;
dst->id = src->id;
dst->sector_number = src->sector_number;
barrier();
if (n > dst->nr_segments)
n = dst->nr_segments;
for (i = 0; i < n; i++)
dst->seg[i] = src->seg[i];
}
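Both copy helpers above follow the same defensive pattern for reading a request out of memory the other domain can modify concurrently: copy the header, then clamp the segment count against the local maximum (with a barrier() so the clamp is not reordered against re-reads of shared memory) before walking the segment array. A toy version of the pattern, with hypothetical names (`toy_req`, `toy_get_req`) that are not in this commit:

```c
#include <assert.h>

#define MAX_SEGS 11	/* stands in for BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK */

struct toy_req {
	unsigned char nr_segments;	/* guest-controlled */
	int seg[MAX_SEGS];
};

/* Copy a request out of shared memory, clamping the segment count to
 * the local maximum before touching the array; returns the number of
 * segments actually copied. */
static int
toy_get_req(struct toy_req *dst, const struct toy_req *src)
{
	int i, n = MAX_SEGS;

	dst->nr_segments = src->nr_segments;
	/* barrier() would sit here in kernel code: clamp against our
	 * private copy, not a value the other domain can still change. */
	if (n > dst->nr_segments)
		n = dst->nr_segments;
	for (i = 0; i < n; i++)
		dst->seg[i] = src->seg[i];
	return (n);
}
```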
#endif /* __XEN_BLKIF_H__ */


@@ -492,15 +492,15 @@ bind_listening_port_to_irqhandler(unsigned int remote_domain,
int
bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
unsigned int remote_port, const char *devname,
driver_filter_t filter, driver_intr_t handler,
unsigned long irqflags, unsigned int *irqp)
driver_intr_t handler, void *arg, unsigned long irqflags,
unsigned int *irqp)
{
unsigned int irq;
int error;
irq = bind_interdomain_evtchn_to_irq(remote_domain, remote_port);
intr_register_source(&xp->xp_pins[irq].xp_intsrc);
error = intr_add_handler(devname, irq, filter, handler, NULL,
error = intr_add_handler(devname, irq, NULL, handler, arg,
irqflags, &xp->xp_pins[irq].xp_cookie);
if (error) {
unbind_from_irq(irq);


@@ -42,7 +42,6 @@ __FBSDID("$FreeBSD$");
/* External tools reserve first few grant table entries. */
#define NR_RESERVED_ENTRIES 8
#define GNTTAB_LIST_END 0xffffffff
#define GREFS_PER_GRANT_FRAME (PAGE_SIZE / sizeof(grant_entry_t))
static grant_ref_t **gnttab_list;
@@ -66,7 +65,7 @@ get_free_entries(int count, int *entries)
{
int ref, error;
grant_ref_t head;
mtx_lock(&gnttab_list_lock);
if ((gnttab_free_count < count) &&
((error = gnttab_expand(count - gnttab_free_count)) != 0)) {
@@ -79,7 +78,7 @@ get_free_entries(int count, int *entries)
head = gnttab_entry(head);
gnttab_free_head = gnttab_entry(head);
gnttab_entry(head) = GNTTAB_LIST_END;
mtx_unlock(&gnttab_list_lock);
mtx_unlock(&gnttab_list_lock);
*entries = ref;
return (0);
@@ -122,7 +121,7 @@ put_free_entry(grant_ref_t ref)
gnttab_free_head = ref;
gnttab_free_count++;
check_free_callbacks();
mtx_unlock(&gnttab_list_lock);
mtx_unlock(&gnttab_list_lock);
}
/*
@@ -136,7 +135,7 @@ gnttab_grant_foreign_access(domid_t domid, unsigned long frame, int readonly,
int error, ref;
error = get_free_entries(1, &ref);
if (unlikely(error))
return (error);
@@ -166,9 +165,9 @@ int
gnttab_query_foreign_access(grant_ref_t ref)
{
uint16_t nflags;
nflags = shared[ref].flags;
return (nflags & (GTF_reading|GTF_writing));
}
@@ -180,7 +179,7 @@ gnttab_end_foreign_access_ref(grant_ref_t ref)
nflags = shared[ref].flags;
do {
if ( (flags = nflags) & (GTF_reading|GTF_writing) ) {
printf("WARNING: g.e. still in use!\n");
printf("%s: WARNING: g.e. still in use!\n", __func__);
return (0);
}
} while ((nflags = synch_cmpxchg(&shared[ref].flags, flags, 0)) !=
@@ -201,7 +200,44 @@ gnttab_end_foreign_access(grant_ref_t ref, void *page)
else {
/* XXX This needs to be fixed so that the ref and page are
placed on a list to be freed up later. */
printf("WARNING: leaking g.e. and page still in use!\n");
printf("%s: WARNING: leaking g.e. and page still in use!\n",
__func__);
}
}
void
gnttab_end_foreign_access_references(u_int count, grant_ref_t *refs)
{
grant_ref_t *last_ref;
grant_ref_t head;
grant_ref_t tail;
head = GNTTAB_LIST_END;
tail = *refs;
last_ref = refs + count;
while (refs != last_ref) {
if (gnttab_end_foreign_access_ref(*refs)) {
gnttab_entry(*refs) = head;
head = *refs;
} else {
/*
* XXX This needs to be fixed so that the ref
* is placed on a list to be freed up later.
*/
printf("%s: WARNING: leaking g.e. still in use!\n",
__func__);
count--;
}
refs++;
}
if (count != 0) {
mtx_lock(&gnttab_list_lock);
gnttab_free_count += count;
gnttab_entry(tail) = gnttab_free_head;
gnttab_free_head = head;
mtx_unlock(&gnttab_list_lock);
}
}
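The new gnttab_end_foreign_access_references() builds a private chain of the successfully ended references and then splices the whole chain onto the global free list at once, so the lock is taken a single time regardless of batch size. The splice idiom can be sketched in userland C, with array indexes standing in for grant references and `next_entry[]` for gnttab_entry() (all names here are illustrative, not from the driver):

```c
#include <assert.h>

#define LIST_END 0xffffffffu

/* Toy free list: next_entry[i] links entry i to the next free entry. */
static unsigned int next_entry[8];
static unsigned int free_head = LIST_END;
static unsigned int free_count = 0;

/* Chain a batch of entries locally, then splice the whole chain onto
 * the free list in one step -- one lock acquisition in the real code. */
static void
free_batch(unsigned int *refs, unsigned int count)
{
	unsigned int head = LIST_END;
	unsigned int tail = refs[0];

	for (unsigned int i = 0; i < count; i++) {
		next_entry[refs[i]] = head;
		head = refs[i];
	}
	/* mtx_lock(&gnttab_list_lock) would bracket this in the driver. */
	next_entry[tail] = free_head;
	free_head = head;
	free_count += count;
}
```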
@@ -216,7 +252,7 @@ gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn,
return (error);
gnttab_grant_foreign_transfer_ref(ref, domid, pfn);
*result = ref;
return (0);
}
@@ -282,16 +318,16 @@ gnttab_free_grant_references(grant_ref_t head)
{
grant_ref_t ref;
int count = 1;
if (head == GNTTAB_LIST_END)
return;
mtx_lock(&gnttab_list_lock);
ref = head;
while (gnttab_entry(ref) != GNTTAB_LIST_END) {
ref = gnttab_entry(ref);
count++;
}
mtx_lock(&gnttab_list_lock);
gnttab_entry(ref) = gnttab_free_head;
gnttab_free_head = head;
gnttab_free_count += count;
@@ -403,7 +439,7 @@ grow_gnttab_list(unsigned int more_frames)
check_free_callbacks();
return (0);
grow_nomem:
for ( ; i >= nr_grant_frames; i--)
free(gnttab_list[i], M_DEVBUF);
@@ -490,7 +526,7 @@ gnttab_map(unsigned int start_idx, unsigned int end_idx)
if (shared == NULL) {
vm_offset_t area;
area = kmem_alloc_nofault(kernel_map,
PAGE_SIZE * max_nr_grant_frames());
KASSERT(area, ("can't allocate VM space for grant table"));
@@ -502,7 +538,7 @@ gnttab_map(unsigned int start_idx, unsigned int end_idx)
((vm_paddr_t)frames[i]) << PAGE_SHIFT | PG_RW | PG_V);
free(frames, M_DEVBUF);
return (0);
}
@@ -517,7 +553,7 @@ gnttab_resume(void)
int
gnttab_suspend(void)
{
{
int i;
for (i = 0; i < nr_grant_frames; i++)
@@ -532,7 +568,8 @@ gnttab_suspend(void)
static vm_paddr_t resume_frames;
static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
static int
gnttab_map(unsigned int start_idx, unsigned int end_idx)
{
struct xen_add_to_physmap xatp;
unsigned int i = end_idx;
@@ -552,7 +589,7 @@ static int gnttab_map(unsigned int start_idx, unsigned int end_idx)
if (shared == NULL) {
vm_offset_t area;
area = kmem_alloc_nofault(kernel_map,
PAGE_SIZE * max_nr_grant_frames());
KASSERT(area, ("can't allocate VM space for grant table"));
@@ -643,10 +680,10 @@ gnttab_init()
if (gnttab_list[i] == NULL)
goto ini_nomem;
}
if (gnttab_resume())
return (ENODEV);
nr_init_grefs = nr_grant_frames * GREFS_PER_GRANT_FRAME;
for (i = NR_RESERVED_ENTRIES; i < nr_init_grefs - 1; i++)
@@ -670,4 +707,3 @@ gnttab_init()
}
MTX_SYSINIT(gnttab, &gnttab_list_lock, "GNTTAB LOCK", MTX_DEF);
//SYSINIT(gnttab, SI_SUB_PSEUDO, SI_ORDER_FIRST, gnttab_init, NULL);


@@ -43,6 +43,8 @@
#include <machine/xen/xen-os.h>
#include <xen/features.h>
#define GNTTAB_LIST_END GRANT_REF_INVALID
struct gnttab_free_callback {
struct gnttab_free_callback *next;
void (*fn)(void *);
@@ -74,6 +76,13 @@ int gnttab_end_foreign_access_ref(grant_ref_t ref);
*/
void gnttab_end_foreign_access(grant_ref_t ref, void *page);
/*
* Eventually end access through the given array of grant references.
* Access will be ended immediately iff the grant entry is not in use;
* otherwise it will happen some time later.
*/
void gnttab_end_foreign_access_references(u_int count, grant_ref_t *refs);
int gnttab_grant_foreign_transfer(domid_t domid, unsigned long pfn, grant_ref_t *result);
unsigned long gnttab_end_foreign_transfer_ref(grant_ref_t ref);


@@ -159,6 +159,8 @@ typedef struct grant_entry grant_entry_t;
*/
typedef uint32_t grant_ref_t;
#define GRANT_REF_INVALID 0xffffffff
/*
* Handle to track a mapping created via a grant reference.
*/


@@ -95,4 +95,30 @@
#define HVM_NR_PARAMS 15
#ifdef XENHVM
/**
* Retrieve an HVM setting from the hypervisor.
*
* \param index The index of the HVM parameter to retrieve.
*
* \return On error, 0. Otherwise the value of the requested parameter.
*/
static inline unsigned long
hvm_get_parameter(int index)
{
struct xen_hvm_param xhv;
int error;
xhv.domid = DOMID_SELF;
xhv.index = index;
error = HYPERVISOR_hvm_op(HVMOP_get_param, &xhv);
if (error) {
printf("hvm_get_parameter: failed to get %d, error %d\n",
index, error);
return (0);
}
return (xhv.value);
}
#endif
#endif /* __XEN_PUBLIC_HVM_PARAMS_H__ */


@@ -78,11 +78,19 @@
#define BLKIF_OP_FLUSH_DISKCACHE 3
/*
* Maximum scatter/gather segments per request.
* This is carefully chosen so that sizeof(blkif_ring_t) <= PAGE_SIZE.
* NB. This could be 12 if the ring indexes weren't stored in the same page.
* Maximum scatter/gather segments associated with a request header block.
*/
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 11
#define BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK 11
/*
* Maximum scatter/gather segments associated with a segment block.
*/
#define BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK 14
/*
* Maximum scatter/gather segments per request (header + segment blocks).
*/
#define BLKIF_MAX_SEGMENTS_PER_REQUEST 255
struct blkif_request_segment {
grant_ref_t gref; /* reference to I/O buffer frame */
@@ -90,6 +98,7 @@ struct blkif_request_segment {
/* @last_sect: last sector in frame to transfer (inclusive). */
uint8_t first_sect, last_sect;
};
typedef struct blkif_request_segment blkif_request_segment_t;
struct blkif_request {
uint8_t operation; /* BLKIF_OP_??? */
@@ -97,7 +106,7 @@ struct blkif_request {
blkif_vdev_t handle; /* only for read/write requests */
uint64_t id; /* private guest value, echoed in resp */
blkif_sector_t sector_number;/* start sector idx on disk (r/w only) */
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_REQUEST];
struct blkif_request_segment seg[BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK];
};
typedef struct blkif_request blkif_request_t;
@@ -124,10 +133,22 @@ typedef struct blkif_response blkif_response_t;
DEFINE_RING_TYPES(blkif, struct blkif_request, struct blkif_response);
#define BLKRING_GET_SG_REQUEST(_r, _idx) \
((struct blkif_request_segment *)RING_GET_REQUEST(_r, _idx))
#define VDISK_CDROM 0x1
#define VDISK_REMOVABLE 0x2
#define VDISK_READONLY 0x4
/*
* The number of ring request blocks required to handle an I/O
* request containing _segs segments.
*/
#define BLKIF_SEGS_TO_BLOCKS(_segs) \
((((_segs - BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK) \
+ (BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK - 1)) \
/ BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK) + /*header_block*/1)
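As a sanity check of the arithmetic, BLKIF_SEGS_TO_BLOCKS can be evaluated for a few segment counts. The sketch below mirrors the macro with the constants from this header (the header block holds 11 segments, each extra segment block holds 14); like the macro, it assumes the request has at least BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK segments:

```c
#include <assert.h>

#define HDR_SEGS 11	/* BLKIF_MAX_SEGMENTS_PER_HEADER_BLOCK */
#define BLK_SEGS 14	/* BLKIF_MAX_SEGMENTS_PER_SEGMENT_BLOCK */

/* Ring blocks needed for a request of segs segments (segs >= HDR_SEGS):
 * one header block plus ceil((segs - HDR_SEGS) / BLK_SEGS) segment blocks. */
static unsigned int
segs_to_blocks(unsigned int segs)
{
	return ((((segs - HDR_SEGS) + (BLK_SEGS - 1)) / BLK_SEGS) + 1);
}
```

So a maximal 255-segment request (BLKIF_MAX_SEGMENTS_PER_REQUEST) needs 1 header block plus 18 segment blocks.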
#endif /* __XEN_PUBLIC_IO_BLKIF_H__ */
/*


@@ -26,6 +26,7 @@
#define XEN_IO_PROTO_ABI_X86_32 "x86_32-abi"
#define XEN_IO_PROTO_ABI_X86_64 "x86_64-abi"
#define XEN_IO_PROTO_ABI_IA64 "ia64-abi"
#define XEN_IO_PROTO_ABI_POWERPC64 "powerpc64-abi"
#if defined(__i386__)
# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_32
@@ -33,6 +34,8 @@
# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_X86_64
#elif defined(__ia64__)
# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_IA64
#elif defined(__powerpc64__)
# define XEN_IO_PROTO_ABI_NATIVE XEN_IO_PROTO_ABI_POWERPC64
#else
# error arch fixup needed here
#endif


@@ -44,6 +44,12 @@ typedef unsigned int RING_IDX;
#define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8 : __RD8(_x))
#define __RD32(_x) (((_x) & 0xffff0000) ? __RD16((_x)>>16)<<16 : __RD16(_x))
/*
* The amount of space reserved in the shared ring for accounting information.
*/
#define __RING_HEADER_SIZE(_s) \
((intptr_t)(_s)->ring - (intptr_t)(_s))
/*
* Calculate size of a shared ring, given the total available space for the
* ring and indexes (_sz), and the name tag of the request/response structure.
@@ -51,7 +57,17 @@ typedef unsigned int RING_IDX;
* power of two (so we can mask with (size-1) to loop around).
*/
#define __RING_SIZE(_s, _sz) \
(__RD32(((_sz) - (long)(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
(__RD32(((_sz) - __RING_HEADER_SIZE(_s)) / sizeof((_s)->ring[0])))
/*
* The number of pages needed to support a given number of request/response
* entries. The entry count is rounded down to the nearest power of two
* as required by the ring macros.
*/
#define __RING_PAGES(_s, _entries) \
((__RING_HEADER_SIZE(_s) \
+ (__RD32(_entries) * sizeof((_s)->ring[0])) \
+ PAGE_SIZE - 1) / PAGE_SIZE)
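The new __RING_PAGES macro rounds the entry count down to a power of two (via __RD32, as the ring macros require) and then rounds the total size up to whole pages. A plain-C equivalent with illustrative sizes, not the real Xen structs:

```c
#include <assert.h>

#define PAGE_SIZE 4096

/* Round down to the nearest power of two, as ring.h's __RD32 does. */
static unsigned int
rd32(unsigned int x)
{
	unsigned int p = 1;

	while (p <= x / 2)
		p *= 2;
	return (p);
}

/* Pages needed for a ring given its header size, entry size, and entry
 * count: header + rounded-down entries, rounded up to whole pages. */
static unsigned int
ring_pages(unsigned int hdr, unsigned int entry, unsigned int entries)
{
	return ((hdr + rd32(entries) * entry + PAGE_SIZE - 1) / PAGE_SIZE);
}
```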
/*
* Macros to make the correct C datatypes for a new kind of ring.


@@ -36,6 +36,9 @@
enum xenbus_state {
XenbusStateUnknown = 0,
/*
* Initializing: Back-end is initializing.
*/
XenbusStateInitialising = 1,
/*
@@ -49,6 +52,9 @@ enum xenbus_state {
*/
XenbusStateInitialised = 3,
/*
* Connected: The normal state for a front to backend connection.
*/
XenbusStateConnected = 4,
/*
@@ -56,6 +62,9 @@ enum xenbus_state {
*/
XenbusStateClosing = 5,
/*
* Closed: No connection exists between front and back end.
*/
XenbusStateClosed = 6,
/*


@@ -1,266 +0,0 @@
/*
*
* Copyright (c) 2004 Christian Limpach.
* Copyright (c) 2004-2006,2008 Kip Macy
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* This product includes software developed by Christian Limpach.
* 4. The name of the author may not be used to endorse or promote products
* derived from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
* IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
* OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
* IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
* INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/malloc.h>
#include <sys/kernel.h>
#include <sys/proc.h>
#include <sys/reboot.h>
#include <sys/sched.h>
#include <sys/smp.h>
#include <sys/systm.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
#include <xen/gnttab.h>
#include <xen/xen_intr.h>
#include <xen/xenbus/xenbusvar.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#ifdef XENHVM
#include <dev/xen/xenpci/xenpcivar.h>
#else
static void xen_suspend(void);
#endif
static void
shutdown_handler(struct xenbus_watch *watch,
const char **vec, unsigned int len)
{
char *str;
struct xenbus_transaction xbt;
int error, howto;
howto = 0;
again:
error = xenbus_transaction_start(&xbt);
if (error)
return;
error = xenbus_read(xbt, "control", "shutdown", NULL, (void **) &str);
/* Ignore read errors and empty reads. */
if (error || strlen(str) == 0) {
xenbus_transaction_end(xbt, 1);
return;
}
xenbus_write(xbt, "control", "shutdown", "");
error = xenbus_transaction_end(xbt, 0);
if (error == EAGAIN) {
free(str, M_DEVBUF);
goto again;
}
if (strcmp(str, "reboot") == 0)
howto = 0;
else if (strcmp(str, "poweroff") == 0)
howto |= (RB_POWEROFF | RB_HALT);
else if (strcmp(str, "halt") == 0)
#ifdef XENHVM
/*
* We rely on acpi powerdown to halt the VM.
*/
howto |= (RB_POWEROFF | RB_HALT);
#else
howto |= RB_HALT;
#endif
else if (strcmp(str, "suspend") == 0)
howto = -1;
else {
printf("Ignoring shutdown request: %s\n", str);
goto done;
}
if (howto == -1) {
xen_suspend();
goto done;
}
shutdown_nice(howto);
done:
free(str, M_DEVBUF);
}
#ifndef XENHVM
/*
* In HV mode, we let acpi take care of halts and reboots.
*/
static void
xen_shutdown_final(void *arg, int howto)
{
if (howto & (RB_HALT | RB_POWEROFF))
HYPERVISOR_shutdown(SHUTDOWN_poweroff);
else
HYPERVISOR_shutdown(SHUTDOWN_reboot);
}
#endif
static struct xenbus_watch shutdown_watch = {
.node = "control/shutdown",
.callback = shutdown_handler
};
static void
setup_shutdown_watcher(void *unused)
{
if (register_xenbus_watch(&shutdown_watch))
printf("Failed to set shutdown watcher\n");
#ifndef XENHVM
EVENTHANDLER_REGISTER(shutdown_final, xen_shutdown_final, NULL,
SHUTDOWN_PRI_LAST);
#endif
}
SYSINIT(shutdown, SI_SUB_PSEUDO, SI_ORDER_ANY, setup_shutdown_watcher, NULL);
#ifndef XENHVM
extern void xencons_suspend(void);
extern void xencons_resume(void);
static void
xen_suspend()
{
int i, j, k, fpp;
unsigned long max_pfn, start_info_mfn;
#ifdef SMP
cpumask_t map;
/*
* Bind us to CPU 0 and stop any other VCPUs.
*/
thread_lock(curthread);
sched_bind(curthread, 0);
thread_unlock(curthread);
KASSERT(PCPU_GET(cpuid) == 0, ("xen_suspend: not running on cpu 0"));
map = PCPU_GET(other_cpus) & ~stopped_cpus;
if (map)
stop_cpus(map);
#endif
if (DEVICE_SUSPEND(root_bus) != 0) {
printf("xen_suspend: device_suspend failed\n");
#ifdef SMP
if (map)
restart_cpus(map);
#endif
return;
}
local_irq_disable();
xencons_suspend();
gnttab_suspend();
max_pfn = HYPERVISOR_shared_info->arch.max_pfn;
void *shared_info = HYPERVISOR_shared_info;
HYPERVISOR_shared_info = NULL;
pmap_kremove((vm_offset_t) shared_info);
PT_UPDATES_FLUSH();
xen_start_info->store_mfn = MFNTOPFN(xen_start_info->store_mfn);
xen_start_info->console.domU.mfn = MFNTOPFN(xen_start_info->console.domU.mfn);
/*
* We'll stop somewhere inside this hypercall. When it returns,
* we'll start resuming after the restore.
*/
start_info_mfn = VTOMFN(xen_start_info);
pmap_suspend();
HYPERVISOR_suspend(start_info_mfn);
pmap_resume();
pmap_kenter_ma((vm_offset_t) shared_info, xen_start_info->shared_info);
HYPERVISOR_shared_info = shared_info;
HYPERVISOR_shared_info->arch.pfn_to_mfn_frame_list_list =
VTOMFN(xen_pfn_to_mfn_frame_list_list);
fpp = PAGE_SIZE/sizeof(unsigned long);
for (i = 0, j = 0, k = -1; i < max_pfn; i += fpp, j++) {
if ((j % fpp) == 0) {
k++;
xen_pfn_to_mfn_frame_list_list[k] =
VTOMFN(xen_pfn_to_mfn_frame_list[k]);
j = 0;
}
xen_pfn_to_mfn_frame_list[k][j] =
VTOMFN(&xen_phys_machine[i]);
}
HYPERVISOR_shared_info->arch.max_pfn = max_pfn;
gnttab_resume();
irq_resume();
local_irq_enable();
xencons_resume();
#ifdef CONFIG_SMP
for_each_cpu(i)
vcpu_prepare(i);
#endif
/*
* Only resume xenbus /after/ we've prepared our VCPUs; otherwise
* the VCPU hotplug callback can race with our vcpu_prepare
*/
DEVICE_RESUME(root_bus);
#ifdef SMP
thread_lock(curthread);
sched_unbind(curthread);
thread_unlock(curthread);
if (map)
restart_cpus(map);
#endif
}
#endif


@@ -76,7 +76,7 @@ extern int bind_ipi_to_irqhandler(unsigned int ipi, unsigned int cpu,
*/
extern int bind_interdomain_evtchn_to_irqhandler(unsigned int remote_domain,
unsigned int remote_port, const char *devname,
driver_filter_t filter, driver_intr_t handler,
driver_intr_t handler, void *arg,
unsigned long irqflags, unsigned int *irqp);
/*


@@ -1,14 +0,0 @@
- frontend driver initializes static xenbus_driver with _ids, _probe, _remove,
_resume, _otherend_changed
- initialization calls xenbus_register_frontend(xenbus_driver)
- xenbus_register_frontend sets read_otherend details to read_backend_details
then calls xenbus_register_driver_common(xenbus_driver, xenbus_frontend)
- xenbus_register_driver_common sets underlying driver name to xenbus_driver name
underlying driver bus to xenbus_frontend's bus, driver's probe to xenbus_dev_probe
driver's remove to xenbus_dev_remove then calls driver_register


@@ -1,8 +1,4 @@
/******************************************************************************
* Client-facing interface for the Xenbus driver. In other words, the
* interface between the Xenbus and the device-specific code, be it the
* frontend or the backend of that driver.
*
* Copyright (C) 2005 XenSource Ltd
*
* This file may be distributed separately from the Linux kernel, or
@@ -27,6 +23,14 @@
* IN THE SOFTWARE.
*/
/**
* \file xenbus.c
*
* \brief Client-facing interface for the Xenbus driver.
*
* In other words, the interface between the Xenbus and the device-specific
* code, be it the frontend or the backend of that driver.
*/
#if 0
#define DPRINTK(fmt, args...) \
@@ -39,9 +43,12 @@
__FBSDID("$FreeBSD$");
#include <sys/cdefs.h>
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/types.h>
#include <sys/malloc.h>
#include <sys/libkern.h>
#include <sys/sbuf.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
@@ -50,6 +57,34 @@ __FBSDID("$FreeBSD$");
#include <xen/xenbus/xenbusvar.h>
#include <machine/stdarg.h>
MALLOC_DEFINE(M_XENBUS, "xenbus", "XenBus Support");
/*------------------------- Private Functions --------------------------------*/
/**
* \brief Construct the error path corresponding to the given XenBus
* device.
*
* \param dev The XenBus device for which we are constructing an error path.
*
* \return On success, the constructed error path. Otherwise NULL.
*
* It is the caller's responsibility to free any returned error path
* node using the M_XENBUS malloc type.
*/
static char *
error_path(device_t dev)
{
char *path_buffer = malloc(strlen("error/")
+ strlen(xenbus_get_node(dev)) + 1,M_XENBUS, M_WAITOK);
strcpy(path_buffer, "error/");
strcpy(path_buffer + strlen("error/"), xenbus_get_node(dev));
return (path_buffer);
}
/*--------------------------- Public Functions -------------------------------*/
/*-------- API comments for these methods can be found in xenbusvar.h --------*/
const char *
xenbus_strstate(XenbusState state)
{
@@ -67,15 +102,15 @@ xenbus_strstate(XenbusState state)
}
int
xenbus_watch_path(device_t dev, char *path, struct xenbus_watch *watch,
void (*callback)(struct xenbus_watch *, const char **, unsigned int))
xenbus_watch_path(device_t dev, char *path, struct xs_watch *watch,
xs_watch_cb_t *callback)
{
int error;
watch->node = path;
watch->callback = callback;
error = register_xenbus_watch(watch);
error = xs_register_watch(watch);
if (error) {
watch->node = NULL;
@@ -88,12 +123,12 @@ xenbus_watch_path(device_t dev, char *path, struct xenbus_watch *watch,
int
xenbus_watch_path2(device_t dev, const char *path,
const char *path2, struct xenbus_watch *watch,
void (*callback)(struct xenbus_watch *, const char **, unsigned int))
const char *path2, struct xs_watch *watch,
xs_watch_cb_t *callback)
{
int error;
char *state = malloc(strlen(path) + 1 + strlen(path2) + 1,
M_DEVBUF, M_WAITOK);
M_XENBUS, M_WAITOK);
strcpy(state, path);
strcat(state, "/");
@@ -101,46 +136,27 @@ xenbus_watch_path2(device_t dev, const char *path,
error = xenbus_watch_path(dev, state, watch, callback);
if (error) {
free(state, M_DEVBUF);
free(state, M_XENBUS);
}
return (error);
}
/**
* Return the path to the error node for the given device, or NULL on failure.
* If the value returned is non-NULL, then it is the caller's responsibility to kfree.
*/
static char *
error_path(device_t dev)
{
char *path_buffer = malloc(strlen("error/")
+ strlen(xenbus_get_node(dev)) + 1, M_DEVBUF, M_WAITOK);
strcpy(path_buffer, "error/");
strcpy(path_buffer + strlen("error/"), xenbus_get_node(dev));
return (path_buffer);
}
static void
_dev_error(device_t dev, int err, const char *fmt, va_list ap)
void
xenbus_dev_verror(device_t dev, int err, const char *fmt, va_list ap)
{
int ret;
unsigned int len;
char *printf_buffer = NULL, *path_buffer = NULL;
#define PRINTF_BUFFER_SIZE 4096
printf_buffer = malloc(PRINTF_BUFFER_SIZE, M_DEVBUF, M_WAITOK);
printf_buffer = malloc(PRINTF_BUFFER_SIZE, M_XENBUS, M_WAITOK);
len = sprintf(printf_buffer, "%i ", err);
ret = vsnprintf(printf_buffer+len, PRINTF_BUFFER_SIZE-len, fmt, ap);
KASSERT(len + ret <= PRINTF_BUFFER_SIZE-1, ("xenbus error message too big"));
#if 0
dev_err(&dev->dev, "%s\n", printf_buffer);
#endif
device_printf(dev, "Error %s\n", printf_buffer);
path_buffer = error_path(dev);
if (path_buffer == NULL) {
@@ -149,7 +165,7 @@ _dev_error(device_t dev, int err, const char *fmt, va_list ap)
goto fail;
}
if (xenbus_write(XBT_NIL, path_buffer, "error", printf_buffer) != 0) {
if (xs_write(XST_NIL, path_buffer, "error", printf_buffer) != 0) {
printf("xenbus: failed to write error node for %s (%s)\n",
xenbus_get_node(dev), printf_buffer);
goto fail;
@@ -157,9 +173,9 @@ _dev_error(device_t dev, int err, const char *fmt, va_list ap)
fail:
if (printf_buffer)
free(printf_buffer, M_DEVBUF);
free(printf_buffer, M_XENBUS);
if (path_buffer)
free(path_buffer, M_DEVBUF);
free(path_buffer, M_XENBUS);
}
void
@@ -168,41 +184,45 @@ xenbus_dev_error(device_t dev, int err, const char *fmt, ...)
va_list ap;
va_start(ap, fmt);
_dev_error(dev, err, fmt, ap);
xenbus_dev_verror(dev, err, fmt, ap);
va_end(ap);
}
void
xenbus_dev_vfatal(device_t dev, int err, const char *fmt, va_list ap)
{
xenbus_dev_verror(dev, err, fmt, ap);
device_printf(dev, "Fatal error. Transitioning to Closing State\n");
xenbus_set_state(dev, XenbusStateClosing);
}
void
xenbus_dev_fatal(device_t dev, int err, const char *fmt, ...)
{
va_list ap;
va_start(ap, fmt);
_dev_error(dev, err, fmt, ap);
xenbus_dev_vfatal(dev, err, fmt, ap);
va_end(ap);
xenbus_set_state(dev, XenbusStateClosing);
}
int
xenbus_grant_ring(device_t dev, unsigned long ring_mfn, int *refp)
xenbus_grant_ring(device_t dev, unsigned long ring_mfn, grant_ref_t *refp)
{
int error;
grant_ref_t ref;
error = gnttab_grant_foreign_access(
xenbus_get_otherend_id(dev), ring_mfn, 0, &ref);
xenbus_get_otherend_id(dev), ring_mfn, 0, refp);
if (error) {
xenbus_dev_fatal(dev, error, "granting access to ring page");
return (error);
}
*refp = ref;
return (0);
}
int
xenbus_alloc_evtchn(device_t dev, int *port)
xenbus_alloc_evtchn(device_t dev, evtchn_port_t *port)
{
struct evtchn_alloc_unbound alloc_unbound;
int err;
@@ -222,7 +242,7 @@ xenbus_alloc_evtchn(device_t dev, int *port)
}
int
xenbus_free_evtchn(device_t dev, int port)
xenbus_free_evtchn(device_t dev, evtchn_port_t port)
{
struct evtchn_close close;
int err;
@@ -240,12 +260,29 @@ xenbus_free_evtchn(device_t dev, int port)
XenbusState
xenbus_read_driver_state(const char *path)
{
XenbusState result;
int error;
XenbusState result;
int error;
error = xenbus_gather(XBT_NIL, path, "state", "%d", &result, NULL);
if (error)
result = XenbusStateClosed;
error = xs_gather(XST_NIL, path, "state", "%d", &result, NULL);
if (error)
result = XenbusStateClosed;
return (result);
return (result);
}
int
xenbus_dev_is_online(device_t dev)
{
const char *path;
int error;
int value;
path = xenbus_get_node(dev);
error = xs_gather(XST_NIL, path, "online", "%d", &value, NULL);
if (error != 0) {
/* Default to not online. */
value = 0;
}
return (value);
}


@@ -1,226 +0,0 @@
/******************************************************************************
* xenbus_comms.c
*
* Low level code to talk to the Xen Store: ring buffer and event channel.
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sx.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/syslog.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
#include <xen/xen_intr.h>
#include <xen/evtchn.h>
#include <xen/interface/io/xs_wire.h>
#include <xen/xenbus/xenbus_comms.h>
static unsigned int xenstore_irq;
static inline struct xenstore_domain_interface *
xenstore_domain_interface(void)
{
return (struct xenstore_domain_interface *)xen_store;
}
static void
xb_intr(void * arg __attribute__((unused)))
{
wakeup(xen_store);
}
static int
xb_check_indexes(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod)
{
return ((prod - cons) <= XENSTORE_RING_SIZE);
}
static void *
xb_get_output_chunk(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod,
char *buf, uint32_t *len)
{
*len = XENSTORE_RING_SIZE - MASK_XENSTORE_IDX(prod);
if ((XENSTORE_RING_SIZE - (prod - cons)) < *len)
*len = XENSTORE_RING_SIZE - (prod - cons);
return (buf + MASK_XENSTORE_IDX(prod));
}
static const void *
xb_get_input_chunk(XENSTORE_RING_IDX cons, XENSTORE_RING_IDX prod,
const char *buf, uint32_t *len)
{
*len = XENSTORE_RING_SIZE - MASK_XENSTORE_IDX(cons);
if ((prod - cons) < *len)
*len = prod - cons;
return (buf + MASK_XENSTORE_IDX(cons));
}
int
xb_write(const void *tdata, unsigned len, struct lock_object *lock)
{
struct xenstore_domain_interface *intf = xenstore_domain_interface();
XENSTORE_RING_IDX cons, prod;
const char *data = (const char *)tdata;
int error;
while (len != 0) {
void *dst;
unsigned int avail;
while ((intf->req_prod - intf->req_cons)
== XENSTORE_RING_SIZE) {
error = _sleep(intf,
lock,
PCATCH, "xbwrite", hz/10);
if (error && error != EWOULDBLOCK)
return (error);
}
/* Read indexes, then verify. */
cons = intf->req_cons;
prod = intf->req_prod;
mb();
if (!xb_check_indexes(cons, prod)) {
intf->req_cons = intf->req_prod = 0;
return (EIO);
}
dst = xb_get_output_chunk(cons, prod, intf->req, &avail);
if (avail == 0)
continue;
if (avail > len)
avail = len;
mb();
memcpy(dst, data, avail);
data += avail;
len -= avail;
/* Other side must not see new header until data is there. */
wmb();
intf->req_prod += avail;
/* This implies mb() before other side sees interrupt. */
notify_remote_via_evtchn(xen_store_evtchn);
}
return (0);
}
int
xb_read(void *tdata, unsigned len, struct lock_object *lock)
{
struct xenstore_domain_interface *intf = xenstore_domain_interface();
XENSTORE_RING_IDX cons, prod;
char *data = (char *)tdata;
int error;
while (len != 0) {
unsigned int avail;
const char *src;
while (intf->rsp_cons == intf->rsp_prod) {
error = _sleep(intf, lock,
PCATCH, "xbread", hz/10);
if (error && error != EWOULDBLOCK)
return (error);
}
/* Read indexes, then verify. */
cons = intf->rsp_cons;
prod = intf->rsp_prod;
if (!xb_check_indexes(cons, prod)) {
intf->rsp_cons = intf->rsp_prod = 0;
return (EIO);
}
src = xb_get_input_chunk(cons, prod, intf->rsp, &avail);
if (avail == 0)
continue;
if (avail > len)
avail = len;
/* We must read header before we read data. */
rmb();
memcpy(data, src, avail);
data += avail;
len -= avail;
/* Other side must not see free space until we've copied out */
mb();
intf->rsp_cons += avail;
/* Implies mb(): they will see new header. */
notify_remote_via_evtchn(xen_store_evtchn);
}
return (0);
}
/* Set up interrupt handler off store event channel. */
int
xb_init_comms(void)
{
struct xenstore_domain_interface *intf = xenstore_domain_interface();
int error;
if (intf->rsp_prod != intf->rsp_cons) {
log(LOG_WARNING, "XENBUS response ring is not quiescent "
"(%08x:%08x): fixing up\n",
intf->rsp_cons, intf->rsp_prod);
intf->rsp_cons = intf->rsp_prod;
}
if (xenstore_irq)
unbind_from_irqhandler(xenstore_irq);
error = bind_caller_port_to_irqhandler(
xen_store_evtchn, "xenbus",
xb_intr, NULL, INTR_TYPE_NET, &xenstore_irq);
if (error) {
log(LOG_WARNING, "XENBUS request irq failed %i\n", error);
return (error);
}
return (0);
}


@@ -1,48 +0,0 @@
/*
* Private include for xenbus communications.
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* $FreeBSD$
*/
#ifndef _XENBUS_COMMS_H
#define _XENBUS_COMMS_H
struct sx;
extern int xen_store_evtchn;
extern char *xen_store;
int xs_init(void);
int xb_init_comms(void);
/* Low level routines. */
int xb_write(const void *data, unsigned len, struct lock_object *);
int xb_read(void *data, unsigned len, struct lock_object *);
extern int xenbus_running;
char *kasprintf(const char *fmt, ...);
#endif /* _XENBUS_COMMS_H */


@@ -31,7 +31,15 @@
INTERFACE xenbus;
METHOD int backend_changed {
device_t dev;
enum xenbus_state newstate;
/**
* \brief Callback triggered when the state of the otherend
* of a split device changes.
*
* \param _dev NewBus device_t for this XenBus device whose otherend's
* state has changed.
* \param _newstate The new state of the otherend device.
*/
METHOD int otherend_changed {
device_t _dev;
enum xenbus_state _newstate;
};


@@ -1,602 +0,0 @@
/******************************************************************************
* Talks to Xen Store to figure out what devices we have.
*
* Copyright (C) 2008 Doug Rabson
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 Mike Wray, Hewlett-Packard
* Copyright (C) 2005 XenSource Ltd
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#if 0
#define DPRINTK(fmt, args...) \
printf("xenbus_probe (%s:%d) " fmt ".\n", __FUNCTION__, __LINE__, ##args)
#else
#define DPRINTK(fmt, args...) ((void)0)
#endif
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/systm.h>
#include <sys/sx.h>
#include <sys/taskqueue.h>
#include <machine/xen/xen-os.h>
#include <machine/stdarg.h>
#include <xen/gnttab.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenbus/xenbus_comms.h>
struct xenbus_softc {
struct xenbus_watch xs_devicewatch;
struct task xs_probechildren;
struct intr_config_hook xs_attachcb;
device_t xs_dev;
};
struct xenbus_device_ivars {
struct xenbus_watch xd_otherend_watch; /* must be first */
struct sx xd_lock;
device_t xd_dev;
char *xd_node; /* node name in xenstore */
char *xd_type; /* xen device type */
enum xenbus_state xd_state;
int xd_otherend_id;
char *xd_otherend_path;
};
/* Simplified asprintf. */
char *
kasprintf(const char *fmt, ...)
{
va_list ap;
unsigned int len;
char *p, dummy[1];
va_start(ap, fmt);
/* FIXME: vsnprintf has a bug, NULL should work */
len = vsnprintf(dummy, 0, fmt, ap);
va_end(ap);
p = malloc(len + 1, M_DEVBUF, M_WAITOK);
va_start(ap, fmt);
vsprintf(p, fmt, ap);
va_end(ap);
return p;
}
static void
xenbus_identify(driver_t *driver, device_t parent)
{
BUS_ADD_CHILD(parent, 0, "xenbus", 0);
}
static int
xenbus_probe(device_t dev)
{
int err = 0;
DPRINTK("");
/* Initialize the interface to xenstore. */
err = xs_init();
if (err) {
log(LOG_WARNING,
"XENBUS: Error initializing xenstore comms: %i\n", err);
return (ENXIO);
}
err = gnttab_init();
if (err) {
log(LOG_WARNING,
"XENBUS: Error initializing grant table: %i\n", err);
return (ENXIO);
}
device_set_desc(dev, "Xen Devices");
return (0);
}
static enum xenbus_state
xenbus_otherend_state(struct xenbus_device_ivars *ivars)
{
return (xenbus_read_driver_state(ivars->xd_otherend_path));
}
static void
xenbus_backend_changed(struct xenbus_watch *watch, const char **vec,
unsigned int len)
{
struct xenbus_device_ivars *ivars;
device_t dev;
enum xenbus_state newstate;
ivars = (struct xenbus_device_ivars *) watch;
dev = ivars->xd_dev;
if (!ivars->xd_otherend_path
|| strncmp(ivars->xd_otherend_path, vec[XS_WATCH_PATH],
strlen(ivars->xd_otherend_path)))
return;
newstate = xenbus_otherend_state(ivars);
XENBUS_BACKEND_CHANGED(dev, newstate);
}
static int
xenbus_device_exists(device_t dev, const char *node)
{
device_t *kids;
struct xenbus_device_ivars *ivars;
int i, count, result;
if (device_get_children(dev, &kids, &count))
return (FALSE);
result = FALSE;
for (i = 0; i < count; i++) {
ivars = device_get_ivars(kids[i]);
if (!strcmp(ivars->xd_node, node)) {
result = TRUE;
break;
}
}
free(kids, M_TEMP);
return (result);
}
static int
xenbus_add_device(device_t dev, const char *bus,
const char *type, const char *id)
{
device_t child;
struct xenbus_device_ivars *ivars;
enum xenbus_state state;
char *statepath;
int error;
ivars = malloc(sizeof(struct xenbus_device_ivars),
M_DEVBUF, M_ZERO|M_WAITOK);
ivars->xd_node = kasprintf("%s/%s/%s", bus, type, id);
if (xenbus_device_exists(dev, ivars->xd_node)) {
/*
* We are already tracking this node
*/
free(ivars->xd_node, M_DEVBUF);
free(ivars, M_DEVBUF);
return (0);
}
state = xenbus_read_driver_state(ivars->xd_node);
if (state != XenbusStateInitialising) {
/*
* Device is not new, so ignore it. This can
* happen if a device is going away after
* switching to Closed.
*/
free(ivars->xd_node, M_DEVBUF);
free(ivars, M_DEVBUF);
return (0);
}
/*
* Find the backend details
*/
error = xenbus_gather(XBT_NIL, ivars->xd_node,
"backend-id", "%i", &ivars->xd_otherend_id,
"backend", NULL, &ivars->xd_otherend_path,
NULL);
if (error)
return (error);
sx_init(&ivars->xd_lock, "xdlock");
ivars->xd_type = strdup(type, M_DEVBUF);
ivars->xd_state = XenbusStateInitialising;
statepath = malloc(strlen(ivars->xd_otherend_path)
+ strlen("/state") + 1, M_DEVBUF, M_WAITOK);
sprintf(statepath, "%s/state", ivars->xd_otherend_path);
ivars->xd_otherend_watch.node = statepath;
ivars->xd_otherend_watch.callback = xenbus_backend_changed;
child = device_add_child(dev, NULL, -1);
ivars->xd_dev = child;
device_set_ivars(child, ivars);
return (0);
}
static int
xenbus_enumerate_type(device_t dev, const char *bus, const char *type)
{
char **dir;
unsigned int i, count;
int error;
error = xenbus_directory(XBT_NIL, bus, type, &count, &dir);
if (error)
return (error);
for (i = 0; i < count; i++)
xenbus_add_device(dev, bus, type, dir[i]);
free(dir, M_DEVBUF);
return (0);
}
static int
xenbus_enumerate_bus(device_t dev, const char *bus)
{
char **dir;
unsigned int i, count;
int error;
error = xenbus_directory(XBT_NIL, bus, "", &count, &dir);
if (error)
return (error);
for (i = 0; i < count; i++) {
xenbus_enumerate_type(dev, bus, dir[i]);
}
free(dir, M_DEVBUF);
return (0);
}
static int
xenbus_probe_children(device_t dev)
{
device_t *kids;
struct xenbus_device_ivars *ivars;
int i, count;
/*
* Probe any new devices and register watches for any that
* attach successfully. Since part of the protocol which
* establishes a connection with the other end is interrupt
* driven, we sleep until the device reaches a stable state
* (closed or connected).
*/
if (device_get_children(dev, &kids, &count) == 0) {
for (i = 0; i < count; i++) {
if (device_get_state(kids[i]) != DS_NOTPRESENT)
continue;
if (device_probe_and_attach(kids[i]))
continue;
ivars = device_get_ivars(kids[i]);
register_xenbus_watch(
&ivars->xd_otherend_watch);
sx_xlock(&ivars->xd_lock);
while (ivars->xd_state != XenbusStateClosed
&& ivars->xd_state != XenbusStateConnected)
sx_sleep(&ivars->xd_state, &ivars->xd_lock,
0, "xdattach", 0);
sx_xunlock(&ivars->xd_lock);
}
free(kids, M_TEMP);
}
return (0);
}
static void
xenbus_probe_children_cb(void *arg, int pending)
{
device_t dev = (device_t) arg;
xenbus_probe_children(dev);
}
static void
xenbus_devices_changed(struct xenbus_watch *watch,
const char **vec, unsigned int len)
{
struct xenbus_softc *sc = (struct xenbus_softc *) watch;
device_t dev = sc->xs_dev;
char *node, *bus, *type, *id, *p;
node = strdup(vec[XS_WATCH_PATH], M_DEVBUF);
p = strchr(node, '/');
if (!p)
goto out;
bus = node;
*p = 0;
type = p + 1;
p = strchr(type, '/');
if (!p)
goto out;
*p = 0;
id = p + 1;
p = strchr(id, '/');
if (p)
*p = 0;
xenbus_add_device(dev, bus, type, id);
taskqueue_enqueue(taskqueue_thread, &sc->xs_probechildren);
out:
free(node, M_DEVBUF);
}
static void
xenbus_attach_deferred(void *arg)
{
device_t dev = (device_t) arg;
struct xenbus_softc *sc = device_get_softc(dev);
int error;
error = xenbus_enumerate_bus(dev, "device");
if (error)
return;
xenbus_probe_children(dev);
sc->xs_dev = dev;
sc->xs_devicewatch.node = "device";
sc->xs_devicewatch.callback = xenbus_devices_changed;
TASK_INIT(&sc->xs_probechildren, 0, xenbus_probe_children_cb, dev);
register_xenbus_watch(&sc->xs_devicewatch);
config_intrhook_disestablish(&sc->xs_attachcb);
}
static int
xenbus_attach(device_t dev)
{
struct xenbus_softc *sc = device_get_softc(dev);
sc->xs_attachcb.ich_func = xenbus_attach_deferred;
sc->xs_attachcb.ich_arg = dev;
config_intrhook_establish(&sc->xs_attachcb);
return (0);
}
static int
xenbus_suspend(device_t dev)
{
int error;
DPRINTK("");
error = bus_generic_suspend(dev);
if (error)
return (error);
xs_suspend();
return (0);
}
static int
xenbus_resume(device_t dev)
{
device_t *kids;
struct xenbus_device_ivars *ivars;
int i, count, error;
char *statepath;
xb_init_comms();
xs_resume();
/*
* We must re-examine each device and find the new path for
* its backend.
*/
if (device_get_children(dev, &kids, &count) == 0) {
for (i = 0; i < count; i++) {
if (device_get_state(kids[i]) == DS_NOTPRESENT)
continue;
ivars = device_get_ivars(kids[i]);
unregister_xenbus_watch(
&ivars->xd_otherend_watch);
ivars->xd_state = XenbusStateInitialising;
/*
* Find the new backend details and
* re-register our watch.
*/
free(ivars->xd_otherend_path, M_DEVBUF);
error = xenbus_gather(XBT_NIL, ivars->xd_node,
"backend-id", "%i", &ivars->xd_otherend_id,
"backend", NULL, &ivars->xd_otherend_path,
NULL);
if (error)
return (error);
DEVICE_RESUME(kids[i]);
statepath = malloc(strlen(ivars->xd_otherend_path)
+ strlen("/state") + 1, M_DEVBUF, M_WAITOK);
sprintf(statepath, "%s/state", ivars->xd_otherend_path);
free(ivars->xd_otherend_watch.node, M_DEVBUF);
ivars->xd_otherend_watch.node = statepath;
register_xenbus_watch(
&ivars->xd_otherend_watch);
#if 0
/*
* Can't do this yet since we are running in
* the xenwatch thread and if we sleep here,
* we will stop delivering watch notifications
* and the device will never come back online.
*/
sx_xlock(&ivars->xd_lock);
while (ivars->xd_state != XenbusStateClosed
&& ivars->xd_state != XenbusStateConnected)
sx_sleep(&ivars->xd_state, &ivars->xd_lock,
0, "xdresume", 0);
sx_xunlock(&ivars->xd_lock);
#endif
}
free(kids, M_TEMP);
}
return (0);
}
static int
xenbus_print_child(device_t dev, device_t child)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
int retval = 0;
retval += bus_print_child_header(dev, child);
retval += printf(" at %s", ivars->xd_node);
retval += bus_print_child_footer(dev, child);
return (retval);
}
static int
xenbus_read_ivar(device_t dev, device_t child, int index,
uintptr_t * result)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
switch (index) {
case XENBUS_IVAR_NODE:
*result = (uintptr_t) ivars->xd_node;
return (0);
case XENBUS_IVAR_TYPE:
*result = (uintptr_t) ivars->xd_type;
return (0);
case XENBUS_IVAR_STATE:
*result = (uintptr_t) ivars->xd_state;
return (0);
case XENBUS_IVAR_OTHEREND_ID:
*result = (uintptr_t) ivars->xd_otherend_id;
return (0);
case XENBUS_IVAR_OTHEREND_PATH:
*result = (uintptr_t) ivars->xd_otherend_path;
return (0);
}
return (ENOENT);
}
static int
xenbus_write_ivar(device_t dev, device_t child, int index, uintptr_t value)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
enum xenbus_state newstate;
int currstate;
int error;
switch (index) {
case XENBUS_IVAR_STATE:
newstate = (enum xenbus_state) value;
sx_xlock(&ivars->xd_lock);
if (ivars->xd_state == newstate)
goto out;
error = xenbus_scanf(XBT_NIL, ivars->xd_node, "state",
NULL, "%d", &currstate);
if (error)
goto out;
error = xenbus_printf(XBT_NIL, ivars->xd_node, "state",
"%d", newstate);
if (error) {
if (newstate != XenbusStateClosing) /* Avoid looping */
xenbus_dev_fatal(dev, error, "writing new state");
goto out;
}
ivars->xd_state = newstate;
wakeup(&ivars->xd_state);
out:
sx_xunlock(&ivars->xd_lock);
return (0);
case XENBUS_IVAR_NODE:
case XENBUS_IVAR_TYPE:
case XENBUS_IVAR_OTHEREND_ID:
case XENBUS_IVAR_OTHEREND_PATH:
/*
* These variables are read-only.
*/
return (EINVAL);
}
return (ENOENT);
}
SYSCTL_NODE(_dev, OID_AUTO, xen, CTLFLAG_RD, NULL, "Xen");
SYSCTL_INT(_dev_xen, OID_AUTO, xsd_port, CTLFLAG_RD, &xen_store_evtchn, 0, "");
SYSCTL_ULONG(_dev_xen, OID_AUTO, xsd_kva, CTLFLAG_RD, (u_long *) &xen_store, 0, "");
static device_method_t xenbus_methods[] = {
/* Device interface */
DEVMETHOD(device_identify, xenbus_identify),
DEVMETHOD(device_probe, xenbus_probe),
DEVMETHOD(device_attach, xenbus_attach),
DEVMETHOD(device_detach, bus_generic_detach),
DEVMETHOD(device_shutdown, bus_generic_shutdown),
DEVMETHOD(device_suspend, xenbus_suspend),
DEVMETHOD(device_resume, xenbus_resume),
/* Bus interface */
DEVMETHOD(bus_print_child, xenbus_print_child),
DEVMETHOD(bus_read_ivar, xenbus_read_ivar),
DEVMETHOD(bus_write_ivar, xenbus_write_ivar),
{ 0, 0 }
};
static char driver_name[] = "xenbus";
static driver_t xenbus_driver = {
driver_name,
xenbus_methods,
sizeof(struct xenbus_softc),
};
devclass_t xenbus_devclass;
#ifdef XENHVM
DRIVER_MODULE(xenbus, xenpci, xenbus_driver, xenbus_devclass, 0, 0);
#else
DRIVER_MODULE(xenbus, nexus, xenbus_driver, xenbus_devclass, 0, 0);
#endif


@@ -1,308 +0,0 @@
/******************************************************************************
* Talks to Xen Store to figure out what devices we have (backend half).
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 Mike Wray, Hewlett-Packard
* Copyright (C) 2005, 2006 XenSource Ltd
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License version 2
* as published by the Free Software Foundation; or, when distributed
* separately from the Linux kernel or incorporated into other
* software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#if 0
#define DPRINTK(fmt, args...) \
printf("xenbus_probe (%s:%d) " fmt ".\n", __FUNCTION__, __LINE__, ##args)
#else
#define DPRINTK(fmt, args...) ((void)0)
#endif
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/types.h>
#include <sys/cdefs.h>
#include <sys/time.h>
#include <sys/sema.h>
#include <sys/eventhandler.h>
#include <sys/errno.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/conf.h>
#include <sys/systm.h>
#include <sys/syslog.h>
#include <sys/proc.h>
#include <sys/bus.h>
#include <sys/sx.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
#include <machine/xen/xenbus.h>
#include <machine/stdarg.h>
#include <xen/evtchn.h>
#include <xen/xenbus/xenbus_comms.h>
#define BUG_ON PANIC_IF
#define semaphore sema
#define rw_semaphore sema
#define DEFINE_SPINLOCK(lock) struct mtx lock
#define DECLARE_MUTEX(lock) struct sema lock
#define u32 uint32_t
#define list_del(head, ent) TAILQ_REMOVE(head, ent, list)
#define simple_strtoul strtoul
#define ARRAY_SIZE(x) (sizeof(x)/sizeof(x[0]))
#define list_empty TAILQ_EMPTY
extern struct xendev_list_head xenbus_device_backend_list;
#if 0
static int xenbus_uevent_backend(struct device *dev, char **envp,
int num_envp, char *buffer, int buffer_size);
#endif
static int xenbus_probe_backend(const char *type, const char *domid);
static int read_frontend_details(struct xenbus_device *xendev)
{
return read_otherend_details(xendev, "frontend-id", "frontend");
}
/* backend/<type>/<fe-uuid>/<id> => <type>-<fe-domid>-<id> */
static int backend_bus_id(char bus_id[BUS_ID_SIZE], const char *nodename)
{
int domid, err;
const char *devid, *type, *frontend;
unsigned int typelen;
type = strchr(nodename, '/');
if (!type)
return -EINVAL;
type++;
typelen = strcspn(type, "/");
if (!typelen || type[typelen] != '/')
return -EINVAL;
devid = strrchr(nodename, '/') + 1;
err = xenbus_gather(XBT_NIL, nodename, "frontend-id", "%i", &domid,
"frontend", NULL, &frontend,
NULL);
if (err)
return err;
if (strlen(frontend) == 0)
err = -ERANGE;
if (!err && !xenbus_exists(XBT_NIL, frontend, ""))
err = -ENOENT;
kfree(frontend);
if (err)
return err;
if (snprintf(bus_id, BUS_ID_SIZE,
"%.*s-%i-%s", typelen, type, domid, devid) >= BUS_ID_SIZE)
return -ENOSPC;
return 0;
}
static struct xen_bus_type xenbus_backend = {
.root = "backend",
.levels = 3, /* backend/type/<frontend>/<id> */
.get_bus_id = backend_bus_id,
.probe = xenbus_probe_backend,
.bus = &xenbus_device_backend_list,
#if 0
.error = -ENODEV,
.bus = {
.name = "xen-backend",
.match = xenbus_match,
.probe = xenbus_dev_probe,
.remove = xenbus_dev_remove,
// .shutdown = xenbus_dev_shutdown,
.uevent = xenbus_uevent_backend,
},
.dev = {
.bus_id = "xen-backend",
},
#endif
};
#if 0
static int xenbus_uevent_backend(struct device *dev, char **envp,
int num_envp, char *buffer, int buffer_size)
{
struct xenbus_device *xdev;
struct xenbus_driver *drv;
int i = 0;
int length = 0;
DPRINTK("");
if (dev == NULL)
return -ENODEV;
xdev = to_xenbus_device(dev);
if (xdev == NULL)
return -ENODEV;
/* stuff we want to pass to /sbin/hotplug */
add_uevent_var(envp, num_envp, &i, buffer, buffer_size, &length,
"XENBUS_TYPE=%s", xdev->devicetype);
add_uevent_var(envp, num_envp, &i, buffer, buffer_size, &length,
"XENBUS_PATH=%s", xdev->nodename);
add_uevent_var(envp, num_envp, &i, buffer, buffer_size, &length,
"XENBUS_BASE_PATH=%s", xenbus_backend.root);
/* terminate, set to next free slot, shrink available space */
envp[i] = NULL;
envp = &envp[i];
num_envp -= i;
buffer = &buffer[length];
buffer_size -= length;
if (dev->driver) {
drv = to_xenbus_driver(dev->driver);
if (drv && drv->uevent)
return drv->uevent(xdev, envp, num_envp, buffer,
buffer_size);
}
return 0;
}
#endif
int xenbus_register_backend(struct xenbus_driver *drv)
{
drv->read_otherend_details = read_frontend_details;
return xenbus_register_driver_common(drv, &xenbus_backend);
}
/* backend/<typename>/<frontend-uuid>/<name> */
static int xenbus_probe_backend_unit(const char *dir,
const char *type,
const char *name)
{
char *nodename;
int err;
nodename = kasprintf("%s/%s", dir, name);
if (!nodename)
return -ENOMEM;
DPRINTK("%s\n", nodename);
err = xenbus_probe_node(&xenbus_backend, type, nodename);
kfree(nodename);
return err;
}
/* backend/<typename>/<frontend-domid> */
static int xenbus_probe_backend(const char *type, const char *domid)
{
char *nodename;
int err = 0;
char **dir;
unsigned int i, dir_n = 0;
DPRINTK("");
nodename = kasprintf("%s/%s/%s", xenbus_backend.root, type, domid);
if (!nodename)
return -ENOMEM;
dir = xenbus_directory(XBT_NIL, nodename, "", &dir_n);
if (IS_ERR(dir)) {
kfree(nodename);
return PTR_ERR(dir);
}
for (i = 0; i < dir_n; i++) {
err = xenbus_probe_backend_unit(nodename, type, dir[i]);
if (err)
break;
}
kfree(dir);
kfree(nodename);
return err;
}
static void backend_changed(struct xenbus_watch *watch,
const char **vec, unsigned int len)
{
DPRINTK("");
dev_changed(vec[XS_WATCH_PATH], &xenbus_backend);
}
static struct xenbus_watch be_watch = {
.node = "backend",
.callback = backend_changed,
};
#if 0
void xenbus_backend_suspend(int (*fn)(struct device *, void *))
{
DPRINTK("");
if (!xenbus_backend.error)
bus_for_each_dev(&xenbus_backend.bus, NULL, NULL, fn);
}
void xenbus_backend_resume(int (*fn)(struct device *, void *))
{
DPRINTK("");
if (!xenbus_backend.error)
bus_for_each_dev(&xenbus_backend.bus, NULL, NULL, fn);
}
#endif
void xenbus_backend_probe_and_watch(void)
{
xenbus_probe_devices(&xenbus_backend);
register_xenbus_watch(&be_watch);
}
#if 0
void xenbus_backend_bus_register(void)
{
xenbus_backend.error = bus_register(&xenbus_backend.bus);
if (xenbus_backend.error)
log(LOG_WARNING,
"XENBUS: Error registering backend bus: %i\n",
xenbus_backend.error);
}
void xenbus_backend_device_register(void)
{
if (xenbus_backend.error)
return;
xenbus_backend.error = device_register(&xenbus_backend.dev);
if (xenbus_backend.error) {
bus_unregister(&xenbus_backend.bus);
log(LOG_WARNING,
"XENBUS: Error registering backend device: %i\n",
xenbus_backend.error);
}
}
#endif


@ -1,935 +0,0 @@
/******************************************************************************
* xenbus_xs.c
*
* This is the kernel equivalent of the "xs" library. We don't need everything
* and we use xenbus_comms for communication.
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/uio.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/sx.h>
#include <sys/syslog.h>
#include <sys/malloc.h>
#include <sys/systm.h>
#include <sys/proc.h>
#include <sys/kthread.h>
#include <sys/unistd.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
#include <machine/stdarg.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenbus/xenbus_comms.h>
#include <xen/interface/hvm/params.h>
#include <vm/vm.h>
#include <vm/pmap.h>
static int xs_process_msg(enum xsd_sockmsg_type *type);
int xenwatch_running = 0;
int xenbus_running = 0;
int xen_store_evtchn;
struct xs_stored_msg {
TAILQ_ENTRY(xs_stored_msg) list;
struct xsd_sockmsg hdr;
union {
/* Queued replies. */
struct {
char *body;
} reply;
/* Queued watch events. */
struct {
struct xenbus_watch *handle;
char **vec;
unsigned int vec_size;
} watch;
} u;
};
struct xs_handle {
/* A list of replies. Currently only one will ever be outstanding. */
TAILQ_HEAD(xs_handle_list, xs_stored_msg) reply_list;
struct mtx reply_lock;
int reply_waitq;
/* One request at a time. */
struct sx request_mutex;
/* Protect transactions against save/restore. */
struct sx suspend_mutex;
};
static struct xs_handle xs_state;
/* List of registered watches, and a lock to protect it. */
static LIST_HEAD(watch_list_head, xenbus_watch) watches;
static struct mtx watches_lock;
/* List of pending watch callback events, and a lock to protect it. */
static TAILQ_HEAD(event_list_head, xs_stored_msg) watch_events;
static struct mtx watch_events_lock;
/*
* Details of the xenwatch callback kernel thread. The thread waits on the
* watch_events_waitq for work to do (queued on watch_events list). When it
* wakes up it acquires the xenwatch_mutex before reading the list and
* carrying out work.
*/
static pid_t xenwatch_pid;
struct sx xenwatch_mutex;
static int watch_events_waitq;
#define xsd_error_count (sizeof(xsd_errors) / sizeof(xsd_errors[0]))
static int
xs_get_error(const char *errorstring)
{
unsigned int i;
for (i = 0; i < xsd_error_count; i++) {
if (!strcmp(errorstring, xsd_errors[i].errstring))
return (xsd_errors[i].errnum);
}
log(LOG_WARNING, "XENBUS xen store gave: unknown error %s",
errorstring);
return (EINVAL);
}
extern void kdb_backtrace(void);
static int
xs_read_reply(enum xsd_sockmsg_type *type, unsigned int *len, void **result)
{
struct xs_stored_msg *msg;
char *body;
int error;
mtx_lock(&xs_state.reply_lock);
while (TAILQ_EMPTY(&xs_state.reply_list)) {
error = mtx_sleep(&xs_state.reply_waitq,
&xs_state.reply_lock,
PCATCH, "xswait", hz/10);
if (error && error != EWOULDBLOCK) {
mtx_unlock(&xs_state.reply_lock);
return (error);
}
}
msg = TAILQ_FIRST(&xs_state.reply_list);
TAILQ_REMOVE(&xs_state.reply_list, msg, list);
mtx_unlock(&xs_state.reply_lock);
*type = msg->hdr.type;
if (len)
*len = msg->hdr.len;
body = msg->u.reply.body;
free(msg, M_DEVBUF);
*result = body;
return (0);
}
#if 0
/* Emergency write. UNUSED*/
void xenbus_debug_write(const char *str, unsigned int count)
{
struct xsd_sockmsg msg = { 0 };
msg.type = XS_DEBUG;
msg.len = sizeof("print") + count + 1;
sx_xlock(&xs_state.request_mutex);
xb_write(&msg, sizeof(msg));
xb_write("print", sizeof("print"));
xb_write(str, count);
xb_write("", 1);
sx_xunlock(&xs_state.request_mutex);
}
#endif
int
xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void **result)
{
struct xsd_sockmsg req_msg = *msg;
int error;
if (req_msg.type == XS_TRANSACTION_START)
sx_slock(&xs_state.suspend_mutex);
sx_xlock(&xs_state.request_mutex);
error = xb_write(msg, sizeof(*msg) + msg->len,
&xs_state.request_mutex.lock_object);
if (error) {
msg->type = XS_ERROR;
} else {
error = xs_read_reply(&msg->type, &msg->len, result);
}
sx_xunlock(&xs_state.request_mutex);
if ((msg->type == XS_TRANSACTION_END) ||
((req_msg.type == XS_TRANSACTION_START) &&
(msg->type == XS_ERROR)))
sx_sunlock(&xs_state.suspend_mutex);
return (error);
}
/*
* Send message to xs. The reply is returned in *result and should be
* fred with free(*result, M_DEVBUF). Return zero on success or an
* error code on failure.
*/
static int
xs_talkv(struct xenbus_transaction t, enum xsd_sockmsg_type type,
const struct iovec *iovec, unsigned int num_vecs,
unsigned int *len, void **result)
{
struct xsd_sockmsg msg;
void *ret = NULL;
unsigned int i;
int error;
msg.tx_id = t.id;
msg.req_id = 0;
msg.type = type;
msg.len = 0;
for (i = 0; i < num_vecs; i++)
msg.len += iovec[i].iov_len;
sx_xlock(&xs_state.request_mutex);
error = xb_write(&msg, sizeof(msg),
&xs_state.request_mutex.lock_object);
if (error) {
sx_xunlock(&xs_state.request_mutex);
printf("xs_talkv failed %d\n", error);
return (error);
}
for (i = 0; i < num_vecs; i++) {
error = xb_write(iovec[i].iov_base, iovec[i].iov_len,
&xs_state.request_mutex.lock_object);
if (error) {
sx_xunlock(&xs_state.request_mutex);
printf("xs_talkv failed %d\n", error);
return (error);
}
}
error = xs_read_reply(&msg.type, len, &ret);
sx_xunlock(&xs_state.request_mutex);
if (error)
return (error);
if (msg.type == XS_ERROR) {
error = xs_get_error(ret);
free(ret, M_DEVBUF);
return (error);
}
#if 0
if ((xenwatch_running == 0) && (xenwatch_inline == 0)) {
xenwatch_inline = 1;
while (!TAILQ_EMPTY(&watch_events)
&& xenwatch_running == 0) {
struct xs_stored_msg *wmsg = TAILQ_FIRST(&watch_events);
TAILQ_REMOVE(&watch_events, wmsg, list);
wmsg->u.watch.handle->callback(
wmsg->u.watch.handle,
(const char **)wmsg->u.watch.vec,
wmsg->u.watch.vec_size);
free(wmsg->u.watch.vec, M_DEVBUF);
free(wmsg, M_DEVBUF);
}
xenwatch_inline = 0;
}
#endif
KASSERT(msg.type == type, ("bad xenstore message type"));
if (result)
*result = ret;
else
free(ret, M_DEVBUF);
return (0);
}
/* Simplified version of xs_talkv: single message. */
static int
xs_single(struct xenbus_transaction t, enum xsd_sockmsg_type type,
const char *string, unsigned int *len, void **result)
{
struct iovec iovec;
iovec.iov_base = (void *)(uintptr_t) string;
iovec.iov_len = strlen(string) + 1;
return (xs_talkv(t, type, &iovec, 1, len, result));
}
static unsigned int
count_strings(const char *strings, unsigned int len)
{
unsigned int num;
const char *p;
for (p = strings, num = 0; p < strings + len; p += strlen(p) + 1)
num++;
return num;
}
/* Return the path to dir with /name appended. Buffer must be kfree()'ed. */
static char *
join(const char *dir, const char *name)
{
char *buffer;
buffer = malloc(strlen(dir) + strlen("/") + strlen(name) + 1,
M_DEVBUF, M_WAITOK);
strcpy(buffer, dir);
if (strcmp(name, "")) {
strcat(buffer, "/");
strcat(buffer, name);
}
return (buffer);
}
static char **
split(char *strings, unsigned int len, unsigned int *num)
{
char *p, **ret;
/* Count the strings. */
*num = count_strings(strings, len) + 1;
/* Transfer to one big alloc for easy freeing. */
ret = malloc(*num * sizeof(char *) + len, M_DEVBUF, M_WAITOK);
memcpy(&ret[*num], strings, len);
free(strings, M_DEVBUF);
strings = (char *)&ret[*num];
for (p = strings, *num = 0; p < strings + len; p += strlen(p) + 1)
ret[(*num)++] = p;
ret[*num] = strings + len;
return ret;
}
/*
* Return the contents of a directory in *result which should be freed
* with free(*result, M_DEVBUF).
*/
int
xenbus_directory(struct xenbus_transaction t, const char *dir,
const char *node, unsigned int *num, char ***result)
{
char *strings, *path;
unsigned int len = 0;
int error;
path = join(dir, node);
error = xs_single(t, XS_DIRECTORY, path, &len, (void **) &strings);
free(path, M_DEVBUF);
if (error)
return (error);
*result = split(strings, len, num);
return (0);
}
/*
* Check if a path exists. Return 1 if it does.
*/
int
xenbus_exists(struct xenbus_transaction t, const char *dir, const char *node)
{
char **d;
int error, dir_n;
error = xenbus_directory(t, dir, node, &dir_n, &d);
if (error)
return (0);
free(d, M_DEVBUF);
return (1);
}
/*
* Get the value of a single file. Returns the contents in *result
* which should be freed with free(*result, M_DEVBUF) after use.
* The length of the value in bytes is returned in *len.
*/
int
xenbus_read(struct xenbus_transaction t, const char *dir, const char *node,
unsigned int *len, void **result)
{
char *path;
void *ret;
int error;
path = join(dir, node);
error = xs_single(t, XS_READ, path, len, &ret);
free(path, M_DEVBUF);
if (error)
return (error);
*result = ret;
return (0);
}
/*
* Write the value of a single file. Returns error on failure.
*/
int
xenbus_write(struct xenbus_transaction t, const char *dir, const char *node,
const char *string)
{
char *path;
struct iovec iovec[2];
int error;
path = join(dir, node);
iovec[0].iov_base = (void *)(uintptr_t) path;
iovec[0].iov_len = strlen(path) + 1;
iovec[1].iov_base = (void *)(uintptr_t) string;
iovec[1].iov_len = strlen(string);
error = xs_talkv(t, XS_WRITE, iovec, 2, NULL, NULL);
free(path, M_DEVBUF);
return (error);
}
/*
* Create a new directory.
*/
int
xenbus_mkdir(struct xenbus_transaction t, const char *dir, const char *node)
{
char *path;
int ret;
path = join(dir, node);
ret = xs_single(t, XS_MKDIR, path, NULL, NULL);
free(path, M_DEVBUF);
return (ret);
}
/*
* Destroy a file or directory (directories must be empty).
*/
int
xenbus_rm(struct xenbus_transaction t, const char *dir, const char *node)
{
char *path;
int ret;
path = join(dir, node);
ret = xs_single(t, XS_RM, path, NULL, NULL);
free(path, M_DEVBUF);
return (ret);
}
/*
* Start a transaction: changes by others will not be seen during this
* transaction, and changes will not be visible to others until end.
*/
int
xenbus_transaction_start(struct xenbus_transaction *t)
{
char *id_str;
int error;
sx_slock(&xs_state.suspend_mutex);
error = xs_single(XBT_NIL, XS_TRANSACTION_START, "", NULL,
(void **) &id_str);
if (error) {
sx_sunlock(&xs_state.suspend_mutex);
return (error);
}
t->id = strtoul(id_str, NULL, 0);
free(id_str, M_DEVBUF);
return (0);
}
/*
* End a transaction. If abort is true, the transaction is discarded
* instead of committed.
*/
int xenbus_transaction_end(struct xenbus_transaction t, int abort)
{
char abortstr[2];
int error;
if (abort)
strcpy(abortstr, "F");
else
strcpy(abortstr, "T");
error = xs_single(t, XS_TRANSACTION_END, abortstr, NULL, NULL);
sx_sunlock(&xs_state.suspend_mutex);
return (error);
}
/* Single read and scanf: returns zero or errno. */
int
xenbus_scanf(struct xenbus_transaction t,
const char *dir, const char *node, int *scancountp, const char *fmt, ...)
{
va_list ap;
int error, ns;
char *val;
error = xenbus_read(t, dir, node, NULL, (void **) &val);
if (error)
return (error);
va_start(ap, fmt);
ns = vsscanf(val, fmt, ap);
va_end(ap);
free(val, M_DEVBUF);
/* Distinctive errno. */
if (ns == 0)
return (ERANGE);
if (scancountp)
*scancountp = ns;
return (0);
}
/* Single printf and write: returns zero or errno. */
int
xenbus_printf(struct xenbus_transaction t,
const char *dir, const char *node, const char *fmt, ...)
{
va_list ap;
int error, ret;
#define PRINTF_BUFFER_SIZE 4096
char *printf_buffer;
printf_buffer = malloc(PRINTF_BUFFER_SIZE, M_DEVBUF, M_WAITOK);
va_start(ap, fmt);
ret = vsnprintf(printf_buffer, PRINTF_BUFFER_SIZE, fmt, ap);
va_end(ap);
KASSERT(ret <= PRINTF_BUFFER_SIZE-1, ("xenbus_printf: message too large"));
error = xenbus_write(t, dir, node, printf_buffer);
free(printf_buffer, M_DEVBUF);
return (error);
}
/* Takes tuples of names, scanf-style args, and void **, NULL terminated. */
int
xenbus_gather(struct xenbus_transaction t, const char *dir, ...)
{
va_list ap;
const char *name;
int error, i;
for (i = 0; i < 10000; i++)
HYPERVISOR_yield();
va_start(ap, dir);
error = 0;
while (error == 0 && (name = va_arg(ap, char *)) != NULL) {
const char *fmt = va_arg(ap, char *);
void *result = va_arg(ap, void *);
char *p;
error = xenbus_read(t, dir, name, NULL, (void **) &p);
if (error)
break;
if (fmt) {
if (sscanf(p, fmt, result) == 0)
error = EINVAL;
free(p, M_DEVBUF);
} else
*(char **)result = p;
}
va_end(ap);
return (error);
}
static int
xs_watch(const char *path, const char *token)
{
struct iovec iov[2];
iov[0].iov_base = (void *)(uintptr_t) path;
iov[0].iov_len = strlen(path) + 1;
iov[1].iov_base = (void *)(uintptr_t) token;
iov[1].iov_len = strlen(token) + 1;
return (xs_talkv(XBT_NIL, XS_WATCH, iov, 2, NULL, NULL));
}
static int
xs_unwatch(const char *path, const char *token)
{
struct iovec iov[2];
iov[0].iov_base = (void *)(uintptr_t) path;
iov[0].iov_len = strlen(path) + 1;
iov[1].iov_base = (void *)(uintptr_t) token;
iov[1].iov_len = strlen(token) + 1;
return (xs_talkv(XBT_NIL, XS_UNWATCH, iov, 2, NULL, NULL));
}
static struct xenbus_watch *
find_watch(const char *token)
{
struct xenbus_watch *i, *cmp;
cmp = (void *)strtoul(token, NULL, 16);
LIST_FOREACH(i, &watches, list)
if (i == cmp)
return (i);
return (NULL);
}
/* Register callback to watch this node. */
int
register_xenbus_watch(struct xenbus_watch *watch)
{
/* Pointer in ascii is the token. */
char token[sizeof(watch) * 2 + 1];
int error;
sprintf(token, "%lX", (long)watch);
sx_slock(&xs_state.suspend_mutex);
mtx_lock(&watches_lock);
KASSERT(find_watch(token) == NULL, ("watch already registered"));
LIST_INSERT_HEAD(&watches, watch, list);
mtx_unlock(&watches_lock);
error = xs_watch(watch->node, token);
/* Ignore errors due to multiple registration. */
if (error == EEXIST) {
mtx_lock(&watches_lock);
LIST_REMOVE(watch, list);
mtx_unlock(&watches_lock);
}
sx_sunlock(&xs_state.suspend_mutex);
return (error);
}
void
unregister_xenbus_watch(struct xenbus_watch *watch)
{
struct xs_stored_msg *msg, *tmp;
char token[sizeof(watch) * 2 + 1];
int error;
sprintf(token, "%lX", (long)watch);
sx_slock(&xs_state.suspend_mutex);
mtx_lock(&watches_lock);
KASSERT(find_watch(token), ("watch not registered"));
LIST_REMOVE(watch, list);
mtx_unlock(&watches_lock);
error = xs_unwatch(watch->node, token);
if (error)
log(LOG_WARNING, "XENBUS Failed to release watch %s: %i\n",
watch->node, error);
sx_sunlock(&xs_state.suspend_mutex);
/* Cancel pending watch events. */
mtx_lock(&watch_events_lock);
TAILQ_FOREACH_SAFE(msg, &watch_events, list, tmp) {
if (msg->u.watch.handle != watch)
continue;
TAILQ_REMOVE(&watch_events, msg, list);
free(msg->u.watch.vec, M_DEVBUF);
free(msg, M_DEVBUF);
}
mtx_unlock(&watch_events_lock);
/* Flush any currently-executing callback, unless we are it. :-) */
if (curproc->p_pid != xenwatch_pid) {
sx_xlock(&xenwatch_mutex);
sx_xunlock(&xenwatch_mutex);
}
}
void
xs_suspend(void)
{
sx_xlock(&xs_state.suspend_mutex);
sx_xlock(&xs_state.request_mutex);
}
void
xs_resume(void)
{
struct xenbus_watch *watch;
char token[sizeof(watch) * 2 + 1];
sx_xunlock(&xs_state.request_mutex);
/* No need for watches_lock: the suspend_mutex is sufficient. */
LIST_FOREACH(watch, &watches, list) {
sprintf(token, "%lX", (long)watch);
xs_watch(watch->node, token);
}
sx_xunlock(&xs_state.suspend_mutex);
}
static void
xenwatch_thread(void *unused)
{
struct xs_stored_msg *msg;
for (;;) {
mtx_lock(&watch_events_lock);
while (TAILQ_EMPTY(&watch_events))
mtx_sleep(&watch_events_waitq,
&watch_events_lock,
PWAIT | PCATCH, "waitev", hz/10);
mtx_unlock(&watch_events_lock);
sx_xlock(&xenwatch_mutex);
mtx_lock(&watch_events_lock);
msg = TAILQ_FIRST(&watch_events);
if (msg)
TAILQ_REMOVE(&watch_events, msg, list);
mtx_unlock(&watch_events_lock);
if (msg != NULL) {
/*
* XXX There are messages coming in with a NULL callback.
* XXX This deserves further investigation; the workaround
* XXX here simply prevents the kernel from panic'ing
* XXX on startup.
*/
if (msg->u.watch.handle->callback != NULL)
msg->u.watch.handle->callback(
msg->u.watch.handle,
(const char **)msg->u.watch.vec,
msg->u.watch.vec_size);
free(msg->u.watch.vec, M_DEVBUF);
free(msg, M_DEVBUF);
}
sx_xunlock(&xenwatch_mutex);
}
}
static int
xs_process_msg(enum xsd_sockmsg_type *type)
{
struct xs_stored_msg *msg;
char *body;
int error;
msg = malloc(sizeof(*msg), M_DEVBUF, M_WAITOK);
mtx_lock(&xs_state.reply_lock);
error = xb_read(&msg->hdr, sizeof(msg->hdr),
&xs_state.reply_lock.lock_object);
mtx_unlock(&xs_state.reply_lock);
if (error) {
free(msg, M_DEVBUF);
return (error);
}
body = malloc(msg->hdr.len + 1, M_DEVBUF, M_WAITOK);
mtx_lock(&xs_state.reply_lock);
error = xb_read(body, msg->hdr.len,
&xs_state.reply_lock.lock_object);
mtx_unlock(&xs_state.reply_lock);
if (error) {
free(body, M_DEVBUF);
free(msg, M_DEVBUF);
return (error);
}
body[msg->hdr.len] = '\0';
*type = msg->hdr.type;
if (msg->hdr.type == XS_WATCH_EVENT) {
msg->u.watch.vec = split(body, msg->hdr.len,
&msg->u.watch.vec_size);
mtx_lock(&watches_lock);
msg->u.watch.handle = find_watch(
msg->u.watch.vec[XS_WATCH_TOKEN]);
if (msg->u.watch.handle != NULL) {
mtx_lock(&watch_events_lock);
TAILQ_INSERT_TAIL(&watch_events, msg, list);
wakeup(&watch_events_waitq);
mtx_unlock(&watch_events_lock);
} else {
free(msg->u.watch.vec, M_DEVBUF);
free(msg, M_DEVBUF);
}
mtx_unlock(&watches_lock);
} else {
msg->u.reply.body = body;
mtx_lock(&xs_state.reply_lock);
TAILQ_INSERT_TAIL(&xs_state.reply_list, msg, list);
wakeup(&xs_state.reply_waitq);
mtx_unlock(&xs_state.reply_lock);
}
return 0;
}
static void
xenbus_thread(void *unused)
{
int error;
enum xsd_sockmsg_type type;
xenbus_running = 1;
for (;;) {
error = xs_process_msg(&type);
if (error)
printf("XENBUS error %d while reading message\n",
error);
}
}
#ifdef XENHVM
static unsigned long xen_store_mfn;
char *xen_store;
static inline unsigned long
hvm_get_parameter(int index)
{
struct xen_hvm_param xhv;
int error;
xhv.domid = DOMID_SELF;
xhv.index = index;
error = HYPERVISOR_hvm_op(HVMOP_get_param, &xhv);
if (error) {
printf("hvm_get_parameter: failed to get %d, error %d\n",
index, error);
return (0);
}
return (xhv.value);
}
#endif
int
xs_init(void)
{
int error;
struct proc *p;
#ifdef XENHVM
xen_store_evtchn = hvm_get_parameter(HVM_PARAM_STORE_EVTCHN);
xen_store_mfn = hvm_get_parameter(HVM_PARAM_STORE_PFN);
xen_store = pmap_mapdev(xen_store_mfn * PAGE_SIZE, PAGE_SIZE);
#else
xen_store_evtchn = xen_start_info->store_evtchn;
#endif
TAILQ_INIT(&xs_state.reply_list);
TAILQ_INIT(&watch_events);
sx_init(&xenwatch_mutex, "xenwatch");
mtx_init(&xs_state.reply_lock, "state reply", NULL, MTX_DEF);
sx_init(&xs_state.request_mutex, "xenstore request");
sx_init(&xs_state.suspend_mutex, "xenstore suspend");
#if 0
mtx_init(&xs_state.suspend_mutex, "xenstore suspend", NULL, MTX_DEF);
sema_init(&xs_state.request_mutex, 1, "xenstore request");
sema_init(&xenwatch_mutex, 1, "xenwatch");
#endif
mtx_init(&watches_lock, "watches", NULL, MTX_DEF);
mtx_init(&watch_events_lock, "watch events", NULL, MTX_DEF);
/* Initialize the shared memory rings to talk to xenstored */
error = xb_init_comms();
if (error)
return (error);
xenwatch_running = 1;
error = kproc_create(xenwatch_thread, NULL, &p,
RFHIGHPID, 0, "xenwatch");
if (error)
return (error);
xenwatch_pid = p->p_pid;
error = kproc_create(xenbus_thread, NULL, NULL,
RFHIGHPID, 0, "xenbus");
return (error);
}

sys/xen/xenbus/xenbusb.c Normal file

@ -0,0 +1,878 @@
/******************************************************************************
* Copyright (C) 2010 Spectra Logic Corporation
* Copyright (C) 2008 Doug Rabson
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 Mike Wray, Hewlett-Packard
* Copyright (C) 2005 XenSource Ltd
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
/**
* \file xenbusb.c
*
* \brief Shared support functions for managing the NewBus busses that contain
* Xen front and back end device instances.
*
* The NewBus implementation of XenBus attaches a xenbusb_front and xenbusb_back
* child bus to the xenstore device. This strategy allows the small differences
* in the handling of XenBus operations for front and back devices to be handled
* as overrides in xenbusb_front/back.c. Front and back specific device
* classes are also provided so device drivers can register for the devices they
* can handle without the need to filter within their probe routines. The
* net result is a device hierarchy that might look like this:
*
* xenstore0/
* xenbusb_front0/
* xn0
* xbd0
* xbd1
* xenbusb_back0/
* xbbd0
* xnb0
* xnb1
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/sbuf.h>
#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/systm.h>
#include <sys/sx.h>
#include <sys/taskqueue.h>
#include <machine/xen/xen-os.h>
#include <machine/stdarg.h>
#include <xen/gnttab.h>
#include <xen/xenstore/xenstorevar.h>
#include <xen/xenbus/xenbusb.h>
#include <xen/xenbus/xenbusvar.h>
/*------------------------- Private Functions --------------------------------*/
/**
* \brief Deallocate XenBus device instance variables.
*
* \param ivars The instance variable block to free.
*/
static void
xenbusb_free_child_ivars(struct xenbus_device_ivars *ivars)
{
if (ivars->xd_otherend_watch.node != NULL) {
xs_unregister_watch(&ivars->xd_otherend_watch);
free(ivars->xd_otherend_watch.node, M_XENBUS);
ivars->xd_otherend_watch.node = NULL;
}
if (ivars->xd_node != NULL) {
free(ivars->xd_node, M_XENBUS);
ivars->xd_node = NULL;
}
if (ivars->xd_type != NULL) {
free(ivars->xd_type, M_XENBUS);
ivars->xd_type = NULL;
}
if (ivars->xd_otherend_path != NULL) {
free(ivars->xd_otherend_path, M_XENBUS);
ivars->xd_otherend_path = NULL;
}
free(ivars, M_XENBUS);
}
/**
* XenBus watch callback registered against the "state" XenStore
* node of the other-end of a split device connection.
*
* This callback is invoked whenever the state of a device instance's
* peer changes.
*
* \param watch The xs_watch object used to register this callback
* function.
* \param vec An array of pointers to NUL terminated strings containing
* watch event data. The vector should be indexed via the
* xs_watch_type enum in xs_wire.h.
* \param vec_size The number of elements in vec.
*
*/
static void
xenbusb_otherend_changed(struct xs_watch *watch, const char **vec,
unsigned int vec_size __unused)
{
struct xenbus_device_ivars *ivars;
device_t dev;
enum xenbus_state newstate;
ivars = (struct xenbus_device_ivars *) watch;
dev = ivars->xd_dev;
if (!ivars->xd_otherend_path
|| strncmp(ivars->xd_otherend_path, vec[XS_WATCH_PATH],
strlen(ivars->xd_otherend_path)))
return;
newstate = xenbus_read_driver_state(ivars->xd_otherend_path);
XENBUS_OTHEREND_CHANGED(dev, newstate);
}
/**
* Search our internal record of configured devices (not the XenStore)
* to determine if the XenBus device indicated by \a node is known to
* the system.
*
* \param dev The XenBus bus instance to search for device children.
* \param node The XenStore node path for the device to find.
*
* \return The device_t of the found device if any, or NULL.
*
* \note device_t is a pointer type, so it can be compared against
* NULL for validity.
*/
static device_t
xenbusb_device_exists(device_t dev, const char *node)
{
device_t *kids;
device_t result;
struct xenbus_device_ivars *ivars;
int i, count;
if (device_get_children(dev, &kids, &count))
return (NULL);
result = NULL;
for (i = 0; i < count; i++) {
ivars = device_get_ivars(kids[i]);
if (!strcmp(ivars->xd_node, node)) {
result = kids[i];
break;
}
}
free(kids, M_TEMP);
return (result);
}
static void
xenbusb_delete_child(device_t dev, device_t child)
{
struct xenbus_device_ivars *ivars;
ivars = device_get_ivars(child);
/*
* We no longer care about the otherend of the
* connection. Cancel the watch now so that we
* don't try to handle an event for a partially
* detached child.
*/
if (ivars->xd_otherend_watch.node != NULL)
xs_unregister_watch(&ivars->xd_otherend_watch);
device_delete_child(dev, child);
xenbusb_free_child_ivars(ivars);
}
/**
* \param dev The NewBus device representing this XenBus bus.
* \param child The NewBus device representing a child of dev%'s XenBus bus.
*/
static void
xenbusb_verify_device(device_t dev, device_t child)
{
if (xs_exists(XST_NIL, xenbus_get_node(child), "") == 0) {
/*
* Device tree has been removed from Xenbus.
* Tear down the device.
*/
xenbusb_delete_child(dev, child);
}
}
/**
* \brief Enumerate the devices on a XenBus bus and register them with
* the NewBus device tree.
*
* xenbusb_enumerate_bus() will create entries (in state DS_NOTPRESENT)
* for nodes that appear in the XenStore, but will not invoke probe/attach
* operations on drivers. Probe/Attach processing must be separately
* performed via an invocation of xenbusb_probe_children(). This is usually
* done via the xbs_probe_children task.
*
* \param xbs XenBus Bus device softc of the owner of the bus to enumerate.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xenbusb_enumerate_bus(struct xenbusb_softc *xbs)
{
const char **types;
u_int type_idx;
u_int type_count;
int error;
error = xs_directory(XST_NIL, xbs->xbs_node, "", &type_count, &types);
if (error)
return (error);
for (type_idx = 0; type_idx < type_count; type_idx++)
XENBUSB_ENUMERATE_TYPE(xbs->xbs_dev, types[type_idx]);
free(types, M_XENSTORE);
return (0);
}
/**
* Handler for all generic XenBus device sysctl nodes.
*/
static int
xenbusb_device_sysctl_handler(SYSCTL_HANDLER_ARGS)
{
device_t dev;
const char *value;
dev = (device_t)arg1;
switch (arg2) {
case XENBUS_IVAR_NODE:
value = xenbus_get_node(dev);
break;
case XENBUS_IVAR_TYPE:
value = xenbus_get_type(dev);
break;
case XENBUS_IVAR_STATE:
value = xenbus_strstate(xenbus_get_state(dev));
break;
case XENBUS_IVAR_OTHEREND_ID:
return (sysctl_handle_int(oidp, NULL,
xenbus_get_otherend_id(dev),
req));
/* NOTREACHED */
case XENBUS_IVAR_OTHEREND_PATH:
value = xenbus_get_otherend_path(dev);
break;
default:
return (EINVAL);
}
return (SYSCTL_OUT(req, value, strlen(value)));
}
/**
* Create read-only sysctl nodes for xenbusb device ivar data.
*
* \param dev The XenBus device instance to register with sysctl.
*/
static void
xenbusb_device_sysctl_init(device_t dev)
{
struct sysctl_ctx_list *ctx;
struct sysctl_oid *tree;
ctx = device_get_sysctl_ctx(dev);
tree = device_get_sysctl_tree(dev);
SYSCTL_ADD_PROC(ctx,
SYSCTL_CHILDREN(tree),
OID_AUTO,
"xenstore_path",
CTLFLAG_RD,
dev,
XENBUS_IVAR_NODE,
xenbusb_device_sysctl_handler,
"A",
"XenStore path to device");
SYSCTL_ADD_PROC(ctx,
SYSCTL_CHILDREN(tree),
OID_AUTO,
"xenbus_dev_type",
CTLFLAG_RD,
dev,
XENBUS_IVAR_TYPE,
xenbusb_device_sysctl_handler,
"A",
"XenBus device type");
SYSCTL_ADD_PROC(ctx,
SYSCTL_CHILDREN(tree),
OID_AUTO,
"xenbus_connection_state",
CTLFLAG_RD,
dev,
XENBUS_IVAR_STATE,
xenbusb_device_sysctl_handler,
"A",
"XenBus state of peer connection");
SYSCTL_ADD_PROC(ctx,
SYSCTL_CHILDREN(tree),
OID_AUTO,
"xenbus_peer_domid",
CTLFLAG_RD,
dev,
XENBUS_IVAR_OTHEREND_ID,
xenbusb_device_sysctl_handler,
"I",
"Xen domain ID of peer");
SYSCTL_ADD_PROC(ctx,
SYSCTL_CHILDREN(tree),
OID_AUTO,
"xenstore_peer_path",
CTLFLAG_RD,
dev,
XENBUS_IVAR_OTHEREND_PATH,
xenbusb_device_sysctl_handler,
"A",
"XenStore path to peer device");
}
/**
* \brief Verify the existence of attached device instances and perform
* probe/attach processing for newly arrived devices.
*
* \param dev The NewBus device representing this XenBus bus.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xenbusb_probe_children(device_t dev)
{
device_t *kids;
struct xenbus_device_ivars *ivars;
int i, count;
if (device_get_children(dev, &kids, &count) == 0) {
for (i = 0; i < count; i++) {
if (device_get_state(kids[i]) != DS_NOTPRESENT) {
/*
* We already know about this one.
* Make sure it's still here.
*/
xenbusb_verify_device(dev, kids[i]);
continue;
}
if (device_probe_and_attach(kids[i])) {
/*
* Transition device to the closed state
* so the world knows that attachment will
* not occur.
*/
xenbus_set_state(kids[i], XenbusStateClosed);
/*
* Remove our record of this device.
* So long as it remains in the closed
* state in the XenStore, we will not find
* it again. The state will only change
* if the control domain actively reconfigures
* this device.
*/
xenbusb_delete_child(dev, kids[i]);
continue;
}
/*
* Augment default newbus provided dynamic sysctl
* variables with the standard ivar contents of
* XenBus devices.
*/
xenbusb_device_sysctl_init(kids[i]);
/*
* Now that we have a driver managing this device
* that can receive otherend state change events,
* hook up a watch for them.
*/
ivars = device_get_ivars(kids[i]);
xs_register_watch(&ivars->xd_otherend_watch);
}
free(kids, M_TEMP);
}
return (0);
}
/**
* \brief Task callback function to perform XenBus probe operations
* from a known safe context.
*
* \param arg The NewBus device_t representing the bus instance
* on which to perform probe processing.
* \param pending The number of times this task was queued before it could
* be run.
*/
static void
xenbusb_probe_children_cb(void *arg, int pending __unused)
{
device_t dev = (device_t)arg;
/*
* Hold Giant until the Giant-free NewBus changes are committed.
*/
mtx_lock(&Giant);
xenbusb_probe_children(dev);
mtx_unlock(&Giant);
}
/**
* \brief XenStore watch callback for the root node of the XenStore
* subtree representing a XenBus.
*
* This callback performs, or delegates to the xbs_probe_children task,
* all processing necessary to handle dynamic device arrival and departure
* events from a XenBus.
*
* \param watch The XenStore watch object associated with this callback.
* \param vec The XenStore watch event data.
* \param len The number of fields in the event data stream.
*/
static void
xenbusb_devices_changed(struct xs_watch *watch, const char **vec,
unsigned int len)
{
struct xenbusb_softc *xbs;
device_t dev;
char *node;
char *bus;
char *type;
char *id;
char *p;
u_int component;
xbs = (struct xenbusb_softc *)watch;
dev = xbs->xbs_dev;
if (len <= XS_WATCH_PATH) {
device_printf(dev, "xenbusb_devices_changed: "
"Short Event Data.\n");
return;
}
node = strdup(vec[XS_WATCH_PATH], M_XENBUS);
p = strchr(node, '/');
if (p == NULL)
goto out;
bus = node;
*p = 0;
type = p + 1;
p = strchr(type, '/');
if (p == NULL)
goto out;
*p++ = 0;
/*
* Extract the device ID. A device ID has one or more path
* components separated by the '/' character.
*
* e.g. "<frontend vm id>/<frontend dev id>" for backend devices.
*/
id = p;
for (component = 0; component < xbs->xbs_id_components; component++) {
p = strchr(p, '/');
if (p == NULL)
break;
p++;
}
if (p != NULL)
*p = 0;
if (*id != 0 && component >= xbs->xbs_id_components - 1) {
xenbusb_add_device(xbs->xbs_dev, type, id);
taskqueue_enqueue(taskqueue_thread, &xbs->xbs_probe_children);
}
out:
free(node, M_XENBUS);
}
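The tokenization above splits a watch path of the form &lt;bus&gt;/&lt;type&gt;/&lt;id&gt;, where the ID may itself contain up to xbs_id_components '/'-separated elements. A standalone user-space sketch of the same in-place parse (parse_device_path() is an illustrative name, not part of the driver):

```c
#include <string.h>

/*
 * Split "bus/type/id" in place.  The ID keeps up to id_components
 * '/'-separated elements, mirroring xenbusb_devices_changed().
 * Returns 0 on success, -1 if the path is too short.
 */
static int
parse_device_path(char *node, unsigned int id_components,
    char **bus, char **type, char **id)
{
	char *p;
	unsigned int component;

	p = strchr(node, '/');
	if (p == NULL)
		return (-1);
	*bus = node;
	*p = '\0';
	*type = p + 1;
	p = strchr(*type, '/');
	if (p == NULL)
		return (-1);
	*p++ = '\0';
	*id = p;
	for (component = 0; component < id_components; component++) {
		p = strchr(p, '/');
		if (p == NULL)
			break;
		p++;
	}
	if (p != NULL)
		*p = '\0';
	return (0);
}
```

For a backend path such as "backend/vbd/1/51712" with two ID components, this yields bus "backend", type "vbd", and ID "1/51712".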
/**
* \brief Interrupt configuration hook callback associated with xbs_attach_ch.
*
* Since interrupts are always functional at the time of XenBus configuration,
* there is nothing to be done when the callback occurs. This hook is only
* registered to hold up boot processing while XenBus devices come online.
*
* \param arg Unused configuration hook callback argument.
*/
static void
xenbusb_nop_confighook_cb(void *arg __unused)
{
}
/**
* \brief Decrement the number of XenBus child devices in the
* connecting state by one and release the xbs_attach_ch
* interrupt configuration hook if the connecting count
* drops to zero.
*
* \param xbs Softc of the XenBus bus whose connecting child
* count is being decremented.
*/
static void
xenbusb_release_confighook(struct xenbusb_softc *xbs)
{
mtx_lock(&xbs->xbs_lock);
KASSERT(xbs->xbs_connecting_children > 0,
("Connecting device count error\n"));
xbs->xbs_connecting_children--;
if (xbs->xbs_connecting_children == 0
&& (xbs->xbs_flags & XBS_ATTACH_CH_ACTIVE) != 0) {
xbs->xbs_flags &= ~XBS_ATTACH_CH_ACTIVE;
mtx_unlock(&xbs->xbs_lock);
config_intrhook_disestablish(&xbs->xbs_attach_ch);
} else {
mtx_unlock(&xbs->xbs_lock);
}
}
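The count-to-zero gate used above can be shown in isolation: decrement a connecting-children counter under a lock and drop the boot hold exactly once, when the count reaches zero while the hook is still active. A single-threaded sketch (all names are illustrative; the mutex is elided):

```c
#include <assert.h>
#include <stdbool.h>

struct boot_gate {
	unsigned int connecting;   /* children still connecting */
	bool         hook_active;  /* boot hold currently in place */
	bool         released;     /* true once the hold is dropped */
};

/* Drop one reference; release the boot hold on the last one. */
static void
gate_release_one(struct boot_gate *g)
{
	assert(g->connecting > 0);
	g->connecting--;
	if (g->connecting == 0 && g->hook_active) {
		g->hook_active = false;
		g->released = true;  /* config_intrhook_disestablish() analog */
	}
}
```

Because the flag is cleared before the hold is dropped, a second caller reaching zero concurrently cannot release the hook twice.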
/*--------------------------- Public Functions -------------------------------*/
/*--------- API comments for these methods can be found in xenbusb.h ---------*/
void
xenbusb_identify(driver_t *driver, device_t parent)
{
/*
* A single instance of each bus type for which we have a driver
* is always present in a system operating under Xen.
*/
BUS_ADD_CHILD(parent, 0, driver->name, 0);
}
int
xenbusb_add_device(device_t dev, const char *type, const char *id)
{
struct xenbusb_softc *xbs;
struct sbuf *devpath_sbuf;
char *devpath;
struct xenbus_device_ivars *ivars;
int error;
xbs = device_get_softc(dev);
devpath_sbuf = sbuf_new_auto();
sbuf_printf(devpath_sbuf, "%s/%s/%s", xbs->xbs_node, type, id);
sbuf_finish(devpath_sbuf);
devpath = sbuf_data(devpath_sbuf);
ivars = malloc(sizeof(*ivars), M_XENBUS, M_ZERO|M_WAITOK);
error = ENXIO;
if (xs_exists(XST_NIL, devpath, "") != 0) {
device_t child;
enum xenbus_state state;
char *statepath;
child = xenbusb_device_exists(dev, devpath);
if (child != NULL) {
/*
* We are already tracking this node
*/
error = 0;
goto out;
}
state = xenbus_read_driver_state(devpath);
if (state != XenbusStateInitialising) {
/*
* Device is not new, so ignore it. This can
* happen if a device is going away after
* switching to Closed.
*/
printf("xenbusb_add_device: Device %s ignored. "
"State %d\n", devpath, state);
error = 0;
goto out;
}
sx_init(&ivars->xd_lock, "xdlock");
ivars->xd_flags = XDF_CONNECTING;
ivars->xd_node = strdup(devpath, M_XENBUS);
ivars->xd_type = strdup(type, M_XENBUS);
ivars->xd_state = XenbusStateInitialising;
error = XENBUSB_GET_OTHEREND_NODE(dev, ivars);
if (error) {
printf("xenbusb_add_device: %s no otherend id\n",
devpath);
goto out;
}
statepath = malloc(strlen(ivars->xd_otherend_path)
+ strlen("/state") + 1, M_XENBUS, M_WAITOK);
sprintf(statepath, "%s/state", ivars->xd_otherend_path);
ivars->xd_otherend_watch.node = statepath;
ivars->xd_otherend_watch.callback = xenbusb_otherend_changed;
mtx_lock(&xbs->xbs_lock);
xbs->xbs_connecting_children++;
mtx_unlock(&xbs->xbs_lock);
child = device_add_child(dev, NULL, -1);
ivars->xd_dev = child;
device_set_ivars(child, ivars);
}
out:
sbuf_delete(devpath_sbuf);
if (error != 0)
xenbusb_free_child_ivars(ivars);
return (error);
}
int
xenbusb_attach(device_t dev, char *bus_node, u_int id_components)
{
struct xenbusb_softc *xbs;
xbs = device_get_softc(dev);
mtx_init(&xbs->xbs_lock, "xenbusb softc lock", NULL, MTX_DEF);
xbs->xbs_node = bus_node;
xbs->xbs_id_components = id_components;
xbs->xbs_dev = dev;
/*
* Since XenBus busses are attached to the XenStore, and
* the XenStore does not probe children until after interrupt
* services are available, this config hook is used solely
* to ensure that the remainder of the boot process (e.g.
* mount root) is deferred until child devices are adequately
* probed. We unblock the boot process as soon as the
* connecting child count in our softc goes to 0.
*/
xbs->xbs_attach_ch.ich_func = xenbusb_nop_confighook_cb;
xbs->xbs_attach_ch.ich_arg = dev;
config_intrhook_establish(&xbs->xbs_attach_ch);
xbs->xbs_flags |= XBS_ATTACH_CH_ACTIVE;
xbs->xbs_connecting_children = 1;
/*
* The subtree for this bus type may not yet exist
* causing initial enumeration to fail. We still
* want to return success from our attach though
* so that we are ready to handle devices for this
* bus when they are dynamically attached to us
* by a Xen management action.
*/
(void)xenbusb_enumerate_bus(xbs);
xenbusb_probe_children(dev);
xbs->xbs_device_watch.node = bus_node;
xbs->xbs_device_watch.callback = xenbusb_devices_changed;
TASK_INIT(&xbs->xbs_probe_children, 0, xenbusb_probe_children_cb, dev);
xs_register_watch(&xbs->xbs_device_watch);
xenbusb_release_confighook(xbs);
return (0);
}
int
xenbusb_resume(device_t dev)
{
device_t *kids;
struct xenbus_device_ivars *ivars;
int i, count, error;
char *statepath;
/*
* We must re-examine each device and find the new path for
* its backend.
*/
if (device_get_children(dev, &kids, &count) == 0) {
for (i = 0; i < count; i++) {
if (device_get_state(kids[i]) == DS_NOTPRESENT)
continue;
ivars = device_get_ivars(kids[i]);
xs_unregister_watch(&ivars->xd_otherend_watch);
ivars->xd_state = XenbusStateInitialising;
/*
* Find the new backend details and
* re-register our watch.
*/
error = XENBUSB_GET_OTHEREND_NODE(dev, ivars);
if (error)
return (error);
DEVICE_RESUME(kids[i]);
statepath = malloc(strlen(ivars->xd_otherend_path)
+ strlen("/state") + 1, M_XENBUS, M_WAITOK);
sprintf(statepath, "%s/state", ivars->xd_otherend_path);
free(ivars->xd_otherend_watch.node, M_XENBUS);
ivars->xd_otherend_watch.node = statepath;
xs_register_watch(&ivars->xd_otherend_watch);
#if 0
/*
* Can't do this yet since we are running in
* the xenwatch thread and if we sleep here,
* we will stop delivering watch notifications
* and the device will never come back online.
*/
sx_xlock(&ivars->xd_lock);
while (ivars->xd_state != XenbusStateClosed
&& ivars->xd_state != XenbusStateConnected)
sx_sleep(&ivars->xd_state, &ivars->xd_lock,
0, "xdresume", 0);
sx_xunlock(&ivars->xd_lock);
#endif
}
free(kids, M_TEMP);
}
return (0);
}
int
xenbusb_print_child(device_t dev, device_t child)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
int retval = 0;
retval += bus_print_child_header(dev, child);
retval += printf(" at %s", ivars->xd_node);
retval += bus_print_child_footer(dev, child);
return (retval);
}
int
xenbusb_read_ivar(device_t dev, device_t child, int index, uintptr_t *result)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
switch (index) {
case XENBUS_IVAR_NODE:
*result = (uintptr_t) ivars->xd_node;
return (0);
case XENBUS_IVAR_TYPE:
*result = (uintptr_t) ivars->xd_type;
return (0);
case XENBUS_IVAR_STATE:
*result = (uintptr_t) ivars->xd_state;
return (0);
case XENBUS_IVAR_OTHEREND_ID:
*result = (uintptr_t) ivars->xd_otherend_id;
return (0);
case XENBUS_IVAR_OTHEREND_PATH:
*result = (uintptr_t) ivars->xd_otherend_path;
return (0);
}
return (ENOENT);
}
int
xenbusb_write_ivar(device_t dev, device_t child, int index, uintptr_t value)
{
struct xenbus_device_ivars *ivars = device_get_ivars(child);
enum xenbus_state newstate;
int currstate;
switch (index) {
case XENBUS_IVAR_STATE:
{
int error;
newstate = (enum xenbus_state) value;
sx_xlock(&ivars->xd_lock);
if (ivars->xd_state == newstate) {
error = 0;
goto out;
}
error = xs_scanf(XST_NIL, ivars->xd_node, "state",
NULL, "%d", &currstate);
if (error)
goto out;
do {
error = xs_printf(XST_NIL, ivars->xd_node, "state",
"%d", newstate);
} while (error == EAGAIN);
if (error) {
/*
* Avoid looping through xenbus_dev_fatal()
* which calls xenbus_write_ivar to set the
* state to closing.
*/
if (newstate != XenbusStateClosing)
xenbus_dev_fatal(dev, error,
"writing new state");
goto out;
}
ivars->xd_state = newstate;
if ((ivars->xd_flags & XDF_CONNECTING) != 0
&& (newstate == XenbusStateClosed
|| newstate == XenbusStateConnected)) {
struct xenbusb_softc *xbs;
ivars->xd_flags &= ~XDF_CONNECTING;
xbs = device_get_softc(dev);
xenbusb_release_confighook(xbs);
}
wakeup(&ivars->xd_state);
out:
sx_xunlock(&ivars->xd_lock);
return (error);
}
case XENBUS_IVAR_NODE:
case XENBUS_IVAR_TYPE:
case XENBUS_IVAR_OTHEREND_ID:
case XENBUS_IVAR_OTHEREND_PATH:
/*
* These variables are read-only.
*/
return (EINVAL);
}
return (ENOENT);
}

sys/xen/xenbus/xenbusb.h:
/*-
* Core definitions and data structures shareable across OS platforms.
*
* Copyright (c) 2010 Spectra Logic Corporation
* Copyright (C) 2008 Doug Rabson
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*
* $FreeBSD$
*/
#ifndef _XEN_XENBUS_XENBUSB_H
#define _XEN_XENBUS_XENBUSB_H
/**
* \file xenbusb.h
*
* Datastructures and function declarations for use in implementing
* bus attachments (e.g. frontend and backend device busses) for XenBus.
*/
#include "xenbusb_if.h"
/**
* Enumeration of state flag values for the xbs_flags field of
* the xenbusb_softc structure.
*/
typedef enum {
/** The xbs_attach_ch interrupt configuration hook is established. */
XBS_ATTACH_CH_ACTIVE = 0x01
} xenbusb_softc_flag;
/**
* \brief Container for all state needed to manage a Xenbus Bus
* attachment.
*/
struct xenbusb_softc {
/**
* XenStore watch used to monitor the subtree of the
* XenStore where devices for this bus attachment arrive
* and depart.
*
* \note This field must be the first in the softc structure
* so that a simple cast can be used to retrieve the
* softc from within a XenStore watch event callback.
*/
struct xs_watch xbs_device_watch;
/** Mutex used to protect fields of the xenbusb_softc. */
struct mtx xbs_lock;
/** State flags. */
xenbusb_softc_flag xbs_flags;
/**
* A dedicated task for processing child arrival and
* departure events.
*/
struct task xbs_probe_children;
/**
* Config Hook used to block boot processing until
* XenBus devices complete their connection processing
* with other VMs.
*/
struct intr_config_hook xbs_attach_ch;
/**
* The number of children for this bus that are still
* in the connecting (to other VMs) state. This variable
* is used to determine when to release xbs_attach_ch.
*/
u_int xbs_connecting_children;
/** The NewBus device_t for this bus attachment. */
device_t xbs_dev;
/**
* The VM relative path to the XenStore subtree this
* bus attachment manages.
*/
const char *xbs_node;
/**
* The number of path components (strings separated by the '/'
* character) that make up the device ID on this bus.
*/
u_int xbs_id_components;
};
/**
* Enumeration of state flag values for the xd_flags field of
* the xenbus_device_ivars structure.
*/
typedef enum {
/**
* This device is contributing to the xbs_connecting_children
* count of its parent bus.
*/
XDF_CONNECTING = 0x01
} xenbus_dev_flag;
/** Instance variables for devices on a XenBus bus. */
struct xenbus_device_ivars {
/**
* XenStore watch used to monitor the subtree of the XenStore
* where information about the other end of the split Xen device
* this device instance represents is published.
*
* \note This field must be the first in the instance
* variable structure so that a simple cast can be
* used to retrieve ivar data from within a XenStore
* watch event callback.
*/
struct xs_watch xd_otherend_watch;
/** Sleepable lock used to protect instance data. */
struct sx xd_lock;
/** State flags. */
xenbus_dev_flag xd_flags;
/** The NewBus device_t for this XenBus device instance. */
device_t xd_dev;
/**
* The VM relative path to the XenStore subtree representing
* this VM's half of this device.
*/
char *xd_node;
/** XenBus device type ("vbd", "vif", etc.). */
char *xd_type;
/**
* Cached version of <xd_node>/state node in the XenStore.
*/
enum xenbus_state xd_state;
/** The VM identifier of the other end of this split device. */
int xd_otherend_id;
/**
* The path to the subtree of the XenStore where information
* about the other end of this split device instance is stored.
*/
char *xd_otherend_path;
};
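Both xbs_device_watch and xd_otherend_watch depend on being the first member of their containing structure, so a watch callback can recover the softc or ivars with a simple cast of the watch pointer. The idiom, sketched with illustrative types:

```c
#include <stddef.h>

struct watch {
	const char *node;
};

struct softc {
	struct watch w;    /* must stay the first member for the cast */
	int          unit;
};

/* Callback-style recovery of the containing softc from the watch. */
static struct softc *
softc_from_watch(struct watch *wp)
{
	return ((struct softc *)wp);
}
```

This only works while offsetof(struct softc, w) is zero; moving the watch member silently breaks every callback that performs the cast.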
/**
* \brief Identify instances of this device type in the system.
*
* \param driver The driver performing this identify action.
* \param parent The NewBus parent device for any devices this method adds.
*/
void xenbusb_identify(driver_t *driver, device_t parent);
/**
* \brief Perform common XenBus bus attach processing.
*
* \param dev The NewBus device representing this XenBus bus.
* \param bus_node The XenStore path to the XenStore subtree for
* this XenBus bus.
* \param id_components The number of '/' separated path components that
* make up a unique device ID on this XenBus bus.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* Initializes the softc for this bus, installs an interrupt driven
* configuration hook to block boot processing until XenBus devices fully
* configure, performs an initial probe/attach of the bus, and registers
* a XenStore watch so we are notified when the bus topology changes.
*/
int xenbusb_attach(device_t dev, char *bus_node, u_int id_components);
/**
* \brief Perform common XenBus bus resume handling.
*
* \param dev The NewBus device representing this XenBus bus.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xenbusb_resume(device_t dev);
/**
* \brief Pretty-prints information about a child of a XenBus bus.
*
* \param dev The NewBus device representing this XenBus bus.
* \param child The NewBus device representing a child of dev's XenBus bus.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xenbusb_print_child(device_t dev, device_t child);
/**
* \brief Common XenBus child instance variable read access method.
*
* \param dev The NewBus device representing this XenBus bus.
* \param child The NewBus device representing a child of dev's XenBus bus.
* \param index The index of the instance variable to access.
* \param result The value of the instance variable accessed.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xenbusb_read_ivar(device_t dev, device_t child, int index,
uintptr_t *result);
/**
* \brief Common XenBus child instance variable write access method.
*
* \param dev The NewBus device representing this XenBus bus.
* \param child The NewBus device representing a child of dev's XenBus bus.
* \param index The index of the instance variable to access.
* \param value The new value to set in the instance variable accessed.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xenbusb_write_ivar(device_t dev, device_t child, int index,
uintptr_t value);
/**
* \brief Attempt to add a XenBus device instance to this XenBus bus.
*
* \param dev The NewBus device representing this XenBus bus.
* \param type The device type being added (e.g. "vbd", "vif").
* \param id The device ID for this device.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure. Failure indicates that either the
* path to this device no longer exists or insufficient
* information exists in the XenStore to create a new
* device.
*
* If successful, this routine will add a device_t with instance
* variable storage to the NewBus device topology. Probe/Attach
* processing is not performed by this routine, but must be scheduled
* via the xbs_probe_children task. This separation of responsibilities
* is required to avoid hanging up the XenStore event delivery thread
* with our probe/attach work in the event a device is added via
* a callback from the XenStore.
*/
int xenbusb_add_device(device_t dev, const char *type, const char *id);
#endif /* _XEN_XENBUS_XENBUSB_H */

sys/xen/xenbus/xenbusb_back.c:
/******************************************************************************
* Talks to Xen Store to figure out what devices we have.
*
* Copyright (C) 2009, 2010 Spectra Logic Corporation
* Copyright (C) 2008 Doug Rabson
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 Mike Wray, Hewlett-Packard
* Copyright (C) 2005 XenSource Ltd
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
/**
* \file xenbusb_back.c
*
* XenBus management of the NewBus bus containing the backend instances of
* Xen split devices.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/sbuf.h>
#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/systm.h>
#include <sys/sx.h>
#include <sys/taskqueue.h>
#include <machine/xen/xen-os.h>
#include <machine/stdarg.h>
#include <xen/gnttab.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenbus/xenbusb.h>
/*------------------ Private Device Attachment Functions --------------------*/
/**
* \brief Probe for the existence of the XenBus back bus.
*
* \param dev NewBus device_t for this XenBus back bus instance.
*
* \return Always returns 0 indicating success.
*/
static int
xenbusb_back_probe(device_t dev)
{
device_set_desc(dev, "Xen Backend Devices");
return (0);
}
/**
* \brief Attach the XenBus back bus.
*
* \param dev NewBus device_t for this XenBus back bus instance.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xenbusb_back_attach(device_t dev)
{
struct xenbusb_softc *xbs;
int error;
xbs = device_get_softc(dev);
error = xenbusb_attach(dev, "backend", /*id_components*/2);
/*
* Backend devices operate to serve other domains,
* so there is no need to hold up boot processing
* while connections to foreign domains are made.
*/
mtx_lock(&xbs->xbs_lock);
if ((xbs->xbs_flags & XBS_ATTACH_CH_ACTIVE) != 0) {
xbs->xbs_flags &= ~XBS_ATTACH_CH_ACTIVE;
mtx_unlock(&xbs->xbs_lock);
config_intrhook_disestablish(&xbs->xbs_attach_ch);
} else {
mtx_unlock(&xbs->xbs_lock);
}
return (error);
}
/**
* \brief Enumerate all devices of the given type on this bus.
*
* \param dev NewBus device_t for this XenBus backend bus instance.
* \param type String indicating the device sub-tree (e.g. "vfb", "vif")
* to enumerate.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* Devices that are found are entered into the NewBus hierarchy via
* xenbusb_add_device(). xenbusb_add_device() ignores duplicate
* devices, so it can be called unconditionally for any device
* found in the XenStore.
*
* The backend XenStore hierarchy has the following format:
*
* backend/<device type>/<frontend vm id>/<device id>
*
*/
static int
xenbusb_back_enumerate_type(device_t dev, const char *type)
{
struct xenbusb_softc *xbs;
const char **vms;
u_int vm_idx;
u_int vm_count;
int error;
xbs = device_get_softc(dev);
error = xs_directory(XST_NIL, xbs->xbs_node, type, &vm_count, &vms);
if (error)
return (error);
for (vm_idx = 0; vm_idx < vm_count; vm_idx++) {
struct sbuf *vm_path;
const char *vm;
const char **devs;
u_int dev_idx;
u_int dev_count;
vm = vms[vm_idx];
vm_path = xs_join(type, vm);
error = xs_directory(XST_NIL, xbs->xbs_node, sbuf_data(vm_path),
&dev_count, &devs);
sbuf_delete(vm_path);
if (error)
break;
for (dev_idx = 0; dev_idx < dev_count; dev_idx++) {
const char *dev_num;
struct sbuf *id;
dev_num = devs[dev_idx];
id = xs_join(vm, dev_num);
xenbusb_add_device(dev, type, sbuf_data(id));
sbuf_delete(id);
}
free(devs, M_XENSTORE);
}
free(vms, M_XENSTORE);
return (0);
}
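The device IDs produced by the loop above are '/'-joined pairs of frontend VM ID and device number, matching the backend/&lt;type&gt;/&lt;frontend vm id&gt;/&lt;device id&gt; layout. A trivial sketch of the join step (path_join() is an illustrative stand-in for xs_join(), which is assumed to concatenate components with '/'):

```c
#include <stdio.h>

/* Join two XenStore path components with a '/' separator. */
static void
path_join(char *buf, size_t len, const char *a, const char *b)
{
	snprintf(buf, len, "%s/%s", a, b);
}
```

For type "vbd", frontend VM 1, device 51712, the VM directory path is "vbd/1" and the enumerated device ID is "1/51712".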
/**
* \brief Determine and store the XenStore path for the other end of
* a split device whose local end is represented by ivars.
*
* \param dev NewBus device_t for this XenBus backend bus instance.
* \param ivars Instance variables from the XenBus child device for
* which to perform this function.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* If successful, the xd_otherend_path field of the child's instance
* variables will be updated.
*
*/
static int
xenbusb_back_get_otherend_node(device_t dev, struct xenbus_device_ivars *ivars)
{
char *otherend_path;
int error;
if (ivars->xd_otherend_path != NULL) {
free(ivars->xd_otherend_path, M_XENBUS);
ivars->xd_otherend_path = NULL;
}
error = xs_gather(XST_NIL, ivars->xd_node,
"frontend-id", "%i", &ivars->xd_otherend_id,
"frontend", NULL, &otherend_path,
NULL);
if (error == 0) {
ivars->xd_otherend_path = strdup(otherend_path, M_XENBUS);
free(otherend_path, M_XENSTORE);
}
return (error);
}
/**
* \brief Backend XenBus child instance variable write access method.
*
* \param dev The NewBus device representing this XenBus bus.
* \param child The NewBus device representing a child of dev's XenBus bus.
* \param index The index of the instance variable to access.
* \param value The new value to set in the instance variable accessed.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* xenbusb_back overrides this method so that it can trap state transitions
* of local backend devices and clean up their XenStore entries as necessary
* during device instance teardown.
*/
static int
xenbusb_back_write_ivar(device_t dev, device_t child, int index,
uintptr_t value)
{
int error;
error = xenbusb_write_ivar(dev, child, index, value);
if (index == XENBUS_IVAR_STATE
&& (enum xenbus_state)value == XenbusStateClosed
&& xenbus_dev_is_online(child) == 0) {
/*
* Cleanup the hotplug entry in the XenStore if
* present. The control domain expects any userland
* component associated with this device to destroy
* this node in order to signify it is safe to
* teardown the device. However, not all backends
* rely on userland components, and those that
* do should either use a communication channel
* other than the XenStore, or ensure the hotplug
* data is already cleaned up.
*
* This removal ensures that no matter what path
* is taken to mark a back-end closed, the control
* domain will understand that it is closed.
*/
xs_rm(XST_NIL, xenbus_get_node(child), "hotplug-status");
}
return (error);
}
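The method-override pattern here (chain to the generic xenbusb_write_ivar(), then act on a specific state transition) can be sketched generically; every name below is illustrative, and a counter stands in for the xs_rm() cleanup:

```c
enum dev_state { DS_INIT, DS_CONNECTED, DS_CLOSED };

static int cleanup_calls;  /* stands in for removing hotplug-status */

/* Generic write method: just record the new state. */
static int
base_write_state(enum dev_state *cur, enum dev_state new_state)
{
	*cur = new_state;
	return (0);
}

/* Override: chain to the base method, then clean up on close. */
static int
back_write_state(enum dev_state *cur, enum dev_state new_state, int online)
{
	int error = base_write_state(cur, new_state);

	if (new_state == DS_CLOSED && !online)
		cleanup_calls++;
	return (error);
}
```

As in the driver, the cleanup runs on the transition to the closed state for an offline device, regardless of which code path triggered the transition.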
/*-------------------- Private Device Attachment Data -----------------------*/
static device_method_t xenbusb_back_methods[] = {
/* Device interface */
DEVMETHOD(device_identify, xenbusb_identify),
DEVMETHOD(device_probe, xenbusb_back_probe),
DEVMETHOD(device_attach, xenbusb_back_attach),
DEVMETHOD(device_detach, bus_generic_detach),
DEVMETHOD(device_shutdown, bus_generic_shutdown),
DEVMETHOD(device_suspend, bus_generic_suspend),
DEVMETHOD(device_resume, bus_generic_resume),
/* Bus Interface */
DEVMETHOD(bus_print_child, xenbusb_print_child),
DEVMETHOD(bus_read_ivar, xenbusb_read_ivar),
DEVMETHOD(bus_write_ivar, xenbusb_back_write_ivar),
DEVMETHOD(bus_alloc_resource, bus_generic_alloc_resource),
DEVMETHOD(bus_release_resource, bus_generic_release_resource),
DEVMETHOD(bus_activate_resource, bus_generic_activate_resource),
DEVMETHOD(bus_deactivate_resource, bus_generic_deactivate_resource),
/* XenBus Bus Interface */
DEVMETHOD(xenbusb_enumerate_type, xenbusb_back_enumerate_type),
DEVMETHOD(xenbusb_get_otherend_node, xenbusb_back_get_otherend_node),
{ 0, 0 }
};
DEFINE_CLASS_0(xenbusb_back, xenbusb_back_driver, xenbusb_back_methods,
sizeof(struct xenbusb_softc));
devclass_t xenbusb_back_devclass;
DRIVER_MODULE(xenbusb_back, xenstore, xenbusb_back_driver,
xenbusb_back_devclass, 0, 0);

sys/xen/xenbus/xenbusb_front.c:
/******************************************************************************
* Talks to Xen Store to figure out what devices we have.
*
* Copyright (C) 2009, 2010 Spectra Logic Corporation
* Copyright (C) 2008 Doug Rabson
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 Mike Wray, Hewlett-Packard
* Copyright (C) 2005 XenSource Ltd
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*/
/**
* \file xenbusb_front.c
*
* XenBus management of the NewBus bus containing the frontend instances of
* Xen split devices.
*/
#include <sys/cdefs.h>
__FBSDID("$FreeBSD$");
#include <sys/param.h>
#include <sys/bus.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/malloc.h>
#include <sys/module.h>
#include <sys/sbuf.h>
#include <sys/sysctl.h>
#include <sys/syslog.h>
#include <sys/systm.h>
#include <sys/sx.h>
#include <sys/taskqueue.h>
#include <machine/xen/xen-os.h>
#include <machine/stdarg.h>
#include <xen/gnttab.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenbus/xenbusb.h>
/*------------------ Private Device Attachment Functions --------------------*/
/**
* \brief Probe for the existence of the XenBus front bus.
*
* \param dev NewBus device_t for this XenBus front bus instance.
*
* \return Always returns 0 indicating success.
*/
static int
xenbusb_front_probe(device_t dev)
{
device_set_desc(dev, "Xen Frontend Devices");
return (0);
}
/**
* \brief Attach the XenBus front bus.
*
* \param dev NewBus device_t for this XenBus front bus instance.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xenbusb_front_attach(device_t dev)
{
return (xenbusb_attach(dev, "device", /*id_components*/1));
}
/**
* \brief Enumerate all devices of the given type on this bus.
*
* \param dev NewBus device_t for this XenBus front bus instance.
* \param type String indicating the device sub-tree (e.g. "vfb", "vif")
* to enumerate.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* Devices that are found are entered into the NewBus hierarchy via
* xenbusb_add_device(). xenbusb_add_device() ignores duplicate
* devices, so it can be called unconditionally for any device
* found in the XenStore.
*/
static int
xenbusb_front_enumerate_type(device_t dev, const char *type)
{
struct xenbusb_softc *xbs;
const char **dir;
unsigned int i, count;
int error;
xbs = device_get_softc(dev);
error = xs_directory(XST_NIL, xbs->xbs_node, type, &count, &dir);
if (error)
return (error);
for (i = 0; i < count; i++)
xenbusb_add_device(dev, type, dir[i]);
free(dir, M_XENSTORE);
return (0);
}
/**
* \brief Determine and store the XenStore path for the other end of
* a split device whose local end is represented by ivars.
*
* If successful, the xd_otherend_path field of the child's instance
* variables will be updated.
*
* \param dev NewBus device_t for this XenBus front bus instance.
* \param ivars Instance variables from the XenBus child device for
* which to perform this function.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
static int
xenbusb_front_get_otherend_node(device_t dev, struct xenbus_device_ivars *ivars)
{
char *otherend_path;
int error;
if (ivars->xd_otherend_path != NULL) {
free(ivars->xd_otherend_path, M_XENBUS);
ivars->xd_otherend_path = NULL;
}
error = xs_gather(XST_NIL, ivars->xd_node,
"backend-id", "%i", &ivars->xd_otherend_id,
"backend", NULL, &otherend_path,
NULL);
if (error == 0) {
ivars->xd_otherend_path = strdup(otherend_path, M_XENBUS);
free(otherend_path, M_XENSTORE);
}
return (error);
}
/*-------------------- Private Device Attachment Data -----------------------*/
static device_method_t xenbusb_front_methods[] = {
/* Device interface */
DEVMETHOD(device_identify, xenbusb_identify),
DEVMETHOD(device_probe, xenbusb_front_probe),
DEVMETHOD(device_attach, xenbusb_front_attach),
DEVMETHOD(device_detach, bus_generic_detach),
DEVMETHOD(device_shutdown, bus_generic_shutdown),
DEVMETHOD(device_suspend, bus_generic_suspend),
DEVMETHOD(device_resume, bus_generic_resume),
/* Bus Interface */
DEVMETHOD(bus_print_child, xenbusb_print_child),
DEVMETHOD(bus_read_ivar, xenbusb_read_ivar),
DEVMETHOD(bus_write_ivar, xenbusb_write_ivar),
DEVMETHOD(bus_alloc_resource, bus_generic_alloc_resource),
DEVMETHOD(bus_release_resource, bus_generic_release_resource),
DEVMETHOD(bus_activate_resource, bus_generic_activate_resource),
DEVMETHOD(bus_deactivate_resource, bus_generic_deactivate_resource),
/* XenBus Bus Interface */
DEVMETHOD(xenbusb_enumerate_type, xenbusb_front_enumerate_type),
DEVMETHOD(xenbusb_get_otherend_node, xenbusb_front_get_otherend_node),
{ 0, 0 }
};
DEFINE_CLASS_0(xenbusb_front, xenbusb_front_driver, xenbusb_front_methods,
sizeof(struct xenbusb_softc));
devclass_t xenbusb_front_devclass;
DRIVER_MODULE(xenbusb_front, xenstore, xenbusb_front_driver,
xenbusb_front_devclass, 0, 0);

sys/xen/xenbus/xenbusb_if.m (new file):

@@ -0,0 +1,78 @@
#-
# Copyright (c) 2010 Spectra Logic Corporation
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions, and the following disclaimer,
# without modification.
# 2. Redistributions in binary form must reproduce at minimum a disclaimer
# substantially similar to the "NO WARRANTY" disclaimer below
# ("Disclaimer") and any redistribution must be conditioned upon
# including a substantially similar Disclaimer requirement for further
# binary redistribution.
#
# NO WARRANTY
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
# DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
# OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
# HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
# STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
# IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGES.
#
# $FreeBSD$
#
#include <sys/bus.h>
HEADER {
struct xenbus_device_ivars;
}
INTERFACE xenbusb;
/**
* \brief Enumerate all devices of the given type on this bus.
*
* \param _dev NewBus device_t for this XenBus (front/back) bus instance.
* \param _type String indicating the device sub-tree (e.g. "vfb", "vif")
* to enumerate.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* Devices that are found should be entered into the NewBus hierarchy via
* xenbusb_add_device(). xenbusb_add_device() ignores devices that have
* already been detected, so it can be called unconditionally for any
* device found in the XenStore.
*/
METHOD int enumerate_type {
device_t _dev;
const char *_type;
};
/**
* \brief Determine and store the XenStore path for the other end of
* a split device whose local end is represented by ivars.
*
* If successful, the xd_otherend_path field of the child's instance
* variables must be updated.
*
* \param _dev NewBus device_t for this XenBus (front/back) bus instance.
* \param _ivars Instance variables from the XenBus child device for
* which to perform this function.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
METHOD int get_otherend_node {
device_t _dev;
struct xenbus_device_ivars *_ivars;
};

sys/xen/xenbus/xenbusvar.h:

@@ -1,8 +1,4 @@
/******************************************************************************
* xenbus.h
*
* Talks to Xen Store to figure out what devices we have.
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 XenSource Ltd.
*
@@ -30,46 +26,64 @@
* $FreeBSD$
*/
/**
* \file xenbusvar.h
*
* \brief Data structures and function declarations for use by device
* drivers operating on the XenBus.
*/
#ifndef _XEN_XENBUS_XENBUSVAR_H
#define _XEN_XENBUS_XENBUSVAR_H
#include <sys/queue.h>
#include <sys/bus.h>
#include <sys/eventhandler.h>
#include <sys/malloc.h>
#include <sys/sbuf.h>
#include <machine/stdarg.h>
#include <machine/xen/xen-os.h>
#include <xen/interface/grant_table.h>
#include <xen/interface/io/xenbus.h>
#include <xen/interface/io/xs_wire.h>
#include <xen/xenstore/xenstorevar.h>
#include "xenbus_if.h"
/* XenBus allocations including XenStore data returned to clients. */
MALLOC_DECLARE(M_XENBUS);
enum {
/*
/**
* Path of this device node.
*/
XENBUS_IVAR_NODE,
/*
/**
* The device type (e.g. vif, vbd).
*/
XENBUS_IVAR_TYPE,
/*
/**
* The state of this device (not the otherend's state).
*/
XENBUS_IVAR_STATE,
/*
/**
* Domain ID of the other end device.
*/
XENBUS_IVAR_OTHEREND_ID,
/*
/**
* Path of the other end device.
*/
XENBUS_IVAR_OTHEREND_PATH
};
/*
/**
* Simplified accessors for xenbus devices
*/
#define XENBUS_ACCESSOR(var, ivar, type) \
@@ -81,179 +95,184 @@ XENBUS_ACCESSOR(state, STATE, enum xenbus_state)
XENBUS_ACCESSOR(otherend_id, OTHEREND_ID, int)
XENBUS_ACCESSOR(otherend_path, OTHEREND_PATH, const char *)
/* Register callback to watch this node. */
struct xenbus_watch
{
LIST_ENTRY(xenbus_watch) list;
/* Path being watched. */
char *node;
/* Callback (executed in a process context with no locks held). */
void (*callback)(struct xenbus_watch *,
const char **vec, unsigned int len);
};
typedef int (*xenstore_event_handler_t)(void *);
struct xenbus_transaction
{
uint32_t id;
};
#define XBT_NIL ((struct xenbus_transaction) { 0 })
int xenbus_directory(struct xenbus_transaction t, const char *dir,
const char *node, unsigned int *num, char ***result);
int xenbus_read(struct xenbus_transaction t, const char *dir,
const char *node, unsigned int *len, void **result);
int xenbus_write(struct xenbus_transaction t, const char *dir,
const char *node, const char *string);
int xenbus_mkdir(struct xenbus_transaction t, const char *dir,
const char *node);
int xenbus_exists(struct xenbus_transaction t, const char *dir,
const char *node);
int xenbus_rm(struct xenbus_transaction t, const char *dir, const char *node);
int xenbus_transaction_start(struct xenbus_transaction *t);
int xenbus_transaction_end(struct xenbus_transaction t, int abort);
/*
* Single read and scanf: returns errno or zero. If scancountp is
* non-null, the number of items scanned is returned in *scancountp.
*/
int xenbus_scanf(struct xenbus_transaction t,
const char *dir, const char *node, int *scancountp, const char *fmt, ...)
__attribute__((format(scanf, 5, 6)));
/* Single printf and write: returns errno or 0. */
int xenbus_printf(struct xenbus_transaction t,
const char *dir, const char *node, const char *fmt, ...)
__attribute__((format(printf, 4, 5)));
/*
* Generic read function: NULL-terminated triples of name,
* sprintf-style type string, and pointer. Returns 0 or errno.
*/
int xenbus_gather(struct xenbus_transaction t, const char *dir, ...);
/* notifier routines for when the XenStore comes up */
int register_xenstore_notifier(xenstore_event_handler_t func, void *arg, int priority);
#if 0
void unregister_xenstore_notifier();
#endif
int register_xenbus_watch(struct xenbus_watch *watch);
void unregister_xenbus_watch(struct xenbus_watch *watch);
void xs_suspend(void);
void xs_resume(void);
/* Used by xenbus_dev to borrow kernel's store connection. */
int xenbus_dev_request_and_reply(struct xsd_sockmsg *msg, void **result);
#if 0
#define XENBUS_IS_ERR_READ(str) ({ \
if (!IS_ERR(str) && strlen(str) == 0) { \
free(str, M_DEVBUF); \
str = ERR_PTR(-ERANGE); \
} \
IS_ERR(str); \
})
#endif
#define XENBUS_EXIST_ERR(err) ((err) == ENOENT || (err) == ERANGE)
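XENBUS_EXIST_ERR() groups the two errno values a XenStore lookup returns when a node is simply absent (ENOENT) or too small to hold a value (ERANGE), so drivers can treat "not there" differently from a hard failure. A minimal userland sketch of that pattern, where toy_read() is a hypothetical stand-in for xs_read():

```c
#include <errno.h>
#include <string.h>

#define XENBUS_EXIST_ERR(err) ((err) == ENOENT || (err) == ERANGE)

/*
 * Hypothetical stand-in for a XenStore read: an empty node name is
 * "absent", the name "bad" simulates a hard I/O failure.
 */
static int
toy_read(const char *node)
{
	if (node[0] == '\0')
		return (ENOENT);	/* node does not exist */
	if (strcmp(node, "bad") == 0)
		return (EIO);		/* real failure */
	return (0);
}

/*
 * Treat "node does not exist" as non-fatal, as drivers commonly do
 * when probing optional XenStore nodes (e.g. feature flags).
 */
int
toy_node_optional_read(const char *node)
{
	int error = toy_read(node);

	if (XENBUS_EXIST_ERR(error))
		return (0);	/* absent node: fall back to a default */
	return (error);
}
```
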
/**
* Register a watch on the given path, using the given xenbus_watch structure
* for storage, and the given callback function as the callback. Return 0 on
* success, or errno on error. On success, the given path will be saved as
* watch->node, and remains the caller's to free. On error, watch->node will
* be NULL, the device will switch to XenbusStateClosing, and the error will
* be saved in the store.
*/
int xenbus_watch_path(device_t dev, char *path,
struct xenbus_watch *watch,
void (*callback)(struct xenbus_watch *,
const char **, unsigned int));
/**
* Register a watch on the given path/path2, using the given xenbus_watch
* structure for storage, and the given callback function as the callback.
* Return 0 on success, or errno on error. On success, the watched path
* (path/path2) will be saved as watch->node, and becomes the caller's to
* kfree(). On error, watch->node will be NULL, so the caller has nothing to
* free, the device will switch to XenbusStateClosing, and the error will be
* saved in the store.
*/
int xenbus_watch_path2(device_t dev, const char *path,
const char *path2, struct xenbus_watch *watch,
void (*callback)(struct xenbus_watch *,
const char **, unsigned int));
/**
* Advertise in the store a change of the given driver to the given new_state.
* This is performed inside its own transaction. Return 0 on
* success, or errno on error. On error, the device will switch to
* XenbusStateClosing, and the error will be saved in the store.
*/
int xenbus_switch_state(device_t dev,
XenbusState new_state);
/**
* Grant access to the given ring_mfn to the peer of the given device.
* Return 0 on success, or errno on error. On error, the device will
* switch to XenbusStateClosing, and the error will be saved in the
* store. The grant ring reference is returned in *refp.
*/
int xenbus_grant_ring(device_t dev, unsigned long ring_mfn, int *refp);
/**
* Allocate an event channel for the given xenbus_device, assigning the newly
* created local port to *port. Return 0 on success, or errno on error. On
* error, the device will switch to XenbusStateClosing, and the error will be
* saved in the store.
*/
int xenbus_alloc_evtchn(device_t dev, int *port);
/**
* Free an existing event channel. Returns 0 on success or errno on error.
*/
int xenbus_free_evtchn(device_t dev, int port);
/**
* Return the state of the driver rooted at the given store path, or
* XenbusStateClosed if no state can be read.
* Return the state of a XenBus device.
*
* \param path The root XenStore path for the device.
*
* \return The current state of the device or XenbusStateClosed if no
* state can be read.
*/
XenbusState xenbus_read_driver_state(const char *path);
/***
* Report the given negative errno into the store, along with the given
* formatted message.
/**
* Initialize and register a watch on the given path (client supplied storage).
*
* \param dev The XenBus device requesting the watch service.
* \param path The XenStore path of the object to be watched. The
* storage for this string must be stable for the lifetime
* of the watch.
* \param watch The watch object to use for this request. This object
* must be stable for the lifetime of the watch.
* \param callback The function to call when XenStore objects at or below
* path are modified.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* \note On error, the device 'dev' will be switched to the XenbusStateClosing
* state and the returned error is saved in the per-device error node
* for dev in the XenStore.
*/
void xenbus_dev_error(device_t dev, int err, const char *fmt,
...);
int xenbus_watch_path(device_t dev, char *path,
struct xs_watch *watch,
xs_watch_cb_t *callback);
/***
* Equivalent to xenbus_dev_error(dev, err, fmt, args), followed by
* xenbus_switch_state(dev, NULL, XenbusStateClosing) to schedule an orderly
* closedown of this driver and its peer.
/**
* Initialize and register a watch at path/path2 in the XenStore.
*
* \param dev The XenBus device requesting the watch service.
* \param path The base XenStore path of the object to be watched.
* \param path2 The tail XenStore path of the object to be watched.
* \param watch The watch object to use for this request. This object
* must be stable for the lifetime of the watch.
* \param callback The function to call when XenStore objects at or below
* path are modified.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* \note On error, \a dev will be switched to the XenbusStateClosing
* state and the returned error is saved in the per-device error node
* for \a dev in the XenStore.
*
* Similar to xenbus_watch_path, however the storage for the path to the
* watched object is allocated from the heap and filled with "path '/' path2".
* Should a call to this function succeed, it is the caller's responsibility
* to free watch->node using the M_XENBUS malloc type.
*/
void xenbus_dev_fatal(device_t dev, int err, const char *fmt,
...);
int xenbus_watch_path2(device_t dev, const char *path,
const char *path2, struct xs_watch *watch,
xs_watch_cb_t *callback);
int xenbus_dev_init(void);
/**
* Grant access to the given ring_mfn to the peer of the given device.
*
* \param dev The device granting access to the ring page.
* \param ring_mfn The guest machine page number of the page to grant
* peer access rights.
* \param refp[out] The grant reference for the page.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* A successful call to xenbus_grant_ring should be paired with a call
* to gnttab_end_foreign_access() when foreign access to this page is no
* longer required.
*
* \note On error, \a dev will be switched to the XenbusStateClosing
* state and the returned error is saved in the per-device error node
* for \a dev in the XenStore.
*/
int xenbus_grant_ring(device_t dev, unsigned long ring_mfn, grant_ref_t *refp);
/**
* Allocate an event channel for the given XenBus device.
*
* \param dev The device for which to allocate the event channel.
* \param port[out] The port identifier for the allocated event channel.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* A successfully allocated event channel should be free'd using
* xenbus_free_evtchn().
*
* \note On error, \a dev will be switched to the XenbusStateClosing
* state and the returned error is saved in the per-device error node
* for \a dev in the XenStore.
*/
int xenbus_alloc_evtchn(device_t dev, evtchn_port_t *port);
/**
* Free an existing event channel.
*
* \param dev The device which allocated this event channel.
* \param port The port identifier for the event channel to free.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* \note On error, \a dev will be switched to the XenbusStateClosing
* state and the returned error is saved in the per-device error node
* for \a dev in the XenStore.
*/
int xenbus_free_evtchn(device_t dev, evtchn_port_t port);
/**
* Record the given errno, along with the given, printf-style, formatted
* message in dev's device specific error node in the XenStore.
*
* \param dev The device which encountered the error.
* \param err The errno value corresponding to the error.
* \param fmt Printf format string followed by a variable number of
* printf arguments.
*/
void xenbus_dev_error(device_t dev, int err, const char *fmt, ...)
__attribute__((format(printf, 3, 4)));
/**
* va_list version of xenbus_dev_error().
*
* \param dev The device which encountered the error.
* \param err The errno value corresponding to the error.
* \param fmt Printf format string.
* \param ap Va_list of printf arguments.
*/
void xenbus_dev_verror(device_t dev, int err, const char *fmt, va_list ap)
__attribute__((format(printf, 3, 0)));
/**
* Equivalent to xenbus_dev_error(), followed by
* xenbus_set_state(dev, XenbusStateClosing).
*
* \param dev The device which encountered the error.
* \param err The errno value corresponding to the error.
* \param fmt Printf format string followed by a variable number of
* printf arguments.
*/
void xenbus_dev_fatal(device_t dev, int err, const char *fmt, ...)
__attribute__((format(printf, 3, 4)));
/**
* va_list version of xenbus_dev_fatal().
*
* \param dev The device which encountered the error.
* \param err The errno value corresponding to the error.
* \param fmt Printf format string.
* \param ap Va_list of printf arguments.
*/
void xenbus_dev_vfatal(device_t dev, int err, const char *fmt, va_list ap)
__attribute__((format(printf, 3, 0)));
/**
* Convert a member of the xenbus_state enum into an ASCII string.
*
* \param state The XenBus state to lookup.
*
* \return A string representing state or, for unrecognized states,
* the string "Unknown".
*/
const char *xenbus_strstate(enum xenbus_state state);
/**
* Return the value of a XenBus device's "online" node within the XenStore.
*
* \param dev The XenBus device to query.
*
* \return The value of the "online" node for the device. If the node
* does not exist, 0 (offline) is returned.
*/
int xenbus_dev_is_online(device_t dev);
int xenbus_frontend_closed(device_t dev);
#endif /* _XEN_XENBUS_XENBUSVAR_H */

sys/xen/xenstore/xenstore.c: new file, 1654 lines (diff suppressed because it is too large).

sys/xen/xenstore/xenstore_dev.c:

@@ -1,8 +1,8 @@
/*
* xenbus_dev.c
* xenstore_dev.c
*
* Driver giving user-space access to the kernel's xenbus connection
* to xenstore.
* Driver giving user-space access to the kernel's connection to the
* XenStore service.
*
* Copyright (c) 2005, Christian Limpach
* Copyright (c) 2005, Rusty Russell, IBM Corporation
@@ -45,18 +45,19 @@ __FBSDID("$FreeBSD$");
#include <sys/conf.h>
#include <machine/xen/xen-os.h>
#include <xen/hypervisor.h>
#include <xen/xenbus/xenbusvar.h>
#include <xen/xenbus/xenbus_comms.h>
struct xenbus_dev_transaction {
LIST_ENTRY(xenbus_dev_transaction) list;
struct xenbus_transaction handle;
#include <xen/hypervisor.h>
#include <xen/xenstore/xenstorevar.h>
#include <xen/xenstore/xenstore_internal.h>
struct xs_dev_transaction {
LIST_ENTRY(xs_dev_transaction) list;
struct xs_transaction handle;
};
struct xenbus_dev_data {
struct xs_dev_data {
/* In-progress transaction. */
LIST_HEAD(xdd_list_head, xenbus_dev_transaction) transactions;
LIST_HEAD(xdd_list_head, xs_dev_transaction) transactions;
/* Partial request. */
unsigned int len;
@@ -72,13 +73,13 @@ struct xenbus_dev_data {
};
static int
xenbus_dev_read(struct cdev *dev, struct uio *uio, int ioflag)
xs_dev_read(struct cdev *dev, struct uio *uio, int ioflag)
{
int error;
struct xenbus_dev_data *u = dev->si_drv1;
struct xs_dev_data *u = dev->si_drv1;
while (u->read_prod == u->read_cons) {
error = tsleep(u, PCATCH, "xbdread", hz/10);
error = tsleep(u, PCATCH, "xsdread", hz/10);
if (error && error != EWOULDBLOCK)
return (error);
}
@@ -96,7 +97,7 @@ xenbus_dev_read(struct cdev *dev, struct uio *uio, int ioflag)
}
static void
queue_reply(struct xenbus_dev_data *u, char *data, unsigned int len)
xs_queue_reply(struct xs_dev_data *u, char *data, unsigned int len)
{
int i;
@@ -110,11 +111,11 @@ queue_reply(struct xenbus_dev_data *u, char *data, unsigned int len)
}
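xs_dev_read() and xs_queue_reply() cooperate through a simple producer/consumer byte queue indexed by read_prod and read_cons. A self-contained sketch of that pattern (the buffer size and all names here are hypothetical; the real driver's buffer, wakeups, and uiomove-based copy differ):

```c
#define TOY_RING_SZ 64		/* hypothetical; not the driver's size */

struct toy_ring {
	char		buf[TOY_RING_SZ];
	unsigned int	prod;	/* writer index (xs_queue_reply side) */
	unsigned int	cons;	/* reader index (xs_dev_read side) */
};

/* Producer: append reply bytes, as xs_queue_reply() does. */
static void
toy_ring_put(struct toy_ring *r, const char *data, unsigned int len)
{
	unsigned int i;

	for (i = 0; i < len; i++)
		r->buf[r->prod++ % TOY_RING_SZ] = data[i];
}

/* Consumer: drain up to max queued bytes, as xs_dev_read() does. */
static unsigned int
toy_ring_get(struct toy_ring *r, char *out, unsigned int max)
{
	unsigned int n = 0;

	while (r->cons != r->prod && n < max)
		out[n++] = r->buf[r->cons++ % TOY_RING_SZ];
	return (n);
}
```
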
static int
xenbus_dev_write(struct cdev *dev, struct uio *uio, int ioflag)
xs_dev_write(struct cdev *dev, struct uio *uio, int ioflag)
{
int error;
struct xenbus_dev_data *u = dev->si_drv1;
struct xenbus_dev_transaction *trans;
struct xs_dev_data *u = dev->si_drv1;
struct xs_dev_transaction *trans;
void *reply;
int len = uio->uio_resid;
@@ -141,10 +142,10 @@ xenbus_dev_write(struct cdev *dev, struct uio *uio, int ioflag)
case XS_MKDIR:
case XS_RM:
case XS_SET_PERMS:
error = xenbus_dev_request_and_reply(&u->u.msg, &reply);
error = xs_dev_request_and_reply(&u->u.msg, &reply);
if (!error) {
if (u->u.msg.type == XS_TRANSACTION_START) {
trans = malloc(sizeof(*trans), M_DEVBUF,
trans = malloc(sizeof(*trans), M_XENSTORE,
M_WAITOK);
trans->handle.id = strtoul(reply, NULL, 0);
LIST_INSERT_HEAD(&u->transactions, trans, list);
@@ -156,11 +157,11 @@ xenbus_dev_write(struct cdev *dev, struct uio *uio, int ioflag)
BUG_ON(&trans->list == &u->transactions);
#endif
LIST_REMOVE(trans, list);
free(trans, M_DEVBUF);
free(trans, M_XENSTORE);
}
queue_reply(u, (char *)&u->u.msg, sizeof(u->u.msg));
queue_reply(u, (char *)reply, u->u.msg.len);
free(reply, M_DEVBUF);
xs_queue_reply(u, (char *)&u->u.msg, sizeof(u->u.msg));
xs_queue_reply(u, (char *)reply, u->u.msg.len);
free(reply, M_XENSTORE);
}
break;
@@ -176,16 +177,14 @@ xenbus_dev_write(struct cdev *dev, struct uio *uio, int ioflag)
}
static int
xenbus_dev_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
xs_dev_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
{
struct xenbus_dev_data *u;
struct xs_dev_data *u;
if (xen_store_evtchn == 0)
return (ENOENT);
#if 0 /* XXX figure out if equiv needed */
nonseekable_open(inode, filp);
#endif
u = malloc(sizeof(*u), M_DEVBUF, M_WAITOK|M_ZERO);
u = malloc(sizeof(*u), M_XENSTORE, M_WAITOK|M_ZERO);
LIST_INIT(&u->transactions);
dev->si_drv1 = u;
@@ -193,37 +192,33 @@ xenbus_dev_open(struct cdev *dev, int oflags, int devtype, struct thread *td)
}
static int
xenbus_dev_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
xs_dev_close(struct cdev *dev, int fflag, int devtype, struct thread *td)
{
struct xenbus_dev_data *u = dev->si_drv1;
struct xenbus_dev_transaction *trans, *tmp;
struct xs_dev_data *u = dev->si_drv1;
struct xs_dev_transaction *trans, *tmp;
LIST_FOREACH_SAFE(trans, &u->transactions, list, tmp) {
xenbus_transaction_end(trans->handle, 1);
xs_transaction_end(trans->handle, 1);
LIST_REMOVE(trans, list);
free(trans, M_DEVBUF);
free(trans, M_XENSTORE);
}
free(u, M_DEVBUF);
free(u, M_XENSTORE);
return (0);
}
static struct cdevsw xenbus_dev_cdevsw = {
static struct cdevsw xs_dev_cdevsw = {
.d_version = D_VERSION,
.d_read = xenbus_dev_read,
.d_write = xenbus_dev_write,
.d_open = xenbus_dev_open,
.d_close = xenbus_dev_close,
.d_name = "xenbus_dev",
.d_read = xs_dev_read,
.d_write = xs_dev_write,
.d_open = xs_dev_open,
.d_close = xs_dev_close,
.d_name = "xs_dev",
};
static int
xenbus_dev_sysinit(void)
void
xs_dev_init(void)
{
make_dev(&xenbus_dev_cdevsw, 0, UID_ROOT, GID_WHEEL, 0400,
"xen/xenbus");
return (0);
make_dev(&xs_dev_cdevsw, 0, UID_ROOT, GID_WHEEL, 0400,
"xen/xenstore");
}
SYSINIT(xenbus_dev_sysinit, SI_SUB_DRIVERS, SI_ORDER_MIDDLE,
xenbus_dev_sysinit, NULL);

sys/xen/xenstore/xenstore_internal.h (new file):

@@ -0,0 +1,39 @@
/*-
* Core definitions and data structures shareable across OS platforms.
*
* Copyright (c) 2010 Spectra Logic Corporation
* All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions, and the following disclaimer,
* without modification.
* 2. Redistributions in binary form must reproduce at minimum a disclaimer
* substantially similar to the "NO WARRANTY" disclaimer below
* ("Disclaimer") and any redistribution must be conditioned upon
* including a substantially similar Disclaimer requirement for further
* binary redistribution.
*
* NO WARRANTY
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTIBILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* HOLDERS OR CONTRIBUTORS BE LIABLE FOR SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING
* IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
* POSSIBILITY OF SUCH DAMAGES.
*
* $FreeBSD$
*/
/* Initialize support for userspace access to the XenStore. */
void xs_dev_init(void);
/* Used by the XenStore character device to borrow kernel's store connection. */
int xs_dev_request_and_reply(struct xsd_sockmsg *msg, void **result);

sys/xen/xenstore/xenstorevar.h (new file):

@@ -0,0 +1,338 @@
/******************************************************************************
* xenstorevar.h
*
* Method declarations and structures for accessing the XenStore.
*
* Copyright (C) 2005 Rusty Russell, IBM Corporation
* Copyright (C) 2005 XenSource Ltd.
* Copyright (C) 2009,2010 Spectra Logic Corporation
*
* This file may be distributed separately from the Linux kernel, or
* incorporated into other software packages, subject to the following license:
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this source file (the "Software"), to deal in the Software without
* restriction, including without limitation the rights to use, copy, modify,
* merge, publish, distribute, sublicense, and/or sell copies of the Software,
* and to permit persons to whom the Software is furnished to do so, subject to
* the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
* IN THE SOFTWARE.
*
* $FreeBSD$
*/
#ifndef _XEN_XENSTORE_XENSTOREVAR_H
#define _XEN_XENSTORE_XENSTOREVAR_H
#include <sys/queue.h>
#include <sys/bus.h>
#include <sys/eventhandler.h>
#include <sys/malloc.h>
#include <sys/sbuf.h>
#include <machine/stdarg.h>
#include <machine/xen/xen-os.h>
#include <xen/interface/grant_table.h>
#include <xen/interface/io/xenbus.h>
#include <xen/interface/io/xs_wire.h>
#include "xenbus_if.h"
/* XenStore allocations including XenStore data returned to clients. */
MALLOC_DECLARE(M_XENSTORE);
struct xenstore_domain_interface;
struct xs_watch;
extern struct xenstore_domain_interface *xen_store;
typedef void (xs_watch_cb_t)(struct xs_watch *,
const char **vec, unsigned int len);
/* Register callback to watch subtree (node) in the XenStore. */
struct xs_watch
{
LIST_ENTRY(xs_watch) list;
/* Path being watched. */
char *node;
/* Callback (executed in a process context with no locks held). */
xs_watch_cb_t *callback;
};
LIST_HEAD(xs_watch_list, xs_watch);
typedef int (*xs_event_handler_t)(void *);
struct xs_transaction
{
uint32_t id;
};
#define XST_NIL ((struct xs_transaction) { 0 })
/**
* Fetch the contents of a directory in the XenStore.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the path to read.
* \param node The basename of the path to read.
* \param num The returned number of directory entries.
* \param result An array of directory entry strings.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* \note The results buffer is malloced and should be free'd by the
* caller with 'free(*result, M_XENSTORE)'.
*/
int xs_directory(struct xs_transaction t, const char *dir,
const char *node, unsigned int *num, const char ***result);
/**
* Determine if a path exists in the XenStore.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the path to read.
* \param node The basename of the path to read.
*
* \retval 1 The path exists.
* \retval 0 The path does not exist or an error occurred attempting
* to make that determination.
*/
int xs_exists(struct xs_transaction t, const char *dir, const char *node);
/**
* Get the contents of a single "file". Returns the contents in
* *result which should be freed with free(*result, M_XENSTORE) after
* use. The length of the value in bytes is returned in *len.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the file to read.
* \param node The basename of the file to read.
* \param len The amount of data read.
* \param result The returned contents from this file.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*
* \note The results buffer is malloced and should be free'd by the
* caller with 'free(*result, M_XENSTORE)'.
*/
int xs_read(struct xs_transaction t, const char *dir,
const char *node, unsigned int *len, void **result);
/**
* Write to a single file.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the file to write.
* \param node The basename of the file to write.
* \param string The NUL terminated string of data to write.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_write(struct xs_transaction t, const char *dir,
const char *node, const char *string);
/**
* Create a new directory.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the directory to create.
* \param node The basename of the directory to create.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_mkdir(struct xs_transaction t, const char *dir,
const char *node);
/**
* Remove a file or directory (directories must be empty).
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the directory to remove.
* \param node The basename of the directory to remove.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_rm(struct xs_transaction t, const char *dir, const char *node);
/**
* Destroy a tree of files rooted at dir/node.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the directory to remove.
* \param node The basename of the directory to remove.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_rm_tree(struct xs_transaction t, const char *dir,
const char *node);
/**
* Start a transaction.
*
* Changes by others will not be seen during the lifetime of this
* transaction, and changes will not be visible to others until it
* is committed (xs_transaction_end).
*
* \param t The returned transaction.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_transaction_start(struct xs_transaction *t);
/**
* End a transaction.
*
* \param t The transaction to end/commit.
* \param abort If non-zero, the transaction is discarded
* instead of committed.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_transaction_end(struct xs_transaction t, int abort);
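When another writer races with a transaction, the commit in xs_transaction_end() fails with EAGAIN and the caller is expected to restart the whole read/modify/write sequence. A userland sketch of that retry loop with stubbed start/end routines (the stubs, which simulate exactly one conflicting commit, and all toy_* names are illustrative only):

```c
#include <errno.h>
#include <stdint.h>

struct toy_xact {
	uint32_t id;
};

static int toy_commits;		/* counts commit attempts */

static int
toy_xact_start(struct toy_xact *t)
{
	t->id = 1;	/* a real id comes from XS_TRANSACTION_START */
	return (0);
}

static int
toy_xact_end(struct toy_xact t, int abort)
{
	(void)t;
	if (abort)
		return (0);
	/* Simulate a conflicting writer on the first commit only. */
	return (++toy_commits < 2 ? EAGAIN : 0);
}

/*
 * The canonical usage pattern: retry the entire transaction until the
 * commit succeeds or fails with a hard error.
 */
int
toy_update(void)
{
	struct toy_xact t;
	int error;

	do {
		error = toy_xact_start(&t);
		if (error != 0)
			break;
		/* ... xs_read()/xs_write() calls covered by t ... */
		error = toy_xact_end(t, /*abort*/0);
	} while (error == EAGAIN);
	return (error);
}
```
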
/**
* Single file read and scanf parsing of the result.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the path to read.
* \param node The basename of the path to read.
* \param scancountp The number of input values assigned (i.e. the result
* of scanf).
* \param fmt Scanf format string followed by a variable number of
* scanf input arguments.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure.
*/
int xs_scanf(struct xs_transaction t,
const char *dir, const char *node, int *scancountp, const char *fmt, ...)
__attribute__((format(scanf, 5, 6)));
/**
* Printf formatted write to a XenStore file.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the path to write.
* \param node The basename of the path to write.
* \param fmt Printf format string followed by a variable number of
* printf arguments.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of write failure.
*/
int xs_printf(struct xs_transaction t, const char *dir,
const char *node, const char *fmt, ...)
__attribute__((format(printf, 4, 5)));
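As an illustrative sketch (not part of this header), xs_scanf() and
xs_printf() pair naturally for a read-modify-write of a single value. The
node path and "state" key here are assumptions, and XBT_NIL denotes an
untransacted request as in the xs_gather() example above:

```c
/*
 * Sketch only: read an unsigned "state" value under the given node
 * path and write back the next state.  Without a transaction, the
 * read and write are not atomic with respect to other writers.
 */
static int
demo_advance_state(const char *node_path)
{
	unsigned int state;
	int scanned;
	int error;

	error = xs_scanf(XBT_NIL, node_path, "state", &scanned,
	    "%u", &state);
	if (error != 0)
		return (error);

	return (xs_printf(XBT_NIL, node_path, "state", "%u", state + 1));
}
```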
/**
* va_list version of xs_printf().
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the path to write.
* \param node The basename of the path to write.
* \param fmt Printf format string.
* \param ap Va_list of printf arguments.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of write failure.
*/
int xs_vprintf(struct xs_transaction t, const char *dir,
const char *node, const char *fmt, va_list ap);
/**
* Multi-file read within a single directory and scanf parsing of
* the results.
*
* \param t The XenStore transaction covering this request.
* \param dir The dirname of the paths to read.
* \param ... A variable number of argument triples specifying
* the file name, scanf-style format string, and
* output variable (pointer to storage of the results).
* The list of triples must be terminated
* with a final NULL argument. A NULL format string
* will cause the entire contents of the given file
* to be assigned as a NUL terminated, M_XENSTORE heap
* backed, string to the output parameter of that tuple.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of read failure.
*
* Example:
* char protocol_abi[64];
* uint32_t ring_ref;
* char *dev_type;
* int error;
*
* error = xs_gather(XBT_NIL, xenbus_get_node(dev),
* "ring-ref", "%" PRIu32, &ring_ref,
* "protocol", "%63s", protocol_abi,
* "device-type", NULL, &dev_type,
* NULL);
*
* ...
*
* free(dev_type, M_XENSTORE);
*/
int xs_gather(struct xs_transaction t, const char *dir, ...);
/**
* Register a XenStore watch.
*
* XenStore watches allow a client to be notified via a callback (embedded
* within the watch object) of changes to an object in the XenStore.
*
* \param watch An xs_watch struct with its node and callback fields
* properly initialized.
*
* \return On success, 0. Otherwise an errno value indicating the
* type of failure. EEXIST errors from the XenStore
* are suppressed, allowing multiple, physically different,
* xs_watch objects to watch the same path in the XenStore.
*/
int xs_register_watch(struct xs_watch *watch);
/**
* Unregister a XenStore watch.
*
* \param watch An xs_watch object previously used in a successful call
* to xs_register_watch().
*
* The xs_watch object's node field is not altered by this call.
* It is the caller's responsibility to properly dispose of both the
* watch object and the data pointed to by watch->node.
*/
void xs_unregister_watch(struct xs_watch *watch);
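As an illustrative sketch (not part of this header), a client initializes an
xs_watch object's node and callback fields and registers it. The callback
signature is assumed to match xs_watch_cb_t as declared elsewhere in this
header, and XS_WATCH_PATH is assumed to come from the Xen interface headers:

```c
/*
 * Sketch only: watch a XenStore path and log each trigger.  The
 * callback runs with the changed path in vec[XS_WATCH_PATH]; both
 * the signature and the index are assumptions for this example.
 */
static void
demo_watch_cb(struct xs_watch *watch, const char **vec, unsigned int len)
{
	printf("XenStore path %s changed\n", vec[XS_WATCH_PATH]);
}

static struct xs_watch demo_watch;

static int
demo_watch_init(char *path)
{
	/* The caller owns "path" and must keep it valid while watched. */
	demo_watch.node = path;
	demo_watch.callback = demo_watch_cb;
	return (xs_register_watch(&demo_watch));
}
```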
/**
* Allocate and return an sbuf containing the XenStore path string
* <dir>/<name>. If name is the empty string, the returned sbuf contains
* the path string <dir>.
*
* \param dir The NUL terminated directory prefix for the new path.
* \param name The NUL terminated basename for the new path.
*
* \return A buffer containing the joined path.
*/
struct sbuf *xs_join(const char *dir, const char *name);
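As an illustrative sketch (not part of this header), the returned sbuf is used
via the standard sbuf(9) accessors; the "backend/vbd/0" and "768" path
components are made-up example values, and the caller is assumed to release
the buffer when done:

```c
/*
 * Sketch only: build the path "backend/vbd/0/768" and release the
 * buffer after use.
 */
struct sbuf *sb;

sb = xs_join("backend/vbd/0", "768");
/* ... use sbuf_data(sb) as a XenStore path ... */
sbuf_delete(sb);
```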
#endif /* _XEN_XENSTORE_XENSTOREVAR_H */