mutilate/cmdline.ggo

package "mutilate"
version "0.1"
usage "mutilate -s server[:port] [options]"
description "\"High-performance\" memcached benchmarking tool"
args "-c cc --show-required -C --default-optional -l"
option "verbose" v "Verbosity. Repeat for more verbosity." multiple
option "quiet" - "Disable log messages."
text "\nBasic options:"
option "server" s "Memcached server hostname[:port]. \
Repeat to specify multiple servers." string multiple
option "unix_socket" - "Use UNIX socket instead of TCP."
option "binary" - "Use binary memcached protocol instead of ASCII."
option "redis" - "Use Redis RESP protocol instead of memcached."
option "getset" - "Use getset mode: first issue a GET, and if the response \
is a MISS, issue a SET for that key with a value drawn from the \
value-size distribution."
option "getsetorset" - "Use getset mode and allow for direct writes (with optype == 2)."
option "successful" - "Only record latency and throughput stats for successful queries."
option "prefix" - "Prefix all keys with a string (helps with multi-tenant eval)." string
option "delete90" - "Delete 90 percent of keys halfway through the \
workload, used to model the Rumble et al. USENIX FAST '14 \
workloads. MUST BE IN GETSET MODE and have a set number of queries."
option "assoc" - "Create hash tables by truncating the last b bytes of the \
key. The first n-b bytes become the Redis key in place of the original \
(key,value); the value is a hash table, and we access field b to get \
the value. This essentially makes Redis an n-way associative cache. \
Only works in Redis mode. For small key sizes we just use the normal \
(key,value) store with no hash table." int default="4"
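
# Illustrative example (not part of the generated help text): a sketch of a
# getset-mode run against a local Redis instance using the n-way-associative
# layout described above. The address, record count, and query count are
# placeholder values.
#
#   mutilate -s 127.0.0.1:6379 --redis --getset --assoc=4 -r 100000 -N 1000000
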
option "qps" q "Target aggregate QPS. 0 = peak QPS." int default="0"
option "time" t "Maximum time to run (seconds)." int default="5"
option "apps" - "Number of apps; should equal total connections." int default="1"
option "read_file" - "Read keys from file." string default=""
option "twitter_trace" - "Use Twitter memcached trace format from file." int default="0"
option "keysize" K "Length of memcached keys (distribution)."
string default="30"
option "valuesize" V "Length of memcached values (distribution)."
string default="200"
option "records" r "Number of memcached records to use. \
If multiple memcached servers are given, this number is divided \
by the number of servers." int default="10000"
option "misswindow" m "Window for recording misses, used to find \
steady state. No window by default, which gives \
summary stats over the whole run." int default="0"
option "queries" N "Number of queries to make. 0 is unlimited (default). \
If multiple memcached servers are given, this number is divided \
by the number of servers." int default="0"
option "update" u "Ratio of set:get commands." float default="0.0"
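
# Illustrative example: a hypothetical basic benchmark combining the options
# above -- 10,000 records with 30-byte keys and 200-byte values, a 10% SET
# mix, throttled to 1000 QPS for 30 seconds. The hostname is a placeholder.
#
#   mutilate -s memcached-host:11211 -K 30 -V 200 -r 10000 -u 0.1 -q 1000 -t 30
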
text "\nAdvanced options:"
option "username" U "Username to use for SASL authentication." string
option "password" P "Password to use for SASL authentication." string
option "threads" T "Number of threads to spawn." int default="1"
option "affinity" - "Set CPU affinity for threads, round-robin."
option "connections" c "Connections to establish per server." int default="1"
option "depth" d "Maximum depth to pipeline requests." int default="1"
option "roundrobin" R "Assign threads to servers in round-robin fashion. \
By default, each thread connects to every server."
option "iadist" i "Inter-arrival distribution (distribution). Note: \
The distribution will automatically be adjusted to match the QPS given \
by --qps." string default="exponential"
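
# Illustrative example: hypothetical runs using the distribution syntax
# described in the text block at the end of this file -- first with explicit
# fixed/normal/exponential specs, then with the hard-coded Facebook "ETC"
# distributions. Hostnames and parameter values are placeholders.
#
#   mutilate -s memcached-host -K 30 -V normal:200,50 -i exponential
#   mutilate -s memcached-host -K fb_key -V fb_value -i fb_ia
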
option "skip" S "Skip transmissions if previous requests are late. This \
harms the long-term QPS average, but reduces spikes in QPS after \
long latency requests."
option "moderate" - "Enforce a minimum delay of ~1/lambda between requests."
option "noload" - "Skip database loading."
option "loadonly" - "Load database and then exit."
option "blocking" B "Use blocking epoll(). May increase latency."
option "no_nodelay" - "Don't use TCP_NODELAY."
option "warmup" w "Warmup time before starting measurement." int
option "wait" W "Time to wait after startup to start measurement." int
option "save" - "Record latency samples to given file." string
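
# Illustrative example: a hypothetical two-step run that loads the database
# once, then benchmarks repeatedly without reloading, recording latency
# samples to a placeholder file name.
#
#   mutilate -s memcached-host --loadonly
#   mutilate -s memcached-host --noload -T 8 -c 4 -w 2 -t 30 --save samples.txt
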
option "search" - "Search for the QPS where N-order statistic < Xus. \
(i.e. --search 95:1000 means find the QPS where 95% of requests are \
faster than 1000us)." string typestr="N:X"
option "scan" - "Scan latency across QPS rates from min to max."
string typestr="min:max:step"
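
# Illustrative example: hypothetical uses of --search and --scan. The first
# finds the QPS at which 95% of requests complete in under 1000us; the second
# sweeps QPS from 10000 to 100000 in steps of 10000. The hostname is a placeholder.
#
#   mutilate -s memcached-host --search 95:1000
#   mutilate -s memcached-host --scan 10000:100000:10000
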
text "\nAgent-mode options:"
option "agentmode" A "Run client in agent mode."
option "agent" a "Enlist remote agent." string typestr="host" multiple
option "agent_port" p "Agent port." string default="5556"
option "lambda_mul" l "Lambda multiplier. Increases share of \
QPS for this client." int default="1"
option "measure_connections" C "Master client connections per server, \
overrides --connections." int
option "measure_qps" Q "Explicitly set master client QPS, \
spread across threads and connections." int
option "measure_depth" D "Set master client connection depth." int
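
# Illustrative example: a sketch of an agent-mode measurement, following the
# note below on the --measure_* options. Hostnames, thread counts, and rates
# are placeholders; each agent is started first, then the master enlists them
# with -a and keeps its own connections lightly loaded (-C 4 -Q 1000 -D 1).
#
#   (on each load generator)  mutilate -A -T 16
#   (on the master)  mutilate -s memcached-host --noload -a agent1 -a agent2 -q 100000 -C 4 -Q 1000 -D 1
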
text "
The --measure_* options aid in taking latency measurements of the
memcached server without incurring significant client-side queuing
delay. --measure_connections allows the master to override the
--connections option. --measure_depth allows the master to operate as
an \"open-loop\" client while other agents continue as regular
closed-loop clients. --measure_qps lets you modulate the QPS at which
the master issues queries, independently of other clients. This theoretically
normalizes the baseline queuing delay you expect to see across a wide
range of --qps values.
"
text "
Some options take a 'distribution' as an argument.
Distributions are specified by <distribution>[:<param1>[,...]].
Parameters are not required. The following distributions are supported:
   [fixed:]<value>              Always generates <value>.
   uniform:<max>                Uniform distribution between 0 and <max>.
   normal:<mean>,<sd>           Normal distribution.
   exponential:<lambda>         Exponential distribution.
   pareto:<loc>,<scale>,<shape> Generalized Pareto distribution.
   gev:<loc>,<scale>,<shape>    Generalized Extreme Value distribution.

   To recreate the Facebook \"ETC\" request stream from [1], the
   following hard-coded distributions are also provided:

   fb_value = a hard-coded discrete and GPareto PDF of value sizes
   fb_key   = \"gev:30.7984,8.20449,0.078688\", key-size distribution
   fb_ia    = \"pareto:0.0,16.0292,0.154971\", inter-arrival time dist.

[1] Berk Atikoglu et al., Workload Analysis of a Large-Scale Key-Value Store,
    SIGMETRICS 2012
"