Application servers often seek
to create homogeneity, allowing
predictability to be built in –
workload optimization is the term in
vogue. If architects have the luxury
of partitioning a data center into
servers each performing dedicated
tasks, it may be possible to focus
and optimize SSD operations.
Read-intensive use cases are
typical of presentation platforms
such as web servers, social media
hosts, search engines and content-
delivery networks. Data is written
once, tagged and categorized,
updated infrequently if ever, and
read on-demand by millions of
users. Planar NAND has excellent
read performance, and limiting
the number of writes extends the
longevity of an SSD using it.
Write-intensive use cases show
up in platforms that aggregate
transactions. Speed is often
critical, such as in real-time
financial instrument trading where
milliseconds can mean millions of
dollars. Other examples are email
servers, gathering data from sensor
networks, ERP systems and data
warehousing. Flash writes run
slower than reads because of steps
to program the state of individual
cells, and subject them to wear
from voltages applied. With larger,
more durable ash cell construction
and streamlined programming
algorithms, V-NAND-based SSDs
oer better write performance and
greater endurance.
In reality, few enterprise
applications operating on high-
value data are overwhelmingly
biased one way or the other. Overall
responsiveness is determined
by how an SSD holds up when
subjected to a mix of reads and
writes in a random workload.
Applications often run on a virtual
machine, consolidating a variety
of requesters and tasks on one
physical server, further increasing
the probability of
mixed loads.
Client SSDs that look good in
simplistic synthetic benchmarking
with partitioned read or write
loads often fall apart in real-world
scenarios of mixed loading. Data
center SSDs are designed to
withstand mixed loading, delivering
not only low real-time latency
but also a high performance
consistency gure.
WHERE REAL-WORLD MEETS MIXED LOADS

KEY BENCHMARKS FOR MIXED LOADING
Synthetic benchmarks characterizing
SSDs under mixed loading have three
common parameters, with two less
obvious considerations:
Read/write requests would nominally
be 50/50; many test results focus
on 70/30 as representative of OLTP
environments, while JESD219 calls for
40/60. Simply averaging independent
read and write results can lead to
incorrect conclusions.
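The caution about averaging can be made concrete. In a simple serialized-service model, a drive's mixed-load rate is the weighted harmonic mean of its read-only and write-only IOPS, not their arithmetic average. A sketch with hypothetical figures (illustrative only, not from any datasheet):

```python
# Hypothetical read-only and write-only results (illustrative only).
read_iops = 90_000
write_iops = 30_000

# Naive arithmetic average of the two independent scores.
naive = 0.5 * read_iops + 0.5 * write_iops        # 60,000

# Simple model: each request occupies its own service time, so a
# 50/50 mix runs at the weighted harmonic mean of the two rates.
mixed = 1 / (0.5 / read_iops + 0.5 / write_iops)  # 45,000

print(f"naive average {naive:,.0f} IOPS vs modeled mix {mixed:,.0f} IOPS")
```

The model ignores controller concurrency and garbage collection, so real mixed-load numbers can diverge further; the point is only that the arithmetic average overstates the mix.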
Random transfer size is often cited at
4KB or 8KB, again typical of OLTP and
producing higher IOPS figures. Bigger
block sizes can increase throughput,
possibly overstating performance for
most applications. JESD219 places
emphasis on 4KB.
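The inflation is simple arithmetic: throughput equals IOPS times transfer size, so a large block size can yield an impressive MB/s figure from a modest request rate. A sketch with assumed figures (illustrative only):

```python
KIB = 1024

# Assumed figures for illustration, not from any datasheet.
small_mbps = 100_000 * 4 * KIB / 1e6   # 4 KiB blocks at 100k IOPS
large_mbps = 6_000 * 128 * KIB / 1e6   # 128 KiB blocks at 6k IOPS

# The big-block run "wins" on MB/s even though it services
# 94% fewer requests per second.
print(f"{small_mbps:.1f} MB/s at 4 KiB vs {large_mbps:.1f} MB/s at 128 KiB")
```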
Queue depth (QD) indicates how
many pending requests can be kicked
off, waiting for service. Increasing QD
helps IOPS figures, but such depths may
never be realized in actual use. Lower QDs
expose potential latency issues;
higher QDs can smooth out responses
to requesters in a multi-tenant
environment.
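Why a deeper queue inflates IOPS follows from Little's law: outstanding requests = throughput x latency, so at a fixed per-request latency the sustained rate scales with QD. A sketch with an assumed 200 microsecond average service latency:

```python
latency_s = 200e-6  # assumed average per-request latency (200 us)

for qd in (1, 4, 32):
    # Little's law: concurrency = throughput x latency,
    # so sustained IOPS = QD / latency while the queue stays full.
    iops = qd / latency_s
    print(f"QD {qd:>2}: {iops:,.0f} IOPS")
```

Real drives saturate and their latency grows as the queue deepens, which is exactly why a high-QD headline number may never be realized by an application issuing only a few requests at a time.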
Drive test area should not be
restricted to a small LBA (logical
block address) range, amounting to
artificial overprovisioning. SSDs should be
preconditioned and accesses directed
across the entire drive to engage
garbage collection routines.
Entropy, or randomness of patterns,
should be set at 100% to nullify data
reduction techniques and expose write
amplication issues. Compression and
other algorithms may reduce writes,
but performance gains are oset if
real-world data is not as uniform
as expected.
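The effect of entropy on data reduction is easy to demonstrate: a fully random 4KB buffer is incompressible, while a low-entropy buffer collapses to a handful of bytes that a drive with transparent compression could exploit. A sketch using Python's zlib as a stand-in for a drive's reduction engine:

```python
import os
import zlib

BLOCK = 4096

random_block = os.urandom(BLOCK)  # 100% entropy
zero_block = bytes(BLOCK)         # trivially reducible pattern

# Random data does not shrink (format overhead can even grow it
# slightly); uniform data collapses to almost nothing.
print(len(zlib.compress(random_block)))  # roughly 4096 or a bit more
print(len(zlib.compress(zero_block)))    # a few dozen bytes
```

Testing at 100% entropy removes this variable, so the measured write amplification reflects the drive itself rather than how friendly the test data happened to be.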